A Machine Learning Model For Average Fuel Consumption in Heavy Vehicles
CHAPTER 1
INTRODUCTION
Fuel consumption models for vehicles are of interest to manufacturers, regulators, and
consumers. They are needed across all the phases of the vehicle life-cycle. In this paper,
we focus on modeling average fuel consumption for heavy vehicles during the operation
and maintenance phase. In general, techniques used to develop models for fuel
consumption fall under three main categories:
• Physics-based models, which are derived from an in-depth understanding of the physical system. These models describe the dynamics of the components of the vehicle at each time step using detailed mathematical equations [1], [2].
• Machine learning models, which are data-driven and represent an abstract mapping from an input space consisting of a selected set of predictors to an output space that represents the target output, in this case average fuel consumption [3], [4].
• Statistical models, which are also data-driven and establish a mapping between the probability distribution of a selected set of predictors and the target outcome [5], [6].
1.2 Motivation:
Trade-offs among the above techniques are primarily with respect to cost and accuracy
as per the requirements of the intended application.
In this paper, a model that can be easily developed for individual heavy vehicles in a large fleet is proposed. Relying on accurate models of all of the vehicles in a fleet, a fleet manager can optimize route planning for all of the vehicles based on each vehicle's unique predicted fuel consumption, thereby ensuring the route assignments are aligned to minimize overall fleet fuel consumption. These types of fleets exist in various sectors, including road transportation of goods [7], public transportation [3], construction trucks [8] and refuse trucks [9]. For each fleet, the methodology must apply and adapt to many different vehicle technologies (including future ones) and configurations without detailed knowledge of each vehicle's specific physical characteristics and measurements. These requirements make machine learning the technique of choice when taking into consideration the desired accuracy versus the cost of the development and adaptation of an individualized model for each vehicle in the fleet.
1.3 Objective:
The proposed model can be easily developed for individual heavy vehicles in a large fleet. Relying on accurate models of all of the vehicles in a fleet, a fleet manager can optimize route planning for all of the vehicles based on each vehicle's unique predicted fuel consumption, thereby ensuring the route assignments are aligned to minimize overall fleet fuel consumption.
This approach is used in conjunction with seven predictors derived from vehicle speed and
road grade to produce a highly predictive neural network model for average fuel
consumption in heavy vehicles.
Different window sizes are evaluated, and the results show that a 1 km window is able to predict fuel consumption with a 0.91 coefficient of determination and a mean absolute peak-to-peak percent error of less than 4% for routes that include both city and highway duty cycle segments.
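The seven predictors themselves (listed in the conclusion: number of stops, stop time, average moving speed, characteristic acceleration, aerodynamic speed squared, and the changes in kinetic and potential energy) are all derived from vehicle speed and road grade. As a rough illustration only, the sketch below computes a few of these window-level quantities from a sampled speed and elevation trace; the stop threshold, sample period, and the per-unit-mass energy formulas are assumptions for this sketch, and the paper's exact definitions may differ.

import numpy as np

def window_predictors(speed_mps, elevation_m, dt_s=1.0, stop_thresh=0.1):
    # speed_mps: vehicle speed samples (m/s); elevation_m: road elevation (m)
    speed = np.asarray(speed_mps, dtype=float)
    elev = np.asarray(elevation_m, dtype=float)
    stopped = speed < stop_thresh

    # Number of stops: transitions from moving to stopped within the window
    num_stops = int(np.sum(~stopped[:-1] & stopped[1:]))
    stop_time = float(np.sum(stopped) * dt_s)            # seconds at rest
    moving = speed[~stopped]
    avg_moving_speed = float(moving.mean()) if moving.size else 0.0

    # Per-unit-mass energy changes over the window (assumed formulas):
    # kinetic 0.5*(v_f^2 - v_i^2), potential g*(h_f - h_i)
    dKE = 0.5 * (speed[-1] ** 2 - speed[0] ** 2)
    dPE = 9.81 * (elev[-1] - elev[0])
    return num_stops, stop_time, avg_moving_speed, dKE, dPE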
As mentioned above, Artificial Neural Networks (ANNs) are often used to develop digital models for complex systems.
The models proposed in [15] highlight some of the difficulties faced by machine learning
models when the input and output have different domains.
In this study, the input is aggregated in the time domain over 10-minute intervals and the output is the fuel consumption over the distance traveled during the same time period.
The complex system is represented by a transfer function F(p) = o, where F(·) represents
the system, p refers to the input predictors and o is the response of the system or the
output.
The ANNs used in this paper are Feed Forward Neural Networks (FNN).
Training is an iterative process and can be performed using multiple approaches, including particle swarm optimization [20] and back propagation. Other approaches will be considered in future work in order to evaluate their ability to improve the model's predictive accuracy.
Each iteration in the training selects a pair of (input, output) features from Ftr at random and updates the weights in the network. This is done by calculating the error between the actual output value and the value predicted by the model.
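As a minimal sketch of one such training iteration (not the paper's exact procedure), the snippet below draws a random (input, output) pair from a hypothetical training set Ftr, computes the error between the predicted and actual output, and back-propagates it through a single-hidden-layer feed-forward network; the network shape, data, and learning rate are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training set Ftr: 100 examples, 7 predictors -> 1 output
Ftr_inputs = rng.normal(size=(100, 7))
Ftr_outputs = rng.normal(size=(100, 1))

W1 = rng.normal(scale=0.1, size=(7, 16))   # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(16, 1))   # hidden -> output weights
lr = 0.01                                  # learning rate (assumed)

# One training iteration: pick a random (input, output) pair ...
i = rng.integers(len(Ftr_inputs))
p, o = Ftr_inputs[i:i + 1], Ftr_outputs[i:i + 1]

h = np.tanh(p @ W1)          # hidden activations
o_hat = h @ W2               # model prediction F(p)
err = o_hat - o              # error between predicted and actual output

# ... and update the weights by back-propagating that error
W2 -= lr * h.T @ err
W1 -= lr * p.T @ ((err @ W2.T) * (1 - h ** 2))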
• Data is collected at a rate that is proportional to its impact on the outcome. When the
input space is sampled with respect to time, the amount of data collected from a vehicle
at a stop is the same as the amount of data collected when the vehicle is moving.
• The predictors in the model are able to capture the impact of both the duty cycle and the environment on the average fuel consumption of the vehicle (e.g., the number of stops in urban traffic over a given distance).
• Data from raw sensors can be aggregated on-board into a few predictors with lower storage and transmission bandwidth requirements. Given the increase in the computational capabilities of new vehicles, data summarization is best performed on-board, near the source of the data.
CHAPTER 2
TECHNOLOGIES LEARNT
What is Python :-
Below are some facts about Python.
Python is currently the most widely used multi-purpose, high-level programming language.
Advantages of Python :-
1. Extensive Libraries
Python ships with an extensive library containing code for various purposes like regular expressions, documentation generation, unit testing, web browsers, threading, databases, CGI, email, image manipulation, and more. So, we don't have to write the complete code for that manually.
2. Extensible
As we have seen earlier, Python can be extended to other languages. You can write some of your code in languages like C++ or C. This comes in handy, especially in performance-critical parts of projects.
3. Embeddable
Complementary to extensibility, Python is embeddable as well. You can put your Python code in the source code of a different language, like C++. This lets us add scripting capabilities to our code in the other language.
4. IoT Opportunities
Since Python forms the basis of new platforms like the Raspberry Pi, its future looks bright for the Internet of Things. This is a way to connect the language with the real world.
5. Simple and Easy
When working with Java, you may have to create a whole class to print 'Hello World'. But in Python, just a print statement will do. It is also quite easy to learn, understand, and code. This is why, when people pick up Python, they may have a hard time adjusting to other, more verbose languages like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading English. This is the
reason why it is so easy to learn, understand, and code. It also does not need curly braces to define
blocks, and indentation is mandatory. This further aids the readability of the code.
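For instance, a short illustrative snippet showing how indentation alone defines blocks in Python, with no braces needed:

def classify(speed):
    # The indented lines form the function body; indentation is the block syntax
    if speed == 0:
        return "stopped"
    else:
        return "moving"

print(classify(0))   # prints: stopped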
8. Free and Open-Source
Like we said earlier, Python is freely available. But not only can you download Python for free, you can also download its source code, make changes to it, and even distribute it. It downloads with an extensive collection of libraries to help you with your tasks.
10. Portable
When you code your project in a language like C++, you may need to make some changes to it if you want to run it on another platform. But it isn't the same with Python. Here, you need to code only once, and you can run it anywhere. This is called Write Once Run Anywhere (WORA). However, you need to be careful enough not to include any system-dependent features.
11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are executed one by
one, debugging is easier than in compiled languages.
1. Less Coding
Almost all tasks done in Python require less coding than the same tasks done in other languages. Python also has awesome standard library support, so you don't have to search for third-party libraries to get your job done. This is the reason many people suggest learning Python to beginners.
2. Affordable
Python is free, therefore individuals, small companies, or big organizations can leverage the freely available resources to build applications. Python is popular and widely used, so it gives you better community support.
The 2019 Github annual survey showed us that Python has overtaken Java in the most popular
programming language category.
Python code can run on any machine whether it is Linux, Mac or Windows. Programmers need to
learn different languages for different jobs but with Python, you can professionally build web apps,
perform data analysis and machine learning, automate things, do web scraping and also build games
and powerful visualizations. It is an all-rounder programming language.
Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you choose it, you should be
aware of its consequences as well. Let’s now see the downsides of choosing Python over another
language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, this often results in slow execution. This, however, isn't a problem unless speed is a focal point for the project. In other words, unless high speed is a requirement, the benefits offered by Python are enough to outweigh its speed limitations.
2. Weak in Mobile Computing and Browsers
While it serves as an excellent server-side language, Python is rarely seen on the client side. Besides that, it is rarely used to implement smartphone-based applications; one such rare application is called Carbonnelle. The reason Python is not popular in the browser, despite the existence of Brython, is that it isn't that secure.
3. Design Restrictions
As you know, Python is dynamically-typed. This means that you don’t need to declare the type of
variable while writing the code. It uses duck-typing. But wait, what’s that? Well, it just means that if
it looks like a duck, it must be a duck. While this is easy on the programmers during coding, it
can raise run-time errors.
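A small illustrative example of duck typing and the kind of run-time error it can cause:

def total_distance(readings):
    # Works for anything that sums like numbers ("if it quacks like a duck...")
    return sum(readings)

print(total_distance([1.2, 3.4, 5.0]))  # fine: 9.6
print(total_distance(["1.2", "3.4"]))   # TypeError, but only at run time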
4. Simplicity Can Be a Problem
No, we're not kidding. Python's simplicity can indeed be a problem. Take my example: I don't do Java, I'm more of a Python person. To me, its syntax is so simple that the verbosity of Java code seems unnecessary.
This was all about the Advantages and Disadvantages of Python Programming Language.
History of Python : -
What do the alphabet and the programming language Python have in common? Right, both start with
ABC. If we are talking about ABC in the Python context, it's clear that the programming language
ABC is meant. ABC is a general-purpose programming language and programming environment, which was developed at the CWI (Centrum Wiskunde & Informatica) in Amsterdam, the Netherlands. The greatest achievement of ABC was to influence the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum was working at that time on a project at the CWI called Amoeba, a distributed operating system. In an interview with Bill Venners, Guido van Rossum said:
"In the early 1980s, I worked as an implementer on a team building a language called ABC at
Centrum voor Wiskunde en Informatica (CWI). I don't know how well people know ABC's influence
on Python. I try to mention ABC's influence because I'm indebted to everything I learned during that
project and to the people who worked on it." Later on in the same interview, Guido van Rossum
continued: "I remembered all my experience and some of my frustration with ABC. I decided to try to
design a simple scripting language that possessed some of ABC's better properties, but without its
problems. So I started typing. I created a simple virtual machine, a simple parser, and a simple
runtime. I made my own version of the various ABC parts that I liked. I created a basic syntax, used
indentation for statement grouping instead of curly braces or begin-end blocks, and developed a small
number of powerful data types: a hash table (or dictionary, as we call it), a list, strings, and numbers."
What is Machine Learning : -
Before we take a look at the details of various machine learning methods, let's start by looking at
what machine learning is, and what it isn't. Machine learning is often categorized as a subfield of
artificial intelligence, but I find that categorization can often be misleading at first brush. The study of
machine learning certainly arose from research in this context, but in the data science application of
machine learning methods, it's more helpful to think of machine learning as a means of building
models of data.
Fundamentally, machine learning involves building mathematical models to help understand data.
"Learning" enters the fray when we give these models tunable parameters that can be adapted to
observed data; in this way the program can be considered to be "learning" from the data. Once these
models have been fit to previously seen data, they can be used to predict and understand aspects of
newly observed data. I'll leave to the reader the more philosophical digression regarding the extent to
which this type of mathematical, model-based "learning" is similar to the "learning" exhibited by the
human brain. Understanding the problem setting in machine learning is essential to using these tools
effectively, and so we will start with some broad categorizations of the types of approaches we'll
discuss here.
At the most fundamental level, machine learning can be categorized into two main types: supervised
learning and unsupervised learning.
Supervised learning involves somehow modeling the relationship between measured features of data
and some label associated with the data; once this model is determined, it can be used to apply labels
to new, unknown data. This is further subdivided into classification tasks and regression tasks: in
classification, the labels are discrete categories, while in regression, the labels are continuous
quantities. We will see examples of both types of supervised learning in the following section.
Unsupervised learning involves modeling the features of a dataset without reference to any label, and
is often described as "letting the dataset speak for itself." These models include tasks such
as clustering and dimensionality reduction. Clustering algorithms identify distinct groups of data,
while dimensionality reduction algorithms search for more succinct representations of the data. We
will see examples of both types of unsupervised learning in the following section.
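As a minimal illustration of these two categories (using scikit-learn, with a toy dataset made up for the example):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])

# Supervised: features X with continuous labels y, i.e. a regression task
y = np.array([2.1, 4.0, 6.2, 19.8, 22.1, 24.0])
reg = LinearRegression().fit(X, y)
print(reg.predict([[4.0]]))      # predict a label for new, unseen data

# Unsupervised: the same features without labels, i.e. clustering
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                # two distinct groups discovered in the data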
Human beings, at this moment, are the most intelligent and advanced species on earth because they can think, evaluate, and solve complex problems. On the other side, AI is still in its initial stage and hasn't surpassed human intelligence in many aspects. The question, then, is: what is the need to make machines learn? The most suitable reason for doing this is, "to make decisions, based on data, with efficiency and scale".
Lately, organizations have been investing heavily in newer technologies like Artificial Intelligence, Machine Learning and Deep Learning to get the key information from data to perform several real-world tasks and solve problems. We can call these data-driven decisions taken by machines, particularly to automate the process. These data-driven decisions can be used, instead of programming logic, in problems that cannot be programmed inherently. The fact is that we can't do without human intelligence, but the other aspect is that we all need to solve real-world problems with efficiency at a huge scale. That is why the need for machine learning arises.
While Machine Learning is rapidly evolving, making significant strides with cybersecurity and autonomous cars, this segment of AI as a whole still has a long way to go. The reason is that ML has not been able to overcome a number of challenges. The challenges that ML is facing currently are:
Quality of data − Having good-quality data for ML algorithms is one of the biggest challenges. Use of low-quality data leads to problems related to data preprocessing and feature extraction.
Time-consuming task − Another challenge faced by ML models is the consumption of time, especially for data acquisition, feature extraction and retrieval.
Lack of specialists − As ML technology is still in its infancy, finding expert resources is difficult.
No clear objective for formulating business problems − Having no clear objective and well-defined goal for business problems is another key challenge for ML, because this technology is not that mature yet.
Curse of dimensionality − Another challenge ML models face is that data points have too many features. This can be a real hindrance.
Machine Learning is the most rapidly growing technology, and according to researchers we are in the golden year of AI and ML. It is used to solve many real-world complex problems which cannot be solved with a traditional approach. The following are some real-world applications of ML:
Emotion analysis
Sentiment analysis
Speech synthesis
Speech recognition
Customer segmentation
Object recognition
Fraud detection
Fraud prevention
Recommendation of products to customers in online shopping
Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of study that
gives computers the capability to learn without being explicitly programmed”.
And that was the beginning of Machine Learning! In modern times, Machine Learning is one of the
most popular (if not the most!) career choices. According to Indeed, Machine Learning Engineer Is
The Best Job of 2019 with a 344% growth and an average base salary of $146,085 per year.
But there is still a lot of doubt about what exactly Machine Learning is and how to start learning it. So this article deals with the basics of Machine Learning and also the path you can follow to eventually become a full-fledged Machine Learning Engineer. Now let's get started!!!
This is a rough roadmap you can follow on your way to becoming an insanely talented Machine
Learning Engineer. Of course, you can always modify the steps according to your needs to reach your
desired end-goal!
In case you are a genius, you could start ML directly but normally, there are some prerequisites that
you need to know which include Linear Algebra, Multivariate Calculus, Statistics, and Python. And if
you don’t know these, never fear! You don’t need a Ph.D. degree in these topics to get started but you
do need a basic understanding.
(a) Learn Linear Algebra and Multivariate Calculus
Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However, the extent to which you need them depends on your role as a data scientist. If you are more focused on application-heavy machine learning, then you will not be that heavily focused on maths, as there are many common libraries available. But if you want to focus on R&D in Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very important, as you will have to implement many ML algorithms from scratch.
(b) Learn Statistics
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML expert will be
spent collecting and cleaning data. And statistics is a field that handles the collection, analysis, and
presentation of data. So it is no surprise that you need to learn it!!!
Some of the key concepts in statistics that are important are Statistical Significance, Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian thinking is also a very important part of ML, which deals with various concepts like Conditional Probability, Priors and Posteriors, Maximum Likelihood, etc.
(c) Learn Python
Some people prefer to skip Linear Algebra, Multivariate Calculus and Statistics and learn them as they go along with trial and error. But the one thing that you absolutely cannot skip is Python! While there are other languages you can use for Machine Learning, like R, Scala, etc., Python is currently the most popular language for ML. In fact, there are many Python libraries that are specifically useful for Artificial Intelligence and Machine Learning, such as Keras, TensorFlow, Scikit-learn, etc.
So if you want to learn ML, it's best if you learn Python! You can do that using various online resources and courses, such as the free Fork Python course on GeeksforGeeks.
Now that you are done with the prerequisites, you can move on to actually learning ML (Which is the
fun part!!!) It’s best to start with the basics and then move on to the more complicated stuff. Some of
the basic concepts in ML are:
Model – A model is a specific representation learned from data by applying some machine learning
algorithm. A model is also called a hypothesis.
Feature – A feature is an individual measurable property of the data. A set of numeric features can be
conveniently described by a feature vector. Feature vectors are fed as input to the model. For example,
in order to predict a fruit, there may be features like color, smell, taste, etc.
Target (Label) – A target variable or label is the value to be predicted by our model. For the fruit
example discussed in the feature section, the label with each set of input would be the name of the fruit
like apple, orange, banana, etc.
Training – The idea is to give the model a set of inputs (features) and its expected outputs (labels), so that after training, we have a model (hypothesis) that will then map new data to one of the categories it was trained on.
Prediction – Once our model is ready, it can be fed a set of inputs to which it will provide a predicted output (label).
Supervised Learning – This involves learning from a training dataset with labeled data using
classification and regression models. This learning process continues until the required level of
performance is achieved.
Unsupervised Learning – This involves using unlabelled data and then finding the underlying
structure in the data in order to learn more and more about the data itself using factor and cluster
analysis models.
Semi-supervised Learning – This involves using unlabelled data like Unsupervised Learning with a
small amount of labeled data. Using labeled data vastly increases the learning accuracy and is also more
cost-effective than Supervised Learning.
Reinforcement Learning – This involves learning optimal actions through trial and error. So the next
action is decided by learning behaviors that are based on the current state and that will maximize the
reward in the future.
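These concepts can be tied together in a few lines of illustrative scikit-learn code, using the fruit example above: feature vectors as inputs, fruit names as labels, training, and then prediction on new data. The feature values below are made up for the example.

from sklearn.tree import DecisionTreeClassifier

# Feature vectors: [color_score, smell_score, taste_score] (made-up values)
features = [[0.9, 0.2, 0.7],   # apple
            [0.3, 0.8, 0.6],   # orange
            [0.1, 0.4, 0.9]]   # banana
labels = ["apple", "orange", "banana"]   # target labels

model = DecisionTreeClassifier()          # the model (hypothesis)
model.fit(features, labels)               # training on (features, labels)

print(model.predict([[0.85, 0.25, 0.65]]))  # prediction for a new fruit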
Advantages of Machine learning :-
1. Easily Identifies Trends and Patterns
Machine Learning can review large volumes of data and discover specific trends and patterns that would not be apparent to humans. For instance, for an e-commerce website like Amazon, it serves to understand the browsing behaviors and purchase histories of its users to help cater to the right products, deals, and reminders relevant to them. It uses the results to reveal relevant advertisements to them.
2. No human intervention needed (automation)
With ML, you don’t need to babysit your project every step of the way. Since it means giving machines
the ability to learn, it lets them make predictions and also improve the algorithms on their own. A
common example of this is anti-virus softwares; they learn to filter new threats as they are recognized.
13
ML is also good at recognizing spam.
3. Continuous Improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets them
make better decisions. Say you need to make a weather forecast model. As the amount of data you have
keeps growing, your algorithms learn to make more accurate predictions faster.
4. Handling multi-dimensional and multi-variety data
Machine Learning algorithms are good at handling data that are multi-dimensional and multi-variety,
and they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it does apply, it
holds the capability to help deliver a much more personal experience to customers while also targeting
the right customers.
Disadvantages of Machine Learning :-
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be inclusive/unbiased, and of
good quality. There can also be times where they must wait for new data to be generated.
2. Time and Resources
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose with a
considerable amount of accuracy and relevancy. It also needs massive resources to function. This can
mean additional requirements of computer power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret results generated by the algorithms. You
must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an algorithm with data sets small enough not to be inclusive. You end up with biased predictions coming from a biased training set. This leads to irrelevant advertisements being displayed to customers. In the case of ML, such blunders can set off a chain of errors that can go undetected for long periods of time. And when they do get noticed, it takes quite some time to recognize the source of the issue, and even longer to correct it.
Purpose :-
The purpose of this work is to develop a machine learning model that can be conveniently built for each heavy vehicle in a fleet and that predicts average fuel consumption from predictors derived from vehicle speed and road grade.
Python
Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to compile
your program before executing it. This is similar to PERL and PHP.
Python is Interactive − you can actually sit at a Python prompt and interact with the interpreter directly
to write your programs.
Python also acknowledges that speed of development is important. Readable and terse code is part of this, and so is access to powerful constructs that avoid tedious repetition of code. Maintainability also ties into this: the amount of code may be an all but useless metric, but it does say something about how much code you have to scan, read, and/or understand to troubleshoot problems or tweak behaviors. This speed of development, the ease with which a programmer of other languages can pick up basic Python skills, and the huge standard library are key to another area where Python excels: all its tools are quick to implement and save a lot of time, and several of them have later been patched and updated by people with no Python background, without breaking.
Tensorflow
TensorFlow was developed by the Google Brain team for internal Google use. It was released under
the Apache 2.0 open-source license on November 9, 2015.
Numpy
It is the fundamental package for scientific computing with Python. It contains various features including these important ones:
• A powerful N-dimensional array object
• Sophisticated (broadcasting) functions
• Tools for integrating C/C++ and Fortran code
• Useful linear algebra, Fourier transform, and random number capabilities
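As a small illustration of the array object and broadcasting (a toy example assumed for this report):

import numpy as np

speeds = np.array([[10.0, 20.0, 30.0],
                   [15.0, 25.0, 35.0]])   # a 2-D (N-dimensional) array
kmh = speeds * 3.6                        # broadcasting: one scalar, all cells
print(kmh.mean(axis=1))                   # row means via vectorized math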
Pandas
Pandas is an open-source Python Library providing high-performance data manipulation and analysis
tools using its powerful data structures. Before Pandas, Python was majorly used for data munging and preparation; it had very little contribution towards data analysis. Pandas solved this problem. Using Pandas, we can accomplish five typical steps in the processing and analysis of data, regardless of the origin of the data: load, prepare, manipulate, model, and analyze. Python with Pandas is used in a wide range of fields, including academic and commercial domains such as finance, economics, statistics and analytics.
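A minimal sketch of those steps with Pandas, tied to this project; the file name and CSV layout are assumptions for illustration (seven predictor columns followed by the target, as in Chapter 4).

import pandas as pd

# Load: read the dataset (file name assumed for illustration)
df = pd.read_csv("Fuel_Dataset.txt")
# Prepare: drop incomplete rows and take absolute values, as in Chapter 4
df = df.dropna().abs()
# Manipulate: select the seven predictor columns used by the model
predictors = df.iloc[:, 0:7]
# Analyze: quick summary statistics of the predictors
print(predictors.describe())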
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication quality figures in a variety of
hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python
scripts, the Python and IPython shells, the Jupyter Notebook, web application servers, and four
graphical user interface toolkits. Matplotlib tries to make easy things easy and hard things possible.
You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just
a few lines of code. For examples, see the sample plots and thumbnail gallery.
Scikit – learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent
interface in Python. It is licensed under a permissive simplified BSD license and is distributed under
many Linux distributions, encouraging academic and commercial use.
Python
Python features a dynamic type system and automatic memory management. It supports multiple
programming paradigms, including object-oriented, imperative, functional and procedural, and has a
large and comprehensive standard library.
Python, a versatile programming language, doesn't come pre-installed on your computer. Python was first released in the year 1991 and is still today a very popular high-level programming language. Its design philosophy emphasizes code readability, with its notable use of significant whitespace.
The object-oriented approach and language constructs provided by Python enable programmers to write both clear and logical code for projects. This software does not come pre-packaged with Windows.
There have been several updates to Python over the years. The question is: how do you install Python? It might be confusing for a beginner who is willing to start learning Python, but this tutorial will solve your query. The latest version of Python used here is 3.7.4; in other words, it is Python 3.
Note: Python version 3.7.4 cannot be used on Windows XP or earlier devices.
Before you start with the installation process of Python, you first need to know your system requirements. Based on your system type, i.e., operating system and processor, you must download the matching Python version. My system type is a Windows 64-bit operating system, so the steps below are to install Python version 3.7.4 on a Windows 7 device, i.e., to install Python 3. The steps on how to install Python on Windows 10, 8 and 7 are divided into 4 parts to help understand better.
Step 1: Go to the official site to download and install Python using Google Chrome or any other web browser, or click on the following link: https://www.python.org
Now, check for the latest and the correct version for your operating system.
Step 3: You can either select the yellow Download Python 3.7.4 button, or scroll further down and click on the download corresponding to your version. Here, we are downloading the most recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see a different version of python along with the operating system.
To download Windows 32-bit python, you can select any one from the three options: Windows
x86 embeddable zip file, Windows x86 executable installer or Windows x86 web-based installer.
To download Windows 64-bit python, you can select any one from the three options: Windows
x86-64 embeddable zip file, Windows x86-64 executable installer or Windows x86-64 web-based
installer.
Here we will install the Windows x86-64 web-based installer. With this, the first part, regarding which version of Python to download, is completed. Now we move ahead with the second part: the installation itself.
Note: To know the changes or updates that are made in the version, you can click on the Release Note option.
Installation of Python
Step 1: Go to Downloads and open the downloaded Python version to carry out the installation process.
Step 2: Before you click on Install Now, make sure to put a tick on Add Python 3.7 to PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.
With these above three steps on Python installation, you have successfully and correctly installed Python. Now it is time to verify the installation.
Note: The installation process might take a couple of minutes.
Check how the Python IDLE works
Step 1: Click on Start
Step 2: In the Windows Run command, type “python idle”
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program
Step 4: To go ahead with working in IDLE, you must first save the file. Click on File > Click on Save.
Step 5: Name the file, and the save-as type should be Python files. Click on SAVE. Here I have named the file Hey World.
Step 6: Now, for example, enter print("Hey World") and press Enter.
You will see that the command given is executed. With this, we end our tutorial on how to install Python. You have learned how to download Python for Windows onto your respective operating system.
Note: Unlike Java, Python doesn't need semicolons at the end of statements.
CHAPTER 3
SYSTEM DESIGN
3.2 Module description
Tensorflow
TensorFlow was developed by the Google Brain team for internal Google use. It was
released under the Apache 2.0 open-source license on November 9, 2015.
Numpy
It is the fundamental package for scientific computing with Python. It contains various features including these important ones:
• A powerful N-dimensional array object
• Sophisticated (broadcasting) functions
• Tools for integrating C/C++ and Fortran code
• Useful linear algebra, Fourier transform, and random number capabilities
Pandas
Pandas is an open-source Python library providing high-performance data manipulation and analysis tools using its powerful data structures.
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication quality figures in a
variety of hardcopy formats and interactive environments across platforms. Matplotlib can
be used in Python scripts, the Python and IPython shells, the Jupyter Notebook, web
application servers, and four graphical user interface toolkits. Matplotlib tries to make easy
things easy and hard things possible. You can generate plots, histograms, power spectra,
bar charts, error charts, scatter plots, etc., with just a few lines of code. For examples, see
the sample plots and thumbnail gallery.
Scikit – learn
Scikit-learn provides a range of supervised and unsupervised learning algorithms via a consistent interface in Python. One of these is the Support Vector Machine (SVM). In an SVM, we perform classification by finding the hyper-plane that differentiates the two classes very well.
Support Vectors are simply the coordinates of individual observations, and the SVM classifier is the frontier which best segregates the two classes (hyper-plane/line).
Above, we got accustomed to the process of segregating the two classes with a hyper-plane. Now the
burning question is “How can we identify the right hyper-plane?”. Don’t worry, it’s not as hard as
you think!
Let’s understand:
Identify the right hyper-plane (Scenario-1): Here, we have three hyper-planes (A, B and C). Now,
identify the right hyper-plane to classify star and circle.
You need to remember a thumb rule to identify the right hyper-plane: “Select the hyper-plane which
segregates the two classes better”. In this scenario, hyper-plane “B” has excellently performed this
job.
Identify the right hyper-plane (Scenario-2): Here, we have three hyper-planes (A, B and C) and all
are segregating the classes well. Now, How can we identify the right hyper-plane?
Here, maximizing the distances between the nearest data point (of either class) and the hyper-plane will help us decide the right hyper-plane. This distance is called the margin.
Above, you can see that the margin for hyper-plane C is high as compared to both A and B. Hence, we name C as the right hyper-plane. Another compelling reason for selecting the hyper-plane with the higher margin is robustness: if we select a hyper-plane having a low margin, then there is a high chance of misclassification.
Identify the right hyper-plane (Scenario-3): Hint: use the rules discussed in the previous section to identify the right hyper-plane.
Some of you may have selected hyper-plane B, as it has a higher margin compared to A. But here is the catch: SVM selects the hyper-plane which classifies the classes accurately prior to maximizing the margin. Here, hyper-plane B has a classification error and A has classified all points correctly. Therefore, the right hyper-plane is A.
Can we classify two classes (Scenario-4)?: Below, I am unable to segregate the two classes using a straight line, as one of the stars lies in the territory of the other (circle) class as an outlier.
As I have already mentioned, one star at the other end is like an outlier for the star class. The SVM algorithm has a feature to ignore outliers and find the hyper-plane that has the maximum margin. Hence, we can say that SVM classification is robust to outliers.
Find the hyper-plane to segregate two classes (Scenario-5): In the scenario below, we can't have a linear hyper-plane between the two classes, so how does SVM classify these two classes? Till now, we have only looked at linear hyper-planes.
SVM can solve this problem easily! It solves it by introducing an additional feature. Here, we will add a new feature z = x^2 + y^2. Now, let's plot the data points on the x and z axes.
All values for z will always be positive, because z is the squared sum of both x and y. In the original plot, red circles appear close to the origin of the x and y axes, leading to lower values of z, while the stars lie relatively far from the origin, resulting in higher values of z.
In the SVM classifier, it is easy to have a linear hyper-plane between these two classes. But another burning question arises: do we need to add this feature manually to get a hyper-plane? No. The SVM algorithm has a technique called the kernel trick. The SVM kernel is a function that takes a low-dimensional input space and transforms it into a higher-dimensional space, i.e., it converts a non-separable problem into a separable problem. It is mostly useful in non-linear separation problems. Simply put, it does some extremely complex data transformations, then finds out the process to separate the data based on the labels or outputs you have defined.
When we look at the hyper-plane in the original input space, it looks like a circle. Now, let's look at how to apply the SVM classifier algorithm in practice.
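A small illustrative scikit-learn example of this scenario: a ring of one class around a cluster of the other is not linearly separable in (x, y), but an RBF-kernel SVM separates it without us adding z = x^2 + y^2 by hand. The synthetic data is assumed for the example.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Inner cluster (circles) and a surrounding ring (stars): not linearly separable
inner = rng.normal(scale=0.5, size=(50, 2))
angles = rng.uniform(0, 2 * np.pi, 50)
ring = np.c_[3 * np.cos(angles), 3 * np.sin(angles)] + rng.normal(scale=0.2, size=(50, 2))

X = np.vstack([inner, ring])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf")   # the kernel trick: no manual z = x^2 + y^2 feature
clf.fit(X, y)
print(clf.score(X, y))    # separates the two classes almost perfectly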
• Script:
• Database :
• Hard Disk - 20 GB
UML is an acronym that stands for Unified Modeling Language. Simply put, UML is a modern
approach to modeling and documenting software. In fact, it’s one of the most popular business process
modeling techniques.
GOALS:
1. Provide users a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the core concepts.
6. Support higher-level development concepts such as collaborations, frameworks, patterns and components.
7. Integrate best practices.
i. USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of behavioral
diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical
overview of the functionality provided by a system in terms of actors, their goals (represented as use
cases), and any dependencies between those use cases. The main purpose of a use case diagram is to
show what system functions are performed for which actor. Roles of the actors in the system can be
depicted.
ii. SEQUENCE DIAGRAM:
A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that
shows how processes operate with one another and in what order. It is a construct of a Message
Sequence Chart. Sequence diagrams are sometimes called event diagrams, event scenarios, and
timing diagrams.
iii. CLASS DIAGRAM:
In software engineering, a class diagram in the Unified Modeling Language (UML) is a type of static
structure diagram that describes the structure of a system by showing the system's classes, their
attributes, operations (or methods), and the relationships among the classes. It explains which class
contains information.
iv. DATA FLOW DIAGRAM:
1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model the
system components. These components are the system process, the data used by the process, an
external entity that interacts with the system and the information flows in the system.
3. DFD shows how the information moves through the system and how it is modified by a series of
transformations. It is a graphical technique that depicts information flow and the transformations that
are applied as data moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction, and may be partitioned into levels that represent increasing information flow and functional detail.
Component Diagram :-
Component diagram is a special kind of diagram in UML. The purpose is also different from all other
diagrams discussed so far. It does not describe the functionality of the system but it describes the
components used to make those functionalities.
Thus from that point of view, component diagrams are used to visualize the physical components in a
system. These components are libraries, packages, files, etc.
Component diagrams can also be described as a static implementation view of a system. Static
implementation represents the organization of the components at a particular moment.
A single component diagram cannot represent the entire system but a collection of diagrams is used to
represent the whole.
UML component diagrams are used to model the physical aspects of object-oriented systems; they are used for visualizing, specifying, and documenting component-based systems and also for constructing executable systems through forward and reverse engineering. Component diagrams are essentially class diagrams that focus on a system's components and are often used to model the static implementation view of a system.
v. ACTIVITY DIAGRAM:
Activity diagrams are graphical representations of workflows of stepwise activities and actions with
support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams
can be used to describe the business and operational step-by-step workflows of components in a
system. An activity diagram shows the overall flow of control.
Flow Chart Diagram :-
A flowchart is simply a graphical representation of steps. It shows steps in sequential order and is
widely used in presenting the flow of algorithms, workflow or processes. Typically, a flowchart
shows the steps as boxes of various kinds, and their order by connecting them with arrows.
CHAPTER 4
IMPLEMENTATION
# Imports required by the code below (omitted in the report); assumed to match
# the libraries described in Chapter 3: tkinter for the GUI, pandas for data
# handling, scikit-learn for preprocessing/splitting, and Keras for the ANN.
import tkinter
from tkinter import filedialog
from tkinter import Text, Button, END
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

main = tkinter.Tk()
main.title("Average Fuel Consumption")  # designing main screen
main.geometry("1300x1200")

# Globals shared between the GUI callbacks below
global filename
global train_x, test_x, train_y, test_y
global balance_data
global model
global ann_acc
global testdata
global predictdata
def importdata():
    # Load the uploaded dataset and take absolute values of all columns
    global balance_data
    balance_data = pd.read_csv(filename)
    balance_data = balance_data.abs()
    return balance_data
def splitdataset(balance_data):
    # The first 7 columns are the predictors; column 8 is the target class
    global train_x, test_x, train_y, test_y
    X = balance_data.values[:, 0:7]
    y_ = balance_data.values[:, 7]
    print(y_)
    y_ = y_.reshape(-1, 1)
    encoder = OneHotEncoder(sparse=False)  # one-hot encode the target classes
    Y = encoder.fit_transform(y_)
    print(Y)
    # 80/20 train/test split
    train_x, test_x, train_y, test_y = train_test_split(X, Y, test_size=0.2)
    text.insert(END, "Dataset Length : " + str(len(X)) + "\n")
    return train_x, test_x, train_y, test_y
def generateModel():
    # Read the uploaded dataset and produce the train/test split
    global train_x, test_x, train_y, test_y
    data = importdata()
    train_x, test_x, train_y, test_y = splitdataset(data)
    text.insert(END, "Splitted Training Length : " + str(len(train_x)) + "\n")
    text.insert(END, "Splitted Test Length : " + str(len(test_x)) + "\n")
def ann():
    # Feed-forward network: two hidden layers of 200 ReLU units and a
    # 19-way softmax output (one unit per fuel-consumption class)
    global model
    global ann_acc
    model = Sequential()
    model.add(Dense(200, input_shape=(7,), activation='relu', name='fc1'))
    model.add(Dense(200, activation='relu', name='fc2'))
    model.add(Dense(19, activation='softmax', name='output'))
    optimizer = Adam(lr=0.001)
    model.compile(optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
    print('ANN Model Summary: ')
    print(model.summary())
    model.fit(train_x, train_y, verbose=2, batch_size=5, epochs=200)
    results = model.evaluate(test_x, test_y)
    text.insert(END, "ANN Accuracy for dataset " + filename + "\n")
    text.insert(END, "Accuracy Score : " + str(results[1] * 100) + "\n\n")
    ann_acc = results[1] * 100
def predictFuel():
    # Ask for a test file, run the trained model, and display the predictions
    global testdata
    global predictdata
    text.delete('1.0', END)
    filename = filedialog.askopenfilename(initialdir="dataset")
    testdata = pd.read_csv(filename)
    testdata = testdata.values[:, 0:7]
    predictdata = model.predict_classes(testdata)  # older Keras Sequential API
    print(predictdata)
    for i in range(len(testdata)):
        text.insert(END, str(testdata[i]) + " Average Fuel Consumption : " + str(predictdata[i]) + "\n")
def graph():
    # Plot the predicted fuel consumption for each test record
    x = []
    y = []
    for i in range(len(testdata)):
        x.append(i)
        y.append(predictdata[i])
    plt.plot(x, y)
    plt.xlabel('Vehicle ID')
    plt.ylabel('Fuel Consumption/10KM')
    plt.title('Average Fuel Consumption Graph')
    plt.show()
# The report omits the widget and button wiring between the function
# definitions and the main loop; a minimal, assumed setup is sketched here so
# that the callbacks above are reachable (button labels taken from Chapter 6).
def uploadDataset():
    # Ask for the training dataset file; sets the global used by importdata()
    global filename
    filename = filedialog.askopenfilename(initialdir="dataset")

Button(main, text="Upload Heavy Vehicles Fuel Dataset", command=uploadDataset).place(x=10, y=50)
Button(main, text="Read Dataset & Generate Model", command=generateModel).place(x=10, y=100)
Button(main, text="Run ANN Algorithm", command=ann).place(x=10, y=150)
Button(main, text="Predict Average Fuel Consumption", command=predictFuel).place(x=10, y=200)
Button(main, text="Fuel Consumption Graph", command=graph).place(x=10, y=250)
text = Text(main, height=20, width=150)
text.place(x=10, y=300)

main.config(bg='LightSkyBlue')
main.mainloop()
CHAPTER – 5
TEST RESULTS
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test; each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
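As an illustration in this project's language, the sketch below tests a small helper with Python's built-in unittest module. The helper itself is hypothetical, standing in for one unit of the application, and shows both a valid-input check and a decision-branch check.

import unittest

def select_predictors(row):
    # Hypothetical unit under test: keep the first seven predictor values
    if len(row) < 8:
        raise ValueError("row must contain 7 predictors and a target")
    return row[:7]

class TestSelectPredictors(unittest.TestCase):
    def test_valid_input_produces_valid_output(self):
        self.assertEqual(select_predictors([1, 2, 3, 4, 5, 6, 7, 8]),
                         [1, 2, 3, 4, 5, 6, 7])

    def test_invalid_input_is_rejected(self):  # a decision branch is validated
        with self.assertRaises(ValueError):
            select_predictors([1, 2, 3])

if __name__ == "__main__":
    unittest.main()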
Integration testing
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are available as
specified by the business and technical requirements, system documentation, and user manuals.
System testing ensures that the entire integrated software system meets requirements. It tests a
configuration to ensure known and predictable results. An example of system testing is the
configuration oriented system integration test. System testing is based on process descriptions and
flows, emphasizing pre-driven process links and integration points.
White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.
Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, as most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.
5.1 Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test phase of the
software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two
distinct phases.
Field testing will be performed manually and functional tests will be written in detail.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
Features to be tested
Verify that the entries are of the correct format
Software integration testing is the incremental integration testing of two or more integrated
software components on a single platform to produce failures caused by interface defects.
The task of the integration test is to check that components or software applications, e.g. components
in a software system or – one step up – software applications at the company level – interact without
error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
User Acceptance Testing is a critical phase of any project and requires significant
participation by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
CHAPTER 6
RESULTS
To run this project, double-click on the 'run.bat' file to get the below screen.
In the above screen, click on the 'Upload Heavy Vehicles Fuel Dataset' button to upload the training dataset.
In the above screen, we upload 'Fuel_Dataset.txt', which is used to train the model. After uploading the dataset, we get the below screen.
Now, in the above screen, click on the 'Read Dataset & Generate Model' button to read the uploaded dataset and to generate the train and test data.
In the above screen, we can see the total number of records in the dataset, the number of records used for training, and the number of records used for testing. Now click on the 'Run ANN Algorithm' button to feed the train and test data to the ANN and build the ANN model.
In the above black console, we can see all the ANN processing details. After building the model, we get the below screen.
In the above screen, we got an ANN prediction accuracy of up to 86%. Now click on the 'Predict Average Fuel Consumption' button to upload test data and predict consumption for the test data.
After uploading the test data, we get the fuel consumption prediction results in the below screen.
In the above screen, we got the average fuel consumption for each test record per 100 kilometers. Now click on 'Fuel Consumption Graph' to view the below graph.
In the above graph, the x-axis represents the test record number (vehicle ID) and the y-axis represents the fuel consumption for that record.
CHAPTER 7
CONCLUSION
This paper presented a machine learning model that can be conveniently developed for each heavy vehicle in a fleet.
The model relies on seven predictors: number of stops, stop time, average moving speed, characteristic acceleration, aerodynamic speed squared, change in kinetic energy and change in potential energy.
The last two predictors are introduced in this paper to help capture the average dynamic behavior of
the vehicle. All of the predictors of the model are derived from vehicle speed and road grade.
These variables are readily available from telematics devices that are becoming an integral part of
connected vehicles. Moreover, the predictors can be easily computed on-board from these two
variables.
Future Work
In this paper, the concept of predicting average fuel consumption in heavy vehicles using a machine learning algorithm such as an ANN (Artificial Neural Network) was described, with seven predictor features extracted from the heavy vehicle dataset. Future work will consider other training approaches and evaluate their ability to improve the model's predictive accuracy.
CHAPTER-8
REFERENCES
[1] B. Lee, L. Quinones, and J. Sanchez, “Development of greenhouse gas emissions model for
2014-2017 heavy- and medium-duty vehicle compliance,” SAE Technical Paper, Tech. Rep.,
2011.
[2] G. Fontaras, R. Luz, K. Anagnostopoulus, D. Savvidis, S. Hausberger, and M. Rexeis, “Monitoring CO2 emissions from HDV in Europe - an experimental proof of concept of the proposed methodological approach,” in 20th International Transport and Air Pollution Conference, 2014.
[3] S. Wickramanayake and H. D. Bandara, “Fuel consumption prediction of fleet vehicles using
machine learning: A comparative study,” in Moratuwa Engineering Research Conference
(MERCon), 2016. IEEE, 2016, pp. 90–95.
[4] L. Wang, A. Duran, J. Gonder, and K. Kelly, “Modeling heavy/medium-duty fuel consumption based on drive cycle properties,” SAE Technical Paper, Tech. Rep., 2015.
[5] Fuel Economy and Greenhouse gas exhaust emissions of motor vehicles Subpart B - Fuel
Economy and Carbon-Related Exhaust Emission Test Procedures, Code of Federal Regulations
Std. 600.111-08, Apr 2014.
[6] SAE International Surface Vehicle Recommended Practice, Fuel Consumption Test Procedure
- Type II, Society of Automotive Engineers Std., 2012.
[7] F. Perrotta, T. Parry, and L. C. Neves, “Application of machine learning for fuel consumption
modelling of trucks,” in Big Data (Big Data), 2017 IEEE International Conference on. IEEE,
2017, pp. 3810–3815.
[8] S. F. Haggis, T. A. Hansen, K. D. Hicks, R. G. Richards, and R. Marx, “In-use evaluation of fuel economy and emissions from coal haul trucks using modified SAE J1321 procedures and PEMS,” SAE International Journal of Commercial Vehicles, vol. 1, no. 2008-01-1302, pp. 210–221, 2008.
[9] A. Ivanco, R. Johri, and Z. Filipi, “Assessing the regeneration potential for a refuse truck over
a real-world duty cycle,” SAE International Journal of Commercial Vehicles, vol. 5, no. 2012-01-
1030, pp. 364–370, 2012.
[10] A. A. Zaidi, B. Kulcsr, and H. Wymeersch, “Back-pressure traffic signal control with fixed
and adaptive routing for urban vehicular networks,” IEEE Transactions on Intelligent
Transportation Systems, vol. 17, no. 8, pp. 2134–2143, Aug 2016.
[11] J. Zhao, W. Li, J. Wang, and X. Ban, “Dynamic traffic signal timing optimization strategy incorporating various vehicle fuel consumption characteristics,” IEEE Transactions on Vehicular