
DATA AND ANALYTICS

A Dissertation submitted in partial fulfillment of the requirements for


the award of degree of

MASTER OF COMPUTER APPLICATIONS


By

SACHIN KUMAR
1NZ19MCA21

Under the Guidance of


Prof. Govindaraj M

DEPARTMENT OF MASTER OF COMPUTER APPLICATIONS

NEW HORIZON COLLEGE OF ENGINEERING
Ring Road, Near Marathahalli,


Bengaluru – 560103

2021-22
DATA AND ANALYTICS
A Dissertation submitted in partial fulfillment of the requirements for
the award of degree of

MASTER OF COMPUTER APPLICATIONS


By
SACHIN KUMAR
1NZ19MCA21

Under the Guidance of

Internal Guide:
Prof. Govindaraj M
Sr. Asst. Professor
Dept. of MCA, NHCE

External Guide:
Mr. M Murthy
Technical Lead
Future X Ready

DEPARTMENT OF MASTER OF COMPUTER APPLICATIONS

NEW HORIZON COLLEGE OF ENGINEERING
Ring Road, Near Marathahalli,


Bengaluru – 560 103

2021-22
NEW HORIZON COLLEGE OF ENGINEERING
Ring Road, Near Marathahalli,
Bengaluru – 560 103

DEPARTMENT OF MASTER OF COMPUTER APPLICATIONS

CERTIFICATE

This is to certify that SACHIN KUMAR, bearing USN


1NZ19MCA21 has successfully completed his/her final year VI
semester Internship Project entitled DATA AND ANALYTICS as
a partial fulfillment of the requirements for the award of MASTER
OF COMPUTER APPLICATIONS degree, during the Academic
Year 2021-22 under my supervision. This report has not been
submitted to any other Organization/University for any award of
degree.

Signature of the Internal Guide Head of the Department Principal

External Viva
Internal Examiner External Examiner

Date:
I, SACHIN KUMAR, student of VI Semester MCA, bearing USN
1NZ19MCA21 hereby declare that the Internship Project entitled DATA AND
ANALYTICS has been carried out by me under the supervision of Internal Guide
Prof. Govindaraj M, Sr. Asst. Professor and External Guide K. NAGENDRA
KUMAR, TECHNICAL LEAD AT ATS GLOBAL COMPANY and submitted in
partial fulfillment of the requirements for the award of the Degree of Master of
Computer Applications by the Department of Master of Computer Applications,
New Horizon College of Engineering, an Autonomous Institution, Affiliated to
Visvesvaraya Technological University during the academic year 2021-22. This
report has not been submitted to any other Organization/University for any award
of degree.

Name : SACHIN KUMAR

Signature :
Date : 09-06-2022
ACKNOWLEDGEMENT

I would like to thank Dr. Mohan Manghnani, Chairman of New Horizon College of
Engineering, for providing good infrastructure and hi-tech lab facilities to develop
and improve students' skills.

I sincerely express my gratitude to the college Principal Dr. Manjunatha for


supporting the students in all their technical activities and giving guidance to them. I
would like to thank Dr. V. Asha, HoD, Department of MCA, New Horizon College
of Engineering for granting permission to undertake this project. I would like to
express my gratitude to the project guide Prof. Govindaraj M, for giving all the
instructions and guidelines at every stage of the Project work.

I thank all the staff members of the Department of Master of Computer Applications,
for extending their constant support to complete the project. I express my heartfelt
thanks to my parents and friends who were a constant source of support and
inspiration throughout the project.
COMPANY PROFILE

Future X Ready:

Future X Ready aims to provide a platform for “Future-Readiness” to Industry and Academia
ecosystems for Software, IT Engineering, Products, Consulting, Internet Tech and Digital
Professional Services. It enables students to execute quality internships that help them stand out
and get recruited to top MNCs, with access to problem statements, mentoring and hand-holding to
complete internships on industry-relevant topics, and a certificate of completion from the companies.

Companies value project experience more than any other student attribute. Digital Garage
helps students get access to real-life projects from leading companies across domains and
strengthen their resumes with industry projects.

Future X provides services and solutions that help its customers turn IT savings into
business advantage. It seeks to please its consumers by transforming their operations and constantly
enhancing them. ATS recognizes the disruptive technology required to support sustainable
market development through open sourcing and similar innovations and therefore provides its
consumers with the latest in product innovation.
Our vision:

We aspire to grow and attract customers through the implementation of value-driven solutions
and the establishment of long-term partnerships centered on trust, built around open source
technologies. We look forward to hearing from you and eventually welcoming you among our
valued customers.
• Strong track record in open source technologies
• KSMBOA SME of the year in IT & ITES business happiness
• Lifestyle integrators for consultancy, growth, training and externalization
• Named among the leading 25 firms in web growth

Our Services:
➢ Portals
➢ Mobile solutions
➢ Business intelligence and Analytics
➢ Consulting services
Portals:

A wide variety of multi-portal development, HTML editing and XML publishing features,
including Content Management System (CMS), document identifiers, database, search and analytics.
Mobile Solution:

As we are all aware, the latest digital technology transition is attributed to the widespread
usage of cell phones in particular. Today, many of the structured and conventional
processes of data entry and purchasing are moving to mobile, to the extent that several
businesses have established a mobile-first strategy.
Business intelligence and Analytics: Business intelligence assists businesses in the
compilation, management and administration of results. It provides a description of company
activities: past, current and future. Internet reporting and BI are valuable for evaluating
company data quickly and for generating informative analyses and dashboards that are
beneficial for leaders in decision making.
TABLE OF CONTENTS

Chapter No.   Title                                      Page No.

              ABSTRACT                                   (i)
              LIST OF TABLES                             (ii)
              LIST OF FIGURES                            (iii)

1             INTRODUCTION                               1
              1.1 General Introduction
              1.2 Problem Statement
              1.3 Existing System
              1.4 Objective of the Work
              1.5 Proposed System with Methodology
              1.6 Feasibility Study

2             REVIEW OF LITERATURE                       8
              2.1 Review Summary

3             SYSTEM CONFIGURATION                       11
              3.1 Hardware requirements
              3.2 Software requirements

4             MODULE DESCRIPTION                         20
              4.1 Module 1
              4.2 Module 2

5             SYSTEM DESIGN
              5.1 DFD / UML Diagrams                     25
              5.2 Data Base Design

6             SYSTEM IMPLEMENTATION
              6.1 Implementation                         32
              6.1.1 Pre-Implementation Technique
              6.1.2 Post-Implementation Technique
              6.2 Screen Shots

7             SYSTEM TESTING                             45
              7.1 Test Cases
              7.2 Maintenance

8             RESULTS AND DISCUSSIONS                    49
              8.1 Conclusion
              8.2 Limitations
              8.3 Future Enhancements

9             REFERENCES                                 50
              9.1 Text References
              9.2 Web References
ABSTRACT

Software data analytics is key for helping stakeholders make decisions, and thus establishing a
measurement and data analysis program is a recognized best practice within the software
industry. However, practical implementation of measurement programs and analytics in
industry is challenging. In this chapter, we discuss real-world challenges that arise during the
implementation of a software measurement and analytics program. We also report lessons
learned for overcoming these challenges and best practices for practical, effective data
analysis in industry. The lessons learned provide guidance for researchers who wish to
collaborate with industry partners in data analytics, as well as for industry practitioners
interested in setting up and realizing the benefits of an effective measurement program. Data
analytics projects exist on a spectrum. At one end of this spectrum we have projects that are
close to traditional software engineering projects. By traditional software engineering I mean
the production of websites and web applications, desktop software applications, and data
warehouses. To develop these analytics applications, a data model is carefully specified,
coded, tested, and rolled out through development, user acceptance, and production
environments. A presentation layer or application layer is programmed to sit on top of this
data and present it to users so they can interact with it. Users may be customers on a website
who see recommendations that match their purchasing habits. Users might be online banking
customers who see analytics summarizing the performance of their investments or internal
business employees who need insights related to their business’s operations. Typical projects
are those that manage data feeds, populate data warehouses or implement analytics and
management reporting layers on top of warehouses. The development team involved in these
projects typically has a variety of roles including database developers, application layer
developers, testers as well as data analysts determining how best to extract value from the data.
These projects produce software applications in the general sense that we all encounter and
use every day on our computers and mobile devices.
LIST OF FIGURES

Sl. No.   Figure No.   Title                                                                  Page No.

1         1.5          Waterfall Model                                                        8
2         1.6          Feasibility study                                                      10
3         2.1          Representing Data in the form of Columns in the Power BI               14
4         2.2          Representing Data modeling in form of Power BI                         15
5         2.3          Representing Data in the form of Power BI                              16
6         2.4          Representing Data in the form of Power View                            17
7         2.5          Representing Data in the form of Power Map                             18
8         2.6          Representing Data in the form of Power BI Website                      19
9         2.7          Representing Data in the form of Power Q&A                             20
10        2.8          Representing Data in the form of Power BI Mobile App                   21
11        5.1          Representing the flowchart of the project                              25
12        5.2          Representing the Data flow diagram of the project                      26
13        6.1          Representing the last 12 months trend by taxi type                     27
14        6.2          Representing the revenue comparison between years by month franchise   28
15        6.3          Representing the average of revenue by monthly per taxi                29
16        6.4          Representing top 10 drivers' current vs last year                      30
17        6.5          Representing the region wise revenue in form of Power Map              31
18        6.7          Representing revenue detailed report                                   32
19        6.8          Representing revenue detailed report with company name and taxi        33

CHAPTER 1

INTRODUCTION

1.1 General Introduction

Prediction in data mining is the task of identifying a data value purely from the description of another,
related data value. It is not necessarily related to future events, but the variables being predicted are
unknown. Prediction derives the relationship between a thing you know and a thing you need
to predict for future reference. For example, prediction models in data mining are used by a
marketing manager to predict how much a particular customer will spend during a sale, so that
the upcoming sale amount can be planned accordingly. This kind of prediction in data mining is known
as numeric prediction, and regression analysis is generally used for it. In data mining, the term
“prediction” refers to calculated assumptions about certain turns of events made on the basis of the
available processed data. It is a cornerstone of predictive analytics. The prediction itself is calculated
from the available data and modeled in accordance with the existing dynamics. The nature of prediction
varies with the nature of the project. It can be a simple correlation of sentiments and conversions, from
which you can understand whether a user will engage with your piece of content in a productive manner or
not. Prediction is nothing but finding knowledge or patterns in large amounts of data. For example, in
credit card fraud detection, the history of a particular person's credit card usage has to be analyzed;
if any abnormal pattern is detected, it should be reported as a 'fraudulent action'. In the case of
regression, consider, for example, that you have to predict the future revenue of a company: historical
data has to be analyzed to predict what the future revenue will be. Predictive data mining is data mining
that is done for the purpose of using business intelligence or other data to forecast or predict
trends. This type of data mining can help business leaders make better decisions and can add
value to the efforts of the analytics team. IT professionals often talk about predictive
data mining in conjunction with predictive analytics, or say that predictive data mining
supports predictive analytics.

In other words, the data may help to project what will happen later on in the business,
allowing business leaders to plan accordingly. Predicting the identity of one thing based
purely on the description of another, related thing is not necessarily about future events, just
about unknowns: it is based on the relationship between a thing that you can know and a thing you
need to predict. A classification problem could be seen as a predictor of classes, but
predicted values are usually continuous whereas classifications are discrete.

Predictions are often (but not always) about the future, whereas classifications are about
the present, and classification is more concerned with the input than the output. Typical
prediction tasks include predicting the level of sales that will result from a price change or an
advert, predicting whether or not it will rain based on current humidity, predicting the color of a
pottery glaze based on a mixture of base pigments, predicting how far up the charts a single will go,
and predicting how much revenue a book of debt will bring. Most prediction techniques are based on
mathematical models:
 Simple statistical models such as regression
 Non-linear statistics such as power series
 Neural networks, RBFs, etc

All are based on fitting a curve through the data, that is, on finding a relationship from the
predictors to the predicted value.
Types of Predictions:

 Inductive
 Deductive
 Abductive

Inductive

Predictions can be generated inductively. Today it is sunny. Yesterday it was sunny. The
day before yesterday was sunny. And it has been sunny for the last 10 days. What can I
infer if I just look at this pattern? I can predict that tomorrow it will be sunny. Such predictions
are generated by projecting past occurrences onto the future. Indeed, there are far more
sophisticated predictions than this example; however, no matter how complicated they are,
they all share this one crucial point.

Deductive

A second type of prediction is generated deductively. Imagine that I am waiting for a
colleague of mine. Since I have seen him come to work, I say to myself: if he has come in,
then I will see his laptop on his desk in his office. This is a different type of prediction.
My prediction is generated by deriving the logical consequences that follow from stating
a hypothesis. This is an important tool for scientists, because this type of prediction allows
us to put our hypotheses to the test. Interestingly, it does not “predict” a future event; it
just informs us about the epistemic value of a hypothesis. If we are right, we will see this
happening; if we are wrong, we won't. In this sense such a prediction informs us about the
truthfulness of our ideas, not about the future. It has, in other words, an epistemic function.

Abductive

There is a third type of prediction, which is different from the previous two. Like the first type,
it tries to say something about the future, but unlike the first one it is not generated simply by
projecting past occurrences onto the future. For example, if I see that a student is not particularly
engaged, often forgets to attend classes, and shows very little interest in what he is studying, what I
may predict is that this person is going to drop out. And I do that abductively. Indeed, my prediction is
still generated by looking at what happened in the past. In other words, we need to understand how things
“work”. In the example, we need to have some sort of understanding as to why students drop out.

Types of Prediction Algorithms:

 Linear Regression
 Logistic Regression
 Linear Discriminant Analysis
 Classification and Regression Trees
 Naive Bayes
 K-Nearest Neighbors
 Learning Vector Quantization
 Support Vector Machines

Linear Regression
In linear regression we construct a model (equation) based on our data. We can then use this
model to make predictions about one variable based on particular values of the other
variable. The variable we are making predictions about is called the dependent variable (also
commonly referred to as: y, the response variable, or the criterion variable). The variable that
we are using to make these predictions is called the independent variable (also commonly
referred to as: x, the explanatory variable, or the predictor variable). This is, in fact, the line
that we were eyeballing in the opening section of the module. Using linear regression we will
be able to calculate the best fitting line, called the regression line.
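The idea of the best-fitting regression line can be illustrated with a short sketch. The snippet below is not part of the project code; it uses invented distance and fare values and assumes Python with NumPy and scikit-learn, purely to show how a fitted line is used to make a prediction.

# Illustrative sketch only: fitting a regression line and predicting from it.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data: trip distance (km) as the independent variable x,
# fare (currency units) as the dependent variable y.
x = np.array([[2.0], [5.0], [7.5], [10.0], [12.5]])
y = np.array([60.0, 120.0, 170.0, 220.0, 270.0])

model = LinearRegression().fit(x, y)       # computes the best-fitting line
print(model.intercept_, model.coef_[0])    # intercept and slope of the line
print(model.predict([[8.0]]))              # predicted fare for an 8 km trip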
Logistic Regression

Logistic regression is a classification algorithm used to assign observations to a discrete
set of classes. Unlike linear regression, which outputs continuous number values, logistic
regression transforms its output using the logistic sigmoid function to return a probability
value which can then be mapped to two or more discrete classes.
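As a small, hedged illustration (not part of the project code), the sketch below shows logistic regression returning a probability that is then mapped to a discrete class; the trip counts and shift labels are invented and scikit-learn is assumed.

# Illustrative sketch only: logistic regression maps inputs through the sigmoid
# to a probability, which is then mapped to a discrete class.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: x = trips per week, y = 1 if the driver chose a double shift.
x = np.array([[10], [15], [22], [30], [38], [45]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = LogisticRegression().fit(x, y)
print(clf.predict_proba([[28]]))   # probability of each class
print(clf.predict([[28]]))         # mapped to a discrete class (0 or 1)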
Linear Discriminant Analysis

Logistic regression is a classification algorithm traditionally limited to two-class
classification problems. If you have more than two classes, then Linear Discriminant
Analysis is the preferred linear classification technique. This section introduces the
Linear Discriminant Analysis (LDA) algorithm for classification predictive modeling
problems, covering:

 The representation of the model that is learned from data and can be saved to file.
 How the model is estimated from your data.
 How to make predictions from a learned LDA model.
 How to prepare your data to get the most from the LDA model.
This is intended for readers interested in applied machine learning and in how the models work
and how to use them well. As such, no background in statistics or linear algebra is required,
although it does help to know about the mean and variance of a distribution.
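A minimal sketch of LDA as a linear classifier for more than two classes is shown below; it is only an illustration with invented two-feature data and assumes scikit-learn is available.

# Illustrative sketch only: LDA handling a three-class problem.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.array([[1.0, 2.1], [1.2, 1.9], [5.0, 5.2], [5.1, 4.8], [9.0, 0.8], [8.8, 1.1]])
y = np.array([0, 0, 1, 1, 2, 2])

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[5.0, 5.0]]))        # most likely class
print(lda.predict_proba([[5.0, 5.0]]))  # per-class probabilities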
Classification and Regression Trees

Decision Trees are an important type of algorithm for predictive modeling in machine
learning. The classical decision tree algorithms have been around for decades, and
modern variations like random forest are among the most powerful techniques available.

In this section you will discover the humble decision tree algorithm, known by its more
modern name CART, which stands for Classification and Regression Trees. The key points are:
 The many names used to describe the CART algorithm for machine learning.
 The representation used by learned CART models that are actually stored on disk.

 How a CART model can be learned from training data.


 How a learned CART model can be used to make predictions on unseen data.
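A small, hedged sketch of a CART model follows; the distance/peak-hour rows and taxi-type labels are invented and scikit-learn is assumed — it only illustrates how a learned tree is inspected and used on unseen data.

# Illustrative sketch only: a small CART (decision tree) classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented rows: [distance_km, is_peak_hour] -> taxi type (0 = general, 1 = airport).
X = np.array([[3, 0], [4, 1], [25, 0], [30, 1], [5, 1], [28, 0]])
y = np.array([0, 0, 1, 1, 0, 1])

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["distance_km", "is_peak_hour"]))
print(tree.predict([[27, 1]]))   # prediction on unseen data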

Naive Bayes

Naive Bayes is a simple but surprisingly powerful algorithm for predictive modeling.
The points covered here are:
 The representation used by naive Bayes, which is what is actually stored when a model is written to a
file.
 How a learned model can be used to make predictions.
 How you can learn a naive Bayes model from training data.
 How to best prepare your data for the naive Bayes algorithm.
 Where to go for more information on naive Bayes.
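The probabilistic idea behind naive Bayes can be sketched as below; this is not the project's code, the trip numbers are invented, and scikit-learn's GaussianNB is assumed.

# Illustrative sketch only: naive Bayes picks the class with the highest
# probability given the input, assuming the features are independent.
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Invented features: [distance_km, duration_minutes] with a trip-length label.
X = np.array([[1.0, 20], [1.2, 22], [3.0, 60], [3.2, 65], [0.9, 18], [3.1, 62]])
y = np.array(["short", "short", "long", "long", "short", "long"])

nb = GaussianNB().fit(X, y)
print(nb.predict([[2.8, 58]]))        # most probable class
print(nb.predict_proba([[2.8, 58]]))  # per-class probabilities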

K-Nearest Neighbors

The reason much importance is given to classification algorithms, and not as much to
regression algorithms, is that a lot of the problems faced in our daily routine belong to
the classification task. For example, we would like to know whether a tumor is malignant
or benign, or whether the product we sold was received positively or negatively by the
consumers. K-nearest neighbors is another classification algorithm, and a very simple one
too. If you are reading this after the K-means algorithm, don't get confused, as the two
belong to different domains of learning: K-means is a clustering/unsupervised algorithm,
whereas K-nearest neighbors is a classification/supervised learning algorithm.
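A brief sketch of k-nearest neighbors follows (invented points, scikit-learn assumed, not part of the project code).

# Illustrative sketch only: KNN classifies a point by the majority class
# among its k closest training points.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[1, 1], [1, 2], [2, 1], [8, 8], [8, 9], [9, 8]])
y = np.array([0, 0, 0, 1, 1, 1])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[2, 2], [7, 9]]))   # expected: class 0, then class 1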

Learning Vector Quantization


LVQ was developed as, and is best understood as, a classification algorithm. It supports both
binary (two-class) and multi-class classification problems. A codebook vector is a list of
numbers that has the same input and output attributes as your training data. For example, if
your problem is a binary classification with classes 0 and 1, and the inputs are width, length
and height, then a codebook vector would be comprised of all four attributes: width, length,
height and class.
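The codebook-vector idea can be sketched with a tiny LVQ1-style training loop; this is only an illustration over invented data (width, length, height plus a class), written in plain NumPy rather than any particular library's LVQ implementation.

# Illustrative sketch only: a simple LVQ1 training scheme with NumPy.
# Codebook vectors share the input attributes plus a class label and are nudged
# toward matching training rows and away from mismatching ones.
import numpy as np

X = np.array([[1.0, 2.0, 0.5], [1.2, 1.8, 0.6], [0.9, 2.2, 0.4],
              [4.0, 5.0, 1.5], [4.2, 4.8, 1.4], [3.9, 5.1, 1.6]])
y = np.array([0, 0, 0, 1, 1, 1])

# One codebook vector per class, initialised from the first row of that class.
codebooks = np.array([X[y == c][0] for c in (0, 1)])
cb_class = np.array([0, 1])

base_rate = 0.3
for epoch in range(10):
    rate = base_rate * (1 - epoch / 10)          # decaying learning rate
    for xi, yi in zip(X, y):
        best = np.argmin(np.linalg.norm(codebooks - xi, axis=1))   # best matching unit
        step = rate * (xi - codebooks[best])
        codebooks[best] += step if cb_class[best] == yi else -step

# Classify a new point by its nearest codebook vector.
test = np.array([4.1, 4.9, 1.5])
print(cb_class[np.argmin(np.linalg.norm(codebooks - test, axis=1))])   # expected: 1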


Support Vector Machines

The basics of Support Vector Machines and how they work are best understood with a
simple example. Let's imagine we have two tags, red and blue, and our data has two
features, x and y. We want a classifier that, given a pair of (x, y) coordinates, outputs
whether it is red or blue. A support vector machine takes these data points and outputs the
hyperplane (which in two dimensions is simply a line) that best separates the tags. This
line is the decision boundary: anything that falls to one side of it we will classify as blue,
and anything that falls to the other side as red.
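The red/blue example above can be sketched directly (invented coordinates, scikit-learn assumed; not part of the project code).

# Illustrative sketch only: a linear SVM separating two made-up "red"/"blue"
# tags from (x, y) coordinates.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 1], [1, 2], [6, 6], [7, 6], [6, 7]])
labels = np.array(["red", "red", "red", "blue", "blue", "blue"])

svm = SVC(kernel="linear").fit(X, labels)
print(svm.predict([[2, 2], [6, 5]]))   # which side of the decision boundary each point falls on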

1.2 Problem Statement

The data contains information about the drivers: how much each driver drove in a particular
region, how much money he earned, the type of vehicle he is using, the company he is
working for, whether the vehicle is petrol or diesel driven, the shift he works (single shift
or double shift), the distance covered by him, the charges paid by the customer, and the
driver's income. The data covers three years, 2016-2018, and also indicates in which periods
the driver's performance was good and whether the customer gave any extra amount to the driver.
1.3 Existing System

Taxis have become part of every individual's life in the metropolitan cities. But
when it comes to booking a cab and getting a cab, things aren't as easy as they appear to
be. Most of the time, the cabs being booked are either not available, or too far from the
pickup location, or the type of car required is not available. All of this creates problems for
the customer. In the existing taxi booking system, no prediction algorithm is used; the
booking is most of the time done randomly, because of which customers are not satisfied
with the cab services.

From the driver's point of view, sometimes a few drivers get more rides to take and sometimes
a few drivers get fewer rides, because of which there is an uneven distribution of shifts and
the burden of rides falls on a few drivers. No solution for this has been provided by any of
the existing systems.

1.4 Objective of the Work

The main objective of the project on “The uses of Microsoft Power BI tool in India Power” is
to collect data about the organization from various data sources, connect them to Power BI
to create rich reports, and then create dashboards. The purpose of the project is to create and
share real-time insights for an organization, so that business users will find it easy to
understand the status of the organization over time. Real-time dashboards help management
take accurate decisions as and when required. Management can also plan future initiatives
based on present trends.
1.5 Proposed System with Methodology
This chapter discusses the research methodology that will be used to implement this study.
It includes research design, target vehicle and customer, data collection and analysis of
data collection tools.
Methodology: Waterfall

The waterfall model is a popular version of the systems development life cycle for software
engineering. It comprises a series of very definite phases; each one is intended to be started
sequentially only after the last has been completed, with one or more tangible deliverables
produced at the end of each phase. It places emphasis on documentation as well as source code.

In less thoroughly designed and documented methodologies, knowledge is lost if system
developers leave before the project is completed, and it may be difficult for a project to
recover from the loss. The waterfall model describes a development method that is linear
and sequential.

Some waterfall proponents prefer the waterfall model for its simple approach and because it is
more disciplined. The waterfall model provides a structured approach: the model itself
progresses linearly through discrete, easily understandable and explainable phases and thus is
easy to understand.


Fig 1.5 Waterfall Model

Feedback loops exist between the phases, so that as new information is uncovered or
problems are discovered, it is possible to “go back” a phase and make appropriate
modifications. Progress flows from one stage to the next, much like the waterfall that gives
the model its name.

The following are the reasons we chose the Waterfall methodology:

• The user understands the software requirements well.


• Requirements are well documented, clear and fixed.
• Project definition is stable.
• The project is short.
• There are no ambiguous requirements.
• Technology is understood and is not dynamic.

1.6 Feasibility Study

The feasibility of a project can be ascertained in terms of technical factors, economic
factors, or both. A feasibility study is documented with a report showing all the
ramifications of the project.
Technical feasibility:

Technical feasibility refers to the ability of the process to take advantage of the current
state of the technology in pursuing further improvement. The technical capability of the
personnel as well as the capability of the available technology should be considered.

Technology transfer between geographical areas and cultures needs to be analyzed to
understand productivity loss (or gain) due to differences (see Cultural Feasibility). On these
grounds, our project is technically feasible.

Economic Feasibility:

This involves the feasibility of the proposed project to generate economic benefits. A
benefit-cost analysis and a break even analysis are important aspects of evaluating the
economic feasibility of new industrial projects. The tangible and intangible aspects of a
project should be translated into economic terms to facilitate a consistent basis for
evaluation.
Financial Feasibility:

Financial feasibility should be distinguished from economic feasibility. Financial feasibility


involves the capability of the project organization to raise the appropriate funds needed to
implement the proposed project. Project financing can be a major obstacle in large multi-
party projects because of the level of capital required. Loan availability, credit worthiness,
equity, and loan schedule are important aspects of financial feasibility analysis.

Legal Feasibility:
This assessment investigates whether any aspect of the proposed project conflicts with legal
requirements like zoning laws, data protection acts or social media laws. Let’s say an
organization wants to construct a new office building in a specific location. A feasibility
study might reveal the organization’s ideal location isn’t zoned for that type of business.
That organization has just saved considerable time and effort by learning that their project
was not feasible right from the beginning.

Scheduling Feasibility :
This assessment is the most important for project success; after all, a project will fail if not
completed on time. In scheduling feasibility, an organization estimates how much time the
project will take to complete. When these areas have all been examined, the feasibility
analysis helps identify any constraints the proposed project may face.

Fig 1.6 Feasibility Study

CHAPTER 2

REVIEW OF LITERATURE

In the most recent decade, GPS-location systems have attracted the attention of both
researchers and companies because of the new kind of information they make available. In
particular, the pervasive nature of these location-aware sensors (portable; available everywhere)
and of the information transmitted (a stream) increases the challenge. Moreover, they generally
track human behavior (individual or in groups) and can be used collectively to reveal mobility
patterns. Train, bus, and taxi networks are already actively exploring these traces. Gonzalez et al.
revealed the spatiotemporal regularity of human mobility, which shows up in various activities
such as power load or road traffic flow. Recently, several works have used historical GPS data
to analyze the spatial structure of passenger demand. Deng et al. mined this kind of data to build
and analyze an origin-destination network in the city of Shanghai, China. Liu et al. use a 3D
clustering framework to analyze the spatial patterns of mobility intelligence for both top and
ordinary drivers. Yue et al. find the Level of Attractiveness (LOA) of urban spatiotemporal
clusters. The works focused on passenger/taxi-finding strategies normally use data from urban
areas where demand is largely greater than supply. An innovative study was presented by Bin et al.
Their objective was to validate the triplet Time-Location-Strategy as the key features to build
a good passenger-finding strategy. They used an L1-Norm SVM as a feature-selection tool to
discover both efficient and inefficient passenger-finding strategies in a large city in China.
They made an empirical investigation of the impact of the chosen features, and their conclusions
were validated by the feature-selection mechanism. Lee et al. built a framework to describe the
spatiotemporal structure of the passenger demand on Jeju Island, South Korea. A user-centered
study was developed by Phithakkitnukoon et al.: they aimed to predict where the vacant taxis will
be over space and time to help customers in their daily booking and planning. Ge et al. gave a
cost-effective route recommendation model which was able to recommend sequences of pickup locations.

Yuan et al. presented a very complete work containing methods for:
 How to divide the urban region into pickup zones using spatial clustering.
 How a traveller can find a taxi.
 Which direction is the best to pick up the next traveller.

Although their results are promising, the two approaches are focused on improving the route
of a single driver, disregarding the current network status (for example, the position of the
remaining drivers). Few works exist regarding the demand prediction problem. Kaltenbrunner
et al. identified the geographic and temporal mobility patterns from data acquired from a bike-sharing
network running in Barcelona. Their work also directly addresses the prediction problem using an
ARMA (Auto Regressive Moving Average) model; their objective was to estimate the number of bikes
at a station in order to improve the stations' spatial arrangement. Furthermore, they used ARIMA
to estimate the pickup quantity at these hotspots over periods of one hour. Thirdly, they introduced
an improved ARIMA dependent on both time of day and day type. Finally, they proposed a recommendation
framework based on the following factors:
1) The number of cabs already located at every hotspot.
2) The distance, in time, from the driver's location to the hotspot.
3) The prediction of the number of services to be demanded at every single one of them.
Regardless of their good results, this approach has three weak points when compared against our own:
1) It only uses the most recent historical data, discarding the mid- and long-term memory of the system.
2) Their test bed uses minimum aggregation periods of one hour over offline recorded data (the
next-value prediction task on a time series gets easier as the aggregation period increases),
while we use shorter periods of 30 minutes.
3) The paper does not clearly describe how they update both the ARIMA model and the weights used by it.
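To make the ARIMA-based demand forecasting discussed above concrete, the hedged sketch below fits an ARIMA model to an invented series of 30-minute pickup counts; it is not taken from the cited papers and assumes the statsmodels package is available.

# Illustrative sketch only: forecasting the next few 30-minute pickup counts.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

pickups = np.array([12, 15, 14, 18, 21, 19, 23, 26, 24, 28,
                    31, 29, 33, 36, 34, 38, 41, 39, 43, 46], dtype=float)

model = ARIMA(pickups, order=(1, 1, 1)).fit()   # (p, d, q) chosen arbitrarily here
print(model.forecast(steps=3))                  # demand for the next three half-hours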

Power BI

Power BI, from Microsoft, is a suite of business analytics tools that is used to analyze data
and share insights in the form of reports and dashboards. User data in various forms –
spreadsheets, text files, databases, etc. form the input for Power BI. Datasets are formed by
transforming the data provided by the users. Data transformations are decided by the users.
This step is used to remove errors and redundant data, correct formatting, and prepare data
for further analysis by organizing them into suitable normalized forms, and so on. Based on
the report and dashboard being developed, filtering the data to only include the relevant bits
enables one to focus on only the data that matters. Once a dataset is ready, reports can be
created from it by adding from a choice of multiple visualization elements. Visualization
elements in Power BI range from showing a single number to a gradient-colored map. These
visuals help present data in a way that provides context and insights. Filters can be applied on
the reports so that relevant data is surfaced for users interested in analyzing the data. Such
reports can be built manually or by using the Quick Insights feature, which uses various
algorithms to analyze the data and returns a set of reports that it builds automatically. Once
reports are published, elements of the report or the whole report can be included in
dashboards. Power BI dashboards show a 360 degree view of the data by enabling users to
keep their most important metrics in one place. They also allow users to interact with the
reports for filtering or querying the data, even allowing natural language queries.

It is further possible to constantly update the report and dashboard data, in real time, and
make it available on all devices such as PCs and smartphones. Power BI is a Data
Visualization and Business Intelligence tool that converts data from different data sources
into interactive dashboards and BI reports. The Power BI suite provides multiple pieces of
software, connectors, and services: Power BI Desktop, the Power BI service based on SaaS,
and mobile Power BI apps available for different platforms. These services are used by
business users to consume data and build BI reports. The Power BI Desktop app is used to
create reports, while the Power BI service (Software as a Service, SaaS) is used to publish
the reports, and the Power BI mobile app is used to view the reports and dashboards.

Power Query

Power Query is a data transformation and mash-up engine. Power Query can be
downloaded as an add-in for Excel or be used as part of Power BI Desktop. Power
Query also uses a powerful formula language, called M, as the code behind it. M is much more
powerful than the GUI built for it: there is much functionality in M that cannot be accessed
through the graphical user interface. Power Query and M allow complex transformations to be
applied to the data easily. The screenshot below is a view of the Power Query editor and some
of its transformations.

Fig 2.1 Representing a data in the form of columns in the Power BI


Power Pivot
Power Pivot is a data modeling engine which works on the xVelocity in-memory tabular
engine. Power Pivot uses the Data Analysis eXpressions (DAX) language for building
measures and calculated columns. DAX is a powerful functional language, and there are
plenty of functions for it in the library. The screenshot below shows the relationship diagram
of Power Pivot.

Fig 2.2 Representing the data modeling in the Power BI

Power View

The main data visualization component of Power BI is Power View. Power View is an
interactive data visualization tool that can connect to data sources and fetch the metadata to be
used for data analysis. Power View has many charts for visualization in its list. Power
View gives you the ability to filter data for each data visualization element or for the entire
report. You can use slicers for better slicing and dicing of the data. Power View reports are
interactive; the user can highlight part of the data, and the different elements in Power View
talk with each other. There are many configurations in Power View visualization.

Fig 2.3 Representing the data in the form of Power View

Power Map

Power Map is for visualizing geospatial information in 3D mode. When the visualization
renders in 3D mode, it gives you another dimension in the visualization: you can visualize a
measure as the height of a column in 3D, and another measure as a heat map view. You can
highlight data based on geographical location such as country, city, state, and street address.
Power Map works with Bing Maps to get the best visualization based on geographical
information: either latitude and longitude, or country, state, city, and street address. Power
Map is an add-in for Excel 2013, and is embedded in Excel 2016.

Fig 2.4 Representing the data in the form of Power Map

Power BI Desktop

Power BI Desktop is the newest component in the Power BI suite. Power BI Desktop is a
holistic development tool for Power Query, Power Pivot and Power View. With Power BI
Desktop you have everything under the same solution, so it is easier to develop a BI
and data analysis experience with it. Power BI Desktop is updated frequently and
regularly. This product was in preview mode for a period of time under the name
Power BI Designer. There are too many great things about Power BI Desktop to fit in a
small paragraph here. The screenshot below shows a view of this tool.

Fig 2.5 Representing the data in the form of Power BI Desktop

Power BI Website

A Power BI solution can be published to the Power BI website. On the Power BI website the data
source can be scheduled to refresh (depending on the source and whether it supports
scheduled data refresh). Dashboards can be created for the report, and they can be
shared with others. The Power BI website even gives you the ability to slice and dice the data
online without requiring any other tools, just a simple web browser. You can build reports
and visualizations directly on the Power BI site as well. The screenshot below shows a view of
the Power BI site and dashboards built there.

Fig 2.6 Representing the data in the form of Power BI Website

Power Q&A

Power Q&A is a natural language engine for questions and answers to your data model.
Onceyou’ve built your data model and deployed that into Power BI website, then you or
your users can ask questions and get answers easily. There are some tips and tricks about
how to build your data model so it can answer questions in the best way which will be
covered in future chapters. Power Q&A and works with Power View for the data
visualizations. So users can simply ask questions such as: Number of Customers by
Country, and Power Q&A will answer their question in a map view with numbers as
bubbles, Fantastic, isn’t it?

Fig 2.7 Representing the data in the form of Power Q&A

Power BI Mobile Apps

There are mobile apps for the three main mobile OS providers: Android, Apple, and
Windows Phone. These apps give you an interactive view of the dashboards and reports in
the Power BI site; you can even share them from the mobile app. You can highlight part of
a report, write a note on it and share it with others.

Fig 2.8 Representing the data in the form of Power BI Mobile App

CHAPTER 3

SYSTEM CONFIGURATION

Hardware Requirements

 Processor : Pentium – 4
 RAM : 4 GB
 Hard Disk : 500 MB
 Regular PC peripherals
 Internet connection

Software Requirement

 FRONTEND : R Studios, Power BI tool


 BACKEND : MySQL 6.3
 OPERATING SYSTEM : Windows 8 or above


CHAPTER 4

MODULE DESCRIPTION
The main aim of this project is to make predictions based on the existing data about the
company, the cab drivers, etc. Based on these predictions, the modules have been
divided into four types. They are:
 Company prediction
 Types of taxies
 Shift predictions
 Regions Predictions

COMPANY PREDICTIONS

All the data related to the companies that ran the taxi business in the past three financial
years is taken. The data is examined properly and then the Naïve Bayes algorithm is applied
to it. Naïve Bayes works on the probability of occurrence of an event, so based on this
strategy the company that has the maximum number of taxi rides is predicted. The predictions
can be customized according to the user's requirements, for example year-wise, by the car
that has been used, or by the month of a particular year.
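As a hedged illustration of the idea described above (the project itself uses R Studio and Power BI, and the column names and rows below are invented, not the project's real schema), a categorical naive Bayes classifier could be applied to trip attributes roughly as follows.

# Illustrative sketch only: naive Bayes over categorical trip attributes,
# predicting the company, with hypothetical column names and invented rows.
import pandas as pd
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder

trips = pd.DataFrame({
    "taxi_type": ["general", "airport", "general", "family", "airport", "general"],
    "shift":     ["single", "double", "single", "double", "single", "double"],
    "region":    ["Abu Dhabi", "Al Ain", "Abu Dhabi", "Al Ain", "Abu Dhabi", "Al Ain"],
    "company":   ["Company A", "Company B", "Company A", "Company B", "Company A", "Company B"],
})

enc = OrdinalEncoder()
X = enc.fit_transform(trips[["taxi_type", "shift", "region"]])

nb = CategoricalNB().fit(X, trips["company"])
new_trip = enc.transform([["general", "double", "Abu Dhabi"]])
print(nb.predict(new_trip))   # most probable company for this kind of trip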

TYPES OF TAXIES

There are different types of taxis used by the different companies that run taxi
businesses. The types are 7-seater, airport, family, general, smart taxi, special aid, etc. The
data from the last three years is taken. The data is examined properly and then the Naïve
Bayes algorithm is applied to it. Naïve Bayes works on the probability of occurrence of an
event, so based on the algorithm the type of taxi that is best to book according to the user's
requirement is predicted.

SHIFTS PREDICTIONS

There are two types of shifts, single shift and double shift, and each driver can choose
either of the shifts. All the data related to the shifts in the taxi business over the past
three financial years is taken. The data is examined properly and then the Naïve Bayes
algorithm is applied to it. Naïve Bayes works on the probability of occurrence of an event.
This prediction is used to decide what the probability is of a driver taking a single shift or a
double shift on a particular day, which helps in reducing the burden on any particular driver.

REGIONS PREDICTIONS

The data that has been taken is from 9 regions of Saudi, which include Abu Dhabi, Airport,
Al Ain, etc. All the data related to the regions in the taxi business over the past three
financial years is taken. The data is examined properly and then the Naïve Bayes algorithm
is applied to it. Naïve Bayes works on the probability of occurrence of an event. The
prediction is done based on the number of taxis booked in a particular region for a particular
company. These predictions can be customized according to requirements such as a particular
region, a particular taxi, a particular company, etc.


CHAPTER 5

SYSTEM DESIGN

Fig 5.1 Representing the flowchart of the project


Fig 5.2 Representing the Data Flow Diagram of the project

CHAPTER 6

SYSTEM IMPLEMENTATION
6.1 Screenshots

Fig 6.1 Representing the Last 12 Months Trend By Taxi Type


Fig 6.2 Representing the revenue comparison between years by month franchise


Fig 6.3 Representing the average of revenue by monthly per taxi


Fig 6.4 Representing the current month trip count

Fig 6.5 Representing top 10 drivers’ current year vs last year


Fig 6.6 Representing the region wise revenue in the form of power map

Fig 6.7 Representing the revenue dashboard


Fig 6.8 Representing the revenue detailed report


Fig 6.9 Representing revenue detailed report with specifying the company
name

Fig 6.10 Representing revenue detailed report with specifying company name and taxi

CHAPTER 7

SYSTEM TESTING
TEST PLAN

System testing is defined as an activity to check whether the actual results match the expected
results and to ensure that the software system is defect-free. It involves the execution of a
software component or system component to evaluate one or more properties of interest.
Software testing also identifies errors, gaps or missing requirements with respect to the actual
requirements. It can be done either manually or using automated tools. Some prefer to describe
software testing in terms of White Box and Black Box Testing. In simple terms, software testing
means Verification of the Application Under Test (AUT).

A test plan describes the objectives, scope, approach, and focus of a project's testing
effort. The process of preparing a test plan is a useful way to think through the effort needed
to validate the acceptability of a project. A complete test plan helps people understand the
project validation.

Fig 7.1 Test Plan

Software Testing

Software testing is an activity to check whether the actual results match the expected results
and to ensure that the software system is defect-free. It involves executing a software component
or system component to evaluate one or more properties of interest, and it also identifies errors,
gaps or missing requirements with respect to the actual requirements. It can be done either
manually or using automated tools, and is commonly divided into White Box and Black Box Testing.
In simple terms, software testing means Verification of the Application Under Test (AUT).
Importance of Software Testing -
Testing is important because software bugs can be expensive or even dangerous.
Software bugs can potentially cause monetary and human loss, and history is full of such examples.
 In April 2015, the Bloomberg terminal in London crashed due to a software glitch that
affected more than 300,000 traders on financial markets. It forced the government to
postpone a 3bn-pound debt sale.
 Nissan had to recall more than 1 million cars from the market due to a software failure
in the airbag sensor detectors. Two accidents have been reported because of this
software failure.
 Starbucks was forced to close about 60 percent of its stores in the U.S. and Canada due
to a software failure in its POS system. At one point the stores served coffee for free as
they were unable to process the transactions.
 Some of Amazon's third-party retailers saw their product prices reduced to 1p due to a
software glitch. They were left with heavy losses.
 A vulnerability in Windows 10 enabled users to escape from security sandboxes through
a flaw in the win32k system.
 In 2015 the military aircraft F-35 fell victim to a software bug, making it unable to
detect targets correctly.
 A China Airlines Airbus A300 crashed due to a software bug on April 26, 1994, killing
264 innocent lives.

 In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to a software
bug and delivered lethal radiation doses to patients, leaving 3 people dead and critically
injuring 3 others.
 In April of 1999, a software bug caused the failure of a $1.2 billion military satellite
launch, the costliest accident in history.
 In May of 1996, a software bug caused the bank accounts of 823 customers of a major
U.S. bank to be credited with 920 million US dollars.

Kinds of Software Testing


Testing is generally classified into three categories:
 Functional Testing
 Non-Functional Testing or Performance Testing
 Maintenance (Regression and Maintenance)

Black Box Testing

Black box testing is defined as a testing technique in which the functionality of the
Application Under Test (AUT) is tested without looking at the internal code structure, the
implementation details, or knowledge of the internal paths of the software. This kind of
testing is based entirely on the software requirements and specifications.

In black box testing we simply focus on the inputs and outputs of the software system
without worrying about the internal workings of the software program. The black box can be
any software system you want to test: for example, an operating system like Windows, a
website like Google, a database like Oracle, or even your own custom application.

Here are the generic steps followed to carry out any kind of black box testing:
 Initially, the requirements and specifications of the system are examined.
 The tester chooses valid inputs (positive test scenario) to check whether the SUT
processes them correctly. Additionally, some invalid inputs (negative test scenario)
are chosen to verify that the SUT can detect them.
 The tester determines the expected outputs for all of those inputs.
 The software tester constructs test cases with the selected inputs.
 The test cases are executed.
 The software tester compares the actual outputs with the expected outputs.
 Defects, if any, are fixed and re-tested.
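The black-box steps above can be illustrated with a tiny, hedged sketch: a hypothetical fare_for_trip function stands in for the system under test, and valid and invalid inputs are checked against expected outputs using Python's unittest module, without looking at the function's internals.

# Illustrative sketch only: black-box style test cases for a hypothetical
# fare_for_trip(distance_km) function.
import unittest

def fare_for_trip(distance_km):
    """Hypothetical system under test: base fare plus a per-kilometre rate."""
    if distance_km < 0:
        raise ValueError("distance must be non-negative")
    return 50 + 12 * distance_km

class FareBlackBoxTests(unittest.TestCase):
    def test_valid_input(self):                  # positive test scenario
        self.assertEqual(fare_for_trip(10), 170)

    def test_invalid_input_is_detected(self):    # negative test scenario
        with self.assertRaises(ValueError):
            fare_for_trip(-5)

if __name__ == "__main__":
    unittest.main()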


Kinds of Black Box Testing

There are many kinds of black box testing, but the following are the prominent ones -
 Functional testing - This black box testing type is related to the functional requirements
of a system; it is done by software testers.
 Non-functional testing - This kind of black box testing is not related to testing of specific
functionality, but to non-functional requirements such as performance, scalability and
usability.
 Regression testing - Regression testing is done after code fixes, upgrades or any other
system maintenance to check that the new code has not affected the existing code.

Tools used for Black Box Testing:

The tools used for black box testing largely depend on the kind of black box testing you are doing.
 For Functional/Regression Tests you can use - QTP, Selenium
 For Non-Functional Tests, you can use - LoadRunner, JMeter

Comparison of Black box and White box testing

Black Box Testing:
- The main focus of black box testing is on the validation of your functional requirements.
- Black box testing gives abstraction from code and focuses the testing effort on the behavior
of the software system.
- Black box testing facilitates testing communication among the modules.

White Box Testing:
- White Box Testing (Unit Testing) validates the internal structure and working of your software code.
- White box testing does not facilitate testing communication among the modules.

Table 7.2 Black box vs White box testing

Functional Testing

Functional testing is defined as a type of testing which verifies that each function of the
software application works in conformance with the requirement specification. This testing
mainly involves black box testing and is not concerned with the source code of the
application. Every single piece of functionality of the system is tested by giving suitable input,
verifying the output and comparing the actual results with the expected results. This
testing involves checking of the User Interface, APIs, Database, security, client/server
applications and functionality of the Application Under Test. The testing can be done
either manually or using automation.

What do you test in Functional Testing?

The prime goal of functional testing is checking the functionalities of the software
system. It mainly focuses on -
 Mainline functions: testing the main functions of an application.
 Basic usability: fundamental ease-of-use testing of the system; it checks whether a user
can freely navigate through the screens without any difficulty.
 Accessibility: checks the accessibility of the system for the user.
 Error conditions: use of testing techniques to check for error conditions.

Fig 7.3 Functional Testing


White Box Testing

White Box Testing (WBT) is also known as Code-Based Testing or Structural Testing. It is the software testing method in which the internal structure of the software is known to the tester. Testing is based on an analysis of the internal structure of the component or system: test cases are derived from the internal structure using criteria such as code coverage, branch coverage, path coverage and condition coverage.

White box testing means testing by looking at the internal structure of the code; once you are completely aware of the internal structure of the code, you can run your test cases and check whether the system meets the requirements mentioned in the specification document. Based on the designed test cases, the tester exercises the system by giving input and comparing the expected outputs with the actual outputs. In this testing method the tester needs to go beyond the user interface to verify the correctness of the system.

In white box testing the following steps are executed to test the software code (a minimal code-based example is sketched after this list):

Verify the security holes in the code.
Verify the broken or incomplete paths in the code.
Verify the flow of structure mentioned in the specification document.
Verify the expected outputs.
Verify all the conditional loops in the code to check the complete functionality of the application.
Verify the code line by line or section by section to cover 100% of the code in testing.
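As a minimal sketch of code-based (white box) testing, the following Python unittest example exercises both branches of a small, hypothetical discount function so that branch coverage is complete; the function and its values are illustrative only and are not part of the project code.

import unittest

def apply_discount(amount, is_member):
    # Hypothetical function under test: members get a 10% discount.
    if is_member:                      # branch 1
        return round(amount * 0.9, 2)
    return amount                      # branch 2

class ApplyDiscountTest(unittest.TestCase):
    # White box tests: one test case per branch, giving full branch coverage.
    def test_member_branch(self):
        self.assertEqual(apply_discount(100.0, True), 90.0)

    def test_non_member_branch(self):
        self.assertEqual(apply_discount(100.0, False), 100.0)

if __name__ == "__main__":
    unittest.main()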


MAINTENANCE
Software maintenance is a widely accepted part of the SDLC nowadays. It stands for all the modifications and updates done after the delivery of the software product. There are a number of reasons why modifications are required; some of them are briefly mentioned below.

MARKET CONDITIONS

Policies which change over time, such as taxation, and newly introduced constraints, such as how to maintain bookkeeping, may trigger the need for modification.

CLIENT REQUIREMENTS

Over time, the customer may ask for new features or functions in the software.

HOST MODIFICATIONS

If any of the hardware and/or platform (such as the operating system) of the target host changes, software changes are needed to maintain adaptability.

ORGANIZATION CHANGES

If there is any business-level change at the client end, such as a reduction in organizational strength, acquisition of another company, or the organization venturing into a new business, the need to modify the original software may arise.

TYPES OF MAINTENANCE
Over a software's lifetime, the type of maintenance may vary based on its nature. It may be just a routine maintenance task, such as fixing a bug discovered by a user, or it may be a large event in itself depending on the size or nature of the maintenance. The following are some types of maintenance, based on their characteristics:

CORRECTIVE MAINTENANCE

This includes modifications and updates done in order to correct or fix problems which are either discovered by users or concluded from user error reports.

ADAPTIVE MAINTENANCE

This includes modifications and updates applied to keep the software product up to date and tuned to the ever-changing world of technology and business environments.

PERFECTIVE MAINTENANCE

This includes modifications and updates done in order to keep the software usable over a long period of time. It includes adding new features and new user requirements to refine the software and improve its reliability and performance.

PREVENTIVE MAINTENANCE

This includes modifications and updates done to prevent future problems in the software. It aims to address problems which are not significant at the moment but may cause serious issues in the future.

Project maintenance is the process of tracking and enabling project activities in accordance with the project plan and is an essential factor in overall project success. After spending so much time in the planning phase, many project managers have a tendency to take a step back once the other members of the project team start their work, but experienced project managers know that project monitoring is every bit as important as project planning.

Long-running projects, in particular, require a steady commitment to project maintenance, simply because they tend to have a much longer duration than other undertakings. No matter what size of project you are managing today, an appreciation for project maintenance can only improve your chances of success and help you keep your eye on the prize even during the longest-running projects.

CHAPTER 8

RESULTS AND DISCUSSION

8.1 CONCLUSION

This work examined the architectural implications of the many options available for data refresh, how to manage security, and the options to customize Power BI and/or integrate it with existing applications:

 Available options to update data with scheduled refresh or live connections


 Integration of Power BI with Microsoft Office
 Control data access for specific users with row-level security.
 Possible extensibility options using the Power BI REST API

Power BI is an open ecosystem that is constantly growing, thanks to the features added by
Microsoft and those additional options provided by third-party groups, which use the
same API you can use to customize and extend Power BI according to your specific
needs.
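As an illustration, the sketch below shows how the Power BI REST API could be called from Python to trigger a dataset refresh; the workspace ID, dataset ID and access token are hypothetical placeholders, and acquiring the Azure AD access token (for example with the msal library) is assumed to have been done beforehand.

import requests

# Hypothetical placeholders: substitute your own workspace (group) ID, dataset ID
# and a valid Azure AD access token for the Power BI service.
GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"
ACCESS_TOKEN = "<azure-ad-access-token>"

url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
       f"/datasets/{DATASET_ID}/refreshes")
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# POSTing to the refreshes endpoint asks the Power BI service to start a data refresh.
response = requests.post(url, headers=headers)
print(response.status_code)  # 202 Accepted means the refresh request was queued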


8.2 ADVANTAGES AND LIMITATIONS

Advantages of using Power BI:

Affordable
Power BI is a cloud-based business analytics service that gives you a single dashboard view of your critical business data. The paid version starts at just $9.99 per user per month. At this price, it is hard to find a better BI tool available today.


Microsoft Brand Integration

Since Power BI is created by Microsoft, it is tightly integrated with the Microsoft product suite, and hence integration with Excel, Azure and SQL Server is a breeze. If you are an existing Microsoft business customer of Azure or SQL Server, then integrating Power BI takes only a moment.

Consistent Updates

Power BI was launched in 2013 and since then it has gained a large number of new functionalities. Microsoft is serious about Power BI and has recently introduced a range of new features. Although Power BI may not yet have all the bells and whistles of the mature BI tools in the market, it is catching up quickly.
Easy Integration with Excel

Power BI is built on an interface similar to Excel, so if you are an Excel power user, learning and using Power BI will be simple and straightforward.

Availability of Extensive Learning Resources

Microsoft Power BI has a rich set of bloggers who constantly publish new tutorials and tips for using BI effectively. There are also numerous videos and slide decks offering tutorials on Power BI. In addition to these rich resources, there is a growing community of Power BI users with whom you can collaborate to find answers to Power BI issues.

Good Report Visualization

Microsoft Power BI has a broad range of charting options for visualizing your data. With its different chart types, Power BI offers detailed reporting. Users can also create customized visualizations.

Broad Database Connectivity

Power BI can connect to and extract data from a variety of data sources such as Excel, Access, Adobe Analytics, SQL Server, Azure, GitHub, Google Analytics, Oracle, PostgreSQL, Salesforce, Teradata, and so on. This is just a sample list, with an ever-increasing number of data sources being added every month.

Enhanced Collaboration

Power BI dashboards and reports can be accessed across platforms at any time, live. It works on all platforms such as Windows, Android and iOS, and it is easy to share your customized dashboards with other users instantly.

LIMITATIONS

Not the best choice for handling bulk data.
Power BI will often hang while handling huge sets of data. The best way to overcome this issue is to use a live connection, which makes it substantially faster.

Like most other Microsoft products, Power BI has a vast set of product options, which makes it complex to understand. There are many different components such as Power BI Desktop, Gateway, Power BI Service, and so on, and it is hard to understand which option is most appropriate for a business.
Power BI reports and dashboards cannot accept or pass user, account or other entity parameters. This makes it difficult to create entity-specific dashboards, for example a dashboard for an account, opportunity, case, or campaign. Instead, dashboards are constrained to aggregate views of entity data.
Dashboards and reports can only be shared with users who have the same email domains, or email domains registered with your Office 365 tenant.
While a dataset can include various data types, Power BI reports and dashboards can only source data from a single dataset.
Power BI will not accept files larger than 250 MB.
There is a 1 GB limit for each dataset. As a workaround, you can create multiple datasets. There is also a maximum of 100,000 records in PowerBI.com.
The solution can be deployed on-premises using Power BI Report Server; however, the cost rises drastically.

8.3 FUTURE ENHANCEMENT

Visually analysed data can give users the ability to monitor KPIs in real time. Power BI integration provides businesses with excellent data visualization that can help them build reports using intuitive and shareable dashboards.

Integrating Azure ML Studio and the R script visual (Rviz) with Power BI enables users to perform what-if analysis. This helps businesses build predictive models which enable them to predict revenue, marketing information, future prices and much more. Organizations can analyse historical data and predict outcomes based on the analysis report.
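As a simple illustration of the kind of predictive model such an integration can surface (a sketch only, using hypothetical numbers and plain scikit-learn rather than Azure ML Studio or the project's data), a linear regression over historical monthly revenue could be used to project the next period:

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: month index vs. revenue (in thousands).
months = np.array([[1], [2], [3], [4], [5], [6]])
revenue = np.array([110, 118, 125, 131, 140, 146])

# Fit a simple linear trend to the historical data.
model = LinearRegression().fit(months, revenue)

# Predict revenue for the next month (month 7).
next_month = np.array([[7]])
print(f"Projected revenue for month 7: {model.predict(next_month)[0]:.1f}k")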

You can add multiple bookmarks, rename them, and view and use them as many times as you require. If you want to use your bookmarks as a story or in a presentation, you can use the 'View' option to enter the bookmark view mode, where you can preview all your bookmarks in Power BI Desktop. It is a great way to navigate bookmarks in the Power BI tool.

CHAPTER 9

REFERENCES

9.1 TEXT REFERENCES

[1] Alberto Ferrari and Marco Russo: Introducing Microsoft Power BI.
[2] Roger S. Pressman: Software Engineering – A Practitioner's Approach, 7th Edition, McGraw-Hill Education, 2012.
[3] Anthony S. Williams: Analyzing Data with Power BI.
[4] PowerBI.com
[5] SAP Analytics

9.2 WEB REFERENCES

[1] www.Github.com
[2] www.Thousandsproject.com
[3] www.Slideshare.net
[4] www.demoadda.com
[5] www.Geeksforgeeks.com

SIMILARITY INDEX
