
BUSINESS ANALYTICS

- Business analytics is a powerful tool in today’s marketplace. Across industries, organizations are generating vast amounts of data, which has heightened the need for professionals who know how to interpret and analyze that information.
- Business analytics is the process of using quantitative methods to derive meaning from data in order to make informed business decisions.

THREE PRIMARY METHODS OF BUSINESS ANALYTICS

1. Descriptive – the interpretation of historical data to identify trends and patterns
2. Predictive – the use of statistics to forecast future outcomes
3. Prescriptive – the application of testing and other techniques to determine which outcome will yield the best result in a given scenario

Now, deciding which method to employ will depend on the business situation at
hand.

BENEFITS OF BUSINESS ANALYTICS

1. More Informed Decision-Making - when it comes to making critical strategic decisions, business analytics can be a useful tool.
2. Greater Revenue - companies that invest in data and analytics projects can reap substantial financial benefits.
3. Improved Operational Efficiency - beyond financial gains, analytics can be used to fine-tune company processes.

WHY STUDY BUSINESS ANALYTICS?

Business requires strong research skills, and business analytics was cited as one of the skills companies need most in 2019. By learning how to recognize trends, test hypotheses, and draw conclusions from population samples, you can build an analytical framework that can be applied in your everyday decision-making and help your organization thrive.

LESSON 1: BUSINESS ANALYTICS FUNDAMENTALS


Information Processing Concepts

INFORMATION PROCESSING refers to the manipulation of digitized information by computers and other digital electronic equipment, known collectively as information technology (IT). Business applications, operating systems, computers, networks and mainframes are examples of INFORMATION PROCESSING SYSTEMS.

More broadly, the term information processing covers any situation in which data needs to be transmitted or processed in some way.

TYPES OF INFORMATION PROCESSING

1. Transactional
 Focuses on processing individual data items (transactions).
2. Analytical
 Focuses on reporting, analysis, transformation and decision support.

DIKW PYRAMID

The DIKW MODEL is used for data value extraction and information management. The DIKW pyramid is a common way of explaining how we move from data to information, knowledge and wisdom, and how each level shapes behavior and decisions. It has its origins in knowledge management.
1. Data – raw facts, values or elements. Data is conceived of as symbols or signs representing stimuli or signals.
2. Information – the result of organizing data so that it provides context and meaning.
3. Knowledge – information that provides insights, making it useful and actionable. Knowledge is valuable and actionable because it offers insights. It is a fluid blend of framed experience, principles, contextual information, expert perspective and grounded intuition that offers a structure for analyzing and integrating new experiences and information. Knowledge begins and ends in the minds of those who know. It is often embedded in organizational routines, systems, activities and norms in addition to documents and repositories.
4. Wisdom – the soundness of an action or decision with regard to the application of experience, information, knowledge and good judgement. Wisdom adds value, which requires the mental function we call judgement. The ethical and aesthetic values that this implies are inherent to the actor and are unique and personal. It is the ability to increase effectiveness.

DIKW PYRAMID

 The DIKW hierarchy depicts the relationships between data, information, knowledge and wisdom.
 The DIKW (Data, Information, Knowledge, Wisdom) model shows how the human mind can move raw data up to higher planes through progressive organization.
 Relationships between data elements enable bits and bytes to gain meaning and thus become informative to us.
 As we move up the hierarchy, looking for patterns and deploying principles, we impose structure and organization, often by classifying or categorizing the information and knowledge.

TACIT AND EXPLICIT KNOWLEDGE

Like the DIKW model, tacit and explicit knowledge are knowledge management concepts used for data value extraction and information management.
 Tacit Knowledge (knowing-how): knowledge embedded in the human mind through experience and work. It is the know-how and learning embedded within the minds of people: personal wisdom and experience, context-specific, and more difficult to extract and codify. Tacit knowledge includes insights and intuitions.
 Explicit Knowledge (knowing-that): knowledge codified and digitized in books, documents, reports, memos, etc. It is documented information that can facilitate action: knowledge that is easily identified, articulated, shared and employed.

LESSON 2: DATA COLLECTION

WHAT IS DATA?

 Factual information (such as measurements or statistics) used as a basis for reasoning, discussion, or calculation.
 Information processed or stored by a computer.
 This information may be in the form of text documents, images, audio clips, software programs, or other types of data.
 Computer data may be processed by the computer’s CPU and is stored in files and folders on the computer’s hard disk.

TYPES OF DATA

1. Quantitative Data
 Any data that is in numerical form, such as statistics and percentages.
2. Qualitative Data
 Descriptive data such as color, smell, appearance and quality.
 Secondary data is typically quantitative in nature and has already been collected by another party for a different purpose.
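
To make the distinction concrete, here is a minimal sketch (not from the notes) using Python and pandas, with made-up survey columns: a quantitative column supports numerical summaries, while a qualitative column is summarized by counting category frequencies.

```python
import pandas as pd

# Hypothetical survey records: "satisfaction_score" is quantitative,
# "favorite_color" is qualitative (descriptive categories).
df = pd.DataFrame({
    "satisfaction_score": [8, 9, 6, 7, 9, 5],
    "favorite_color": ["green", "yellow", "red", "green", "green", "red"],
})

# Quantitative data supports numerical summaries (mean, percentiles, etc.).
print(df["satisfaction_score"].describe())

# Qualitative data is summarized by counting category frequencies instead.
print(df["favorite_color"].value_counts())
```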

DATA COLLECTION

 Is the systematic approach to gathering and measuring information from a variety of sources to get a complete and accurate picture of an area of interest.
 Data collection enables a person or organization to answer relevant questions, evaluate outcomes and make predictions about future probabilities and trends.
 A method of collecting and analyzing data from a variety of sources in order to obtain a full and accurate picture of a subject.

IMPORTANCE OF COLLECTING ACCURATE DATA

Accurate data collection is essential to maintaining the integrity of research, making informed business decisions and ensuring quality assurance. Without accurate data, businesses are doomed to fail.

DATA COLLECTION METHODS

MOST POPULAR METHODS: surveys, interviews, focus groups

Depending on the project, companies can now collect data from mobile devices, website traffic, server activity and other related sources with the aid of web and analytics software.

DATA VALIDATION

 Data validation primarily helps in ensuring that the data sent to connected
applications is complete, accurate, secure and consistent.
 This is achieved through data validation’s checks and rules that routinely check
for the validity of data. These rules are generally defined in a data dictionary or
are implemented through data validation software.

To be considered valid, data should be:

 Accurate
 Coherent and comparable
 Clear and accessible
 Timely
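
As an illustration of how such checks and rules might look in code, here is a minimal Python sketch; the field names, the 30-day timeliness rule and the sample record are assumptions, not part of the notes.

```python
from datetime import date, timedelta

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors for one hypothetical sales record."""
    errors = []

    # Completeness check: required fields must be present and non-empty.
    for field in ("customer_id", "amount", "report_date"):
        if not record.get(field):
            errors.append(f"missing field: {field}")

    # Accuracy / range check: amounts must be positive numbers.
    amount = record.get("amount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount <= 0):
        errors.append("amount must be a positive number")

    # Timeliness check: records older than 30 days are flagged as stale.
    report_date = record.get("report_date")
    if report_date and date.today() - report_date > timedelta(days=30):
        errors.append("record is older than 30 days")

    return errors

print(validate_record({"customer_id": "C-101", "amount": -5, "report_date": date.today()}))
# ['amount must be a positive number']
```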

DATA PRESENTATION

 Data presentation is defined as the process of using various graphical formats to visually represent the relationship between two or more data sets so that an informed decision can be made based on them.
 Presenting data effectively and efficiently helps your audience quickly understand every point you want to showcase.

THREE METHODS OF DATA PRESENTATION

1. Textual - basically putting the results into a logical order. The disadvantage of
this approach is that in order to get a clear image, one must read the entire text.
2. Tabular - this is where data is presented using tables and graphs. It is a format
for presenting data in rows and columns.
3. Diagrammatic - this method of data presentation and interpretation conveys a
great deal in a limited period of time. The types of diagrammatic presentation are
geometric diagram, bar diagram, pie chart, frequency diagram and histogram.
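
The contrast between tabular and diagrammatic presentation can be sketched with pandas and matplotlib; the quarterly figures below are made up purely for illustration.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Made-up quarterly sales figures used only for illustration.
sales = pd.DataFrame(
    {"quarter": ["Q1", "Q2", "Q3", "Q4"], "revenue": [120, 150, 130, 180]}
)

# Tabular presentation: the same data laid out in rows and columns.
print(sales.to_string(index=False))

# Diagrammatic presentation: a bar diagram of the same data.
sales.plot(kind="bar", x="quarter", y="revenue", legend=False, title="Revenue by quarter")
plt.ylabel("Revenue (thousands)")
plt.tight_layout()
plt.show()
```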

PRIMARY SCALES

1. Nominal Scale
 Used to mark variables that have no numerical significance.
 Used to describe categories in which there is no specific order.
 For example, green, yellow, and red are three colors that in general are not
bound by an inherent order.
2. Ordinal Scale
 Ordinal scale is used to describe categories in which there is an inherent order.
 The order of the values is important and meaningful, but the differences between them are not precisely known.
3. Interval Scale
 Interval scale is used to convey relative magnitude information such as
temperature. The term “Interval” comes about because rulers (and rating
scales) have intervals of uniform lengths.
 Numeric scales in which we know both the order and the exact differences
between the values. The classic example of an interval scale is Celsius
temperature because the difference between each value is the same.
4. Ratio Scale
 Conveys information on an absolute scale.
 Informs us about the order and the exact differences between units, and has an absolute zero. Ratio scales are the ultimate data measurement scale. They can be used for a wide variety of descriptive and inferential statistics.
 When it comes to statistical analysis, ratio scales open up a world of possibilities. These variables can be added, subtracted, multiplied and divided in meaningful ways; in short, ratios can be formed.
 The mode, median and mean can be used to determine central tendency. Ratio scales can also be used to quantify measures of dispersion such as standard deviation and coefficient of variation.
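
A rough way to see how the four scales differ in practice is the pandas sketch below; the columns and values are hypothetical, and the mapping of scales onto pandas dtypes is only approximate.

```python
import pandas as pd

df = pd.DataFrame({
    # Nominal: categories with no inherent order.
    "color": pd.Categorical(["green", "yellow", "red", "green"]),
    # Ordinal: categories with a meaningful order but unknown spacing.
    "satisfaction": pd.Categorical(
        ["low", "high", "medium", "high"],
        categories=["low", "medium", "high"], ordered=True),
    # Interval: equal spacing between values but no true zero (Celsius).
    "temp_celsius": [21.5, 23.0, 19.8, 22.1],
    # Ratio: equal spacing plus an absolute zero (sales amounts).
    "sales": [1200, 0, 450, 980],
})

# Nominal/ordinal data: frequency counts; ordered categories also support min/max.
print(df["color"].value_counts())
print(df["satisfaction"].max())      # 'high'

# Interval/ratio data: full arithmetic summaries are meaningful.
print(df["temp_celsius"].mean())
print(df["sales"].std())             # standard deviation, a measure of dispersion
```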

LESSON 3: RELATIONAL DATABASE MANAGEMENT SYSTEM (RDBMS)

WHAT IS DATABASE?

 A database is an organized collection of structured information, or data, typically stored electronically in a computer system.
 A database is usually controlled by a database management system (DBMS).
 Together, the data and the DBMS, along with the applications that are associated with them, are referred to as a database system, often shortened to just database.

Since their start in the early 1960s, databases have evolved considerably. The first systems used to store and manage data were navigational databases, such as the hierarchical database, which used a tree-like architecture and only permitted one-to-many relationships, and the network database, which used a more flexible model that permitted many-to-many relationships. These early systems were inflexible despite their simplicity. Relational databases gained popularity in the 1980s, followed by object-oriented databases in the 1990s.

More recently, NoSQL databases were created in response to the rapid growth of the internet and the demand for faster processing of unstructured data. When it comes to how data is acquired, stored, managed and used, cloud databases and self-driving databases are forging new ground today.
TYPES OF DATABASES

Relational Database - data is arranged as a series of tables with columns and rows. Relational database technology provides the most efficient and flexible way to access structured data.

Object-Oriented Database - objects are used to represent data, just as they are in object-oriented programming with classes and objects.

Distributed Database - made up of two or more files that are stored at various locations. The database could be spread across numerous machines in the same physical location or across numerous networks.

Data Warehouse - specifically designed for fast query and analysis. It serves as a central repository for data.

NoSQL Database - stores and manipulates unstructured and semi-structured data, in contrast to a relational database, which defines how all data inserted into the database must be composed.

Graph Database - stores data in terms of entities and their relationships.

WHAT IS DATABASE MANAGEMENT SYSTEM?

 A database management system (DBMS) is the software that serves as an interface between a database and its end users or application programs.
 It allows data to be defined, stored, retrieved, updated and managed, and it handles concerns such as security, integrity, concurrent access and recovery.

WHAT IS RELATIONAL DATABASE MANAGEMENT SYSTEM (RDBMS)?

 A relational database management system (RDBMS, or just RDB) is a common type of database that stores data in tables, so it can be used in relation to other stored data sets.
 Most databases used by businesses these days are relational databases, as opposed to flat-file or hierarchical databases.
 The majority of current IT systems and applications are built on a relational DBMS.

WHAT IS STRUCTURED QUERY LANGUAGE (SQL)?

 SQL (Structured Query Language) is a standardized programming language that’s used to manage relational databases and perform various operations on the data in them.
 Initially created in the 1970s, SQL is regularly used not only by database administrators, but also by developers writing data integration scripts and data analysts looking to set up and run analytical queries.

WHAT IS MYSQL DATABASE?

 MySQL is an open-source relational database management system based on SQL. It was designed and optimized for web applications and can run on any platform.
 As new and different requirements emerged with the internet, MySQL became the platform of choice for web developers and web-based applications.

FOUR CATEGORIES OF SQL COMMANDS
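
The notes leave this list blank; by convention the four categories are DDL (Data Definition Language), DML (Data Manipulation Language), DCL (Data Control Language) and TCL (Transaction Control Language). The sketch below illustrates them with Python’s built-in sqlite3 module; the table and values are made up, and since SQLite has no user accounts, the DCL commands are shown only as a comment.

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()

# DDL (Data Definition Language): define or change structure, e.g. CREATE, ALTER, DROP.
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# DML (Data Manipulation Language): work with the rows themselves,
# e.g. INSERT, UPDATE, DELETE, SELECT.
cur.execute("INSERT INTO customers (name) VALUES (?)", ("Ana",))
print(cur.execute("SELECT id, name FROM customers").fetchall())

# TCL (Transaction Control Language): group changes into transactions,
# e.g. COMMIT, ROLLBACK.
conn.commit()

# DCL (Data Control Language): manage permissions, e.g. GRANT, REVOKE.
# SQLite has no user accounts, so these are shown for reference only:
#   GRANT SELECT ON customers TO analyst;

conn.close()
```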


ASSESSMENT EXAM 2

Textual - is basically putting the results into a logical order. The disadvantage of this approach is that in order to get a clear image, one must read the entire text.

Data Validation - primarily helps in ensuring that the data sent to connected
applications is complete, accurate, secure and consistent.

Data Presentation - is described as the process of visualizing the relationship between two or more data sets using various graphical formats in order to make an informed decision based on them.

Nominal Scales - are used to mark variables that have no numerical significance.

Ratio Scale - informs us about the order and the exact differences between units, and has an absolute zero; ratio scales are the ultimate data measurement scale.

Quantitative Data - any data in numerical form, such as numbers and percentages.

Diagrammatic - this method of data presentation and interpretation conveys a great deal in a limited period of time.

Qualitative Data - secondary data that has already been obtained for a separate reason by another group.

Tabular - is where data is presented using tables and graphs. It is a format for
presenting data in rows and columns.
Data Collection - is a method of collecting and analyzing data from a variety of sources in order to obtain a full and accurate picture of a subject.

Data - facts, values, or components are all examples of raw data. Data is thought of as
a collection of symbols or signs that reflect stimuli or signals.

Ordinal Scale - is where order of the values is important and meaningful in ordinal
scales, but the variations between them are not well understood.

Interval Scale - are numeric scales in which we know both the order and the exact
differences between the values.

LESSON 4: BUSINESS ANALYTICS AND BIG DATA

WHAT IS ANALYTICS?

 Analytics is the scientific process of discovering and communicating the meaningful patterns that can be found in data.
 It is concerned with turning raw data into insight for making better decisions.
 Analytics relies on the application of statistics, computer programming, and operations research in order to quantify and gain insight into the meaning of data. It is especially useful in areas that record a lot of data or information.

WHAT IS BUSINESS ANALYTICS?

 Is the iterative, methodical exploration of an organization’s data, with an emphasis on statistical analysis.
 Business analytics is used by companies that practice data-driven decision-making.
 The business goal of the analysis is determined, an analysis methodology is selected, and business data is acquired to support the analysis.
 Data acquisition often involves extraction from multiple business systems and data sources, then cleansing and integrating the data into a single repository such as a data warehouse or data mart.

TYPES OF BUSINESS ANALYTICS

1. Descriptive Analytics
 Tells us what happened, which describes how things have changed over
time.
2. Diagnostic Analytics
 Answers the question of why it happened, focusing on explaining an event’s occurrence. It necessitates hypothesizing and utilizes a large and diverse data set.
3. Predictive Analytics
 Answers the question what could happen in the future. It focuses on
incidents that are likely to happen in the near future.
4. Prescriptive Analytics
 Answers the question how should we respond to those potential future
events. This indicates that there is a strategy in place.

FUNCTIONS OF TYPES OF BUSINESS ANALYTICS

DESCRIPTIVE ANALYTICS

 Descriptive analytics, or reporting analytics, is a preliminary stage of data processing that creates a summary of historical data to yield useful information and possibly prepare the data for further analysis.
 Descriptive analysis is sometimes said to provide information about what happened.
 To explore historical data, descriptive analytics deploys two main methods: data aggregation and data mining, which together are also known as data discovery.
 The process of gathering and arranging data to create manageable data sets is known as data aggregation. These data sets are then used in the data mining process, which identifies patterns, trends and context before the results are presented in a comprehensible manner.
 Remember that descriptive analytics does not seek to go beyond the surface data and analysis. Further research is beyond the scope of descriptive analytics, and descriptive analytics observations are not used to make inferences or predictions.
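
A minimal pandas sketch of the two methods named above, data aggregation followed by simple pattern spotting; the order data is made up for illustration.

```python
import pandas as pd

# Made-up order history used only to illustrate descriptive (reporting) analytics.
orders = pd.DataFrame({
    "month":   ["Jan", "Jan", "Feb", "Feb", "Mar", "Mar"],
    "region":  ["North", "South", "North", "South", "North", "South"],
    "revenue": [100, 80, 120, 90, 150, 85],
})

# Data aggregation: gather and arrange raw records into a manageable summary.
summary = orders.groupby(["month", "region"], sort=False)["revenue"].sum().unstack()
print(summary)

# Simple pattern spotting on the summary: which region is trending upward?
print(summary.diff())   # month-over-month change per region
```
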
INFORMATION DERIVED FROM DESCRIPTIVE ANALYTICS

Reports

 Inventory
 Workflow
 Sales
 Revenue

Social Analytics

 Average number of replies per post
 Number of page views
 Average response time
 Attitudes

PREDICTIVE ANALYTICS

 Predictive analytics is used to identify future probabilities and trends, and is said to provide information about what might happen in the future.
 Predictive analytics can be used for a variety of use cases.
 Predictive analytics is a form of advanced analytics that uses both new and historical data to forecast behavior and trends.
 It involves constructing predictive models that assign a numerical value or ranking to the likelihood of a particular event occurring, using statistical analysis techniques, analytical queries and automated machine learning algorithms.
 Probabilities are used in predictive analytics. Predictive analytics aims to forecast potential future outcomes and the probability of those events using a range of techniques including data mining, statistical modeling and machine learning algorithms. For example, it takes existing data and tries to fill in the gaps with the best possible guesses in order to make predictions.
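
As a rough illustration (not part of the notes), the scikit-learn sketch below fits a simple model on made-up historical figures and forecasts an outcome for a new input; a real predictive workflow would involve far more data preparation and validation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up history: monthly ad spend (input) vs. units sold (outcome).
ad_spend   = np.array([[10], [12], [15], [18], [20], [24]])   # thousands
units_sold = np.array([110, 126, 148, 172, 195, 230])

# Build a simple predictive model from the historical data.
model = LinearRegression().fit(ad_spend, units_sold)

# Forecast what could happen if next month's spend is 30 (thousand).
forecast = model.predict(np.array([[30]]))
print(f"Predicted units sold at 30k spend: {forecast[0]:.0f}")
```
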
INFORMATION DERIVED FROM PREDICTIVE ANALYTICS

Predictive analytics allows executives and managers to take a more strategic, data-driven approach to business planning and decision-making because it can tell them what is likely to happen in the future. Predictive analytics may be used for a variety of purposes, including predicting customer behavior and buying habits as well as detecting sales trends. It can also help in supply chain processes and inventory demand forecasting.

 Tells a business what could happen in the future.
 Forecasts customer behavior and purchasing patterns, and identifies sales trends.
 Forecasts supply chain, operations and inventory demands.

PRESCRIPTIVE ANALYTICS

 Prescriptive analytics is applied to try to identify the best outcome of events, given the parameters, and to suggest decision options that best take advantage of a future opportunity or mitigate a future risk.
 This method is the third, final and most advanced stage in the business analytics process, and it calls the business to action, helping executives, managers and operational employees make the best possible decisions based on the data available to them.
 Prescriptive analytics expands on what has been learned through descriptive and predictive analysis by proposing the best possible courses of action for a business.
 A variety of techniques and methods, such as guidelines, statistics and machine learning algorithms, can be applied to available data from both internal and external sources in order to make predictions and recommendations.
 Machine learning’s capabilities far exceed what a person could do while attempting to achieve the same performance.
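
A minimal sketch of the prescriptive step, assuming a hypothetical demand forecast: evaluate each candidate action against the predicted outcome and recommend the one with the best expected result.

```python
# Illustrative prescriptive step: given a (hypothetical) demand forecast per price,
# recommend the price that maximizes expected profit.
UNIT_COST = 40

def predicted_demand(price: float) -> float:
    """Stand-in for a predictive model's output (made-up linear demand curve)."""
    return max(0.0, 1000 - 8 * price)

candidate_prices = range(45, 101, 5)
expected_profit = {
    price: (price - UNIT_COST) * predicted_demand(price) for price in candidate_prices
}

# Prescriptive recommendation: the action with the best expected outcome.
best_price = max(expected_profit, key=expected_profit.get)
print(f"Recommended price: {best_price} (expected profit {expected_profit[best_price]:.0f})")
```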

INFORMATION DERIVED FROM PRESCRIPTIVE ANALYTICS

 Makes recommendations regarding which decisions will best take advantage of future opportunities or mitigate future risks.
 Makes it possible to consider the possible outcomes of each decision before any decision is made.
 Can have a real impact on business strategy and decision-making, improving things such as production, customer experience and business growth.

BIG DATA

 Big data is a combination of structured, semi-structured and unstructured data collected by organizations that can be mined for information and used in machine learning projects, predictive modeling and other advanced analytics applications.
 Big data is a massive collection of data that grows exponentially over time. It is a data set so large and complex that traditional data management tools cannot store or process it efficiently.
 In short, big data is data that is extremely large in size.

EXAMPLES OF BIG DATA

1. New York Stock Exchange
 Generates about one terabyte of new trade data per day.
2. Social Media
 Statistics show that 500+ terabytes of new data are ingested into the databases of social media sites like Facebook every day. This data is mainly generated from photo and video uploads, message exchanges and comments.
3. Single Jet Engine
 A single jet engine can generate 10+ terabytes of data in 30 minutes of flight time. With many thousands of flights per day, the generated data reaches petabytes.
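
The jet-engine figure can be checked with quick back-of-the-envelope arithmetic; the flight length and fleet size below are made-up assumptions used only to show how the total climbs into the petabyte range.

```python
# Rough arithmetic behind the jet-engine example (all assumptions are illustrative).
TB_PER_30_MIN = 10          # stated rate: 10+ TB per 30 minutes of flight time
flight_hours = 2            # assume an average two-hour flight
flights_per_day = 25_000    # assume a fleet flying 25,000 flights per day

tb_per_flight = TB_PER_30_MIN * (flight_hours * 60 / 30)
tb_per_day = tb_per_flight * flights_per_day
print(f"{tb_per_day:,.0f} TB/day  ≈  {tb_per_day / 1_000:,.0f} PB/day")
# 1,000,000 TB/day ≈ 1,000 PB/day — comfortably in the petabyte range
```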

TYPES OF BIG DATA

1. Structured Data
 Defined as any data that can be stored, accessed, and processed in a
fixed format.
2. Unstructured Data
 Defined as any data with an unknown form or structure.
3. Semi-Structured Data
 Contains elements of both structured and unstructured data. Semi-structured data appears structured, but it is not defined in the same way that a table definition in a relational database management system is.

CHARACTERISTICS OF BIG DATA

 Volume
 The term big data itself refers to a massive amount of data. The size of the data is very important in determining its value; whether a particular data set can be considered big data or not is largely determined by its volume.
 As a result, volume is one of the characteristics that must be considered when dealing with big data.
 Variety
 The next characteristic of big data is variety, which refers to the wide range of data sources and data types, both structured and unstructured.
 Previously, spreadsheets and databases were the only forms of data considered by most applications.
 Velocity
 Refers to the rate at which data is generated. The true potential of the data is determined by how quickly it is generated and processed to meet demand.
 Big data velocity is concerned with the rate at which data is ingested from sources such as business processes, application logs, networks, social media sites, sensors, mobile devices and so on.
 The data flow is massive and continuous.
 Variability
 Refers to the inconsistency that data can exhibit at times, impeding the process of effectively handling and managing the data.
