Chapter Two
Data science continues to evolve as one of the most promising and in-demand
career paths for skilled professionals
An Overview of Data Science
Data science is a multidisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from data
What is data?
A representation of facts, concepts, or instructions in a formalized manner, suitable for communication, interpretation, or processing by humans or electronic machines
What is information?
Processed data on which decisions and actions are based; plain collected data as raw facts cannot help much in decision-making
Data Processing Cycle
The data processing cycle consists of three steps: input, processing, and output
Input
For example, when electronic computers are used, the input data can be recorded on any one of several types of input media, such as flash disks, hard disks, and so on
Processing
In this step, the input data is changed to produce data in a more useful
form
For example, a summary of sales for a month can be calculated from the
sales orders data
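As a minimal sketch of this step, the snippet below sums a month's sales orders into monthly totals; the order records and field names are made up for illustration:

# Minimal sketch of the "processing" step: turning raw sales orders
# into a monthly summary. The records and field names are hypothetical.
from collections import defaultdict

orders = [
    {"month": "2024-01", "amount": 120.0},
    {"month": "2024-01", "amount": 80.5},
    {"month": "2024-02", "amount": 200.0},
]

monthly_totals = defaultdict(float)
for order in orders:
    monthly_totals[order["month"]] += order["amount"]

print(dict(monthly_totals))  # {'2024-01': 200.5, '2024-02': 200.0}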
Output
The particular form of the output data depends on the use of the data
Data Types
A data type defines the operations that can be done on the data, the meaning of the data, and the way values of that type can be stored
Classifications of Data
Data can be classified into the following categories
Structured
Unstructured
Semi-structured
Metadata
Structured Data
Data that adheres to a predefined data model and is therefore
straightforward to analyze
Common examples include Excel files and SQL database tables
Unstructured Data
Data that either does not have a predefined data model or is not organized in a predefined manner
It is typically text-heavy, but may contain data such as dates, numbers, and facts as well
Common examples include documents, emails, photos, videos, and audio files
The ability to extract value from unstructured data is one of the main drivers behind the rapid growth of Big Data
Semi-structured Data
A form of structured data that does not conform to the formal structure of data models associated with relational databases or other forms of data tables
Many Big Data solutions and tools have the ability to ‘read’ and process either
JSON or XML
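As an illustration, the sketch below parses a small JSON document in Python; the records and fields are invented for the example:

# A small, made-up JSON document: the records share a rough shape,
# but fields can be missing or nested, which is what makes the data
# semi-structured rather than strictly tabular.
import json

raw = '''
[
  {"name": "Abebe", "age": 30, "skills": ["python", "sql"]},
  {"name": "Sara", "skills": ["spark"]}
]
'''

people = json.loads(raw)
for person in people:
    # "age" is optional, so we read it defensively
    print(person["name"], person.get("age", "unknown"), person["skills"])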
Metadata – Data about Data
It provides additional information about a specific set of data
For example
Metadata of a photo could describe when and where the photo was taken
The metadata then provides fields for dates and locations which, by
themselves, can be considered structured data
For this reason, metadata is frequently used by Big Data solutions for initial analysis.
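A tiny sketch of the idea, with made-up field names: the photo itself is unstructured, but its metadata fields are structured and easy to query:

# Made-up metadata record for a photo: the image bytes are unstructured,
# but the date and location fields are structured and can be filtered on.
photo_metadata = {
    "filename": "IMG_0042.jpg",
    "taken_at": "2024-05-14T09:30:00",
    "latitude": 9.0054,
    "longitude": 38.7636,
}

# Initial analysis can work on these fields without touching the image itself
if photo_metadata["taken_at"].startswith("2024-05"):
    print("Photo taken in May 2024 at",
          (photo_metadata["latitude"], photo_metadata["longitude"]))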
What Is Big Data?
Large datasets together with the category of computing strategies and
technologies that are used to handle them
Data Value Chain
Describes the information flow within a big data system as a series of steps needed to generate value and useful insights from data
The Big Data Value Chain identifies the following key high-level activities: data acquisition, data analysis, data curation, data storage, and data usage
Data Curation
A key trend for the curation of big data is the use of community and crowdsourcing approaches
Data Storage
It is the persistence and management of data in a scalable way that satisfies
the needs of applications that require fast access to the data
Relational Database Management Systems (RDBMS) have been the main, and
almost unique, solution to the storage paradigm for nearly 40 years
The ACID (Atomicity, Consistency, Isolation, and Durability) properties that guarantee relational database transactions lack flexibility with regard to schema changes, and their performance and fault tolerance degrade as data volumes and complexity grow, making them unsuitable for big data scenarios
NoSQL technologies have been designed with the scalability goal in mind
and present a wide range of solutions based on alternative data models
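As a minimal sketch of the contrast, the snippet below models the same kind of records with a fixed relational schema and as schema-flexible documents, the data model used by many document-oriented NoSQL stores; the records are invented for illustration:

# Invented records, purely for illustration.
# Relational thinking: every row must fit one fixed schema.
rows = [
    ("u1", "Abebe", "Addis Ababa"),   # (id, name, city)
    ("u2", "Sara",  None),            # missing values need NULLs
]

# Document thinking (as in many NoSQL document stores): each record
# carries its own structure, so new or missing fields need no schema change.
documents = [
    {"_id": "u1", "name": "Abebe", "city": "Addis Ababa"},
    {"_id": "u2", "name": "Sara", "skills": ["spark", "hadoop"]},
]

for doc in documents:
    print(doc["name"], doc.get("city", "city unknown"))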
Data Usage
Covers the data-driven business activities that need access to data, its
analysis, and the tools needed to integrate the data analysis within the
business activity
Velocity
Data is frequently flowing into the system from multiple sources and is often expected to be processed in real time to gain insights and update the current understanding of the system
This focus on near-instant feedback has driven many big data practitioners away from a batch-oriented approach and closer to a real-time streaming system
Data is constantly being added, massaged, processed, and analyzed in
order to keep up with the influx of new information and to surface valuable
information early when it is most relevant
Variety
Data can be ingested from internal systems like application and server logs, from social media feeds and other external APIs, from physical device sensors, and from other providers
Big data seeks to handle potentially useful data regardless of where it's coming from by consolidating all information into a single system
The formats and types of media can vary significantly as well
Rich media like images, video files, and audio recordings are ingested
alongside text files, structured logs, etc
Veracity
"Veracity" in the context of data refers to the accuracy, reliability, and
trustworthiness of the data.
The variety of sources and the complexity of the processing can lead to challenges in evaluating the quality of the data (biases, noise, and abnormalities in the data)
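As a small illustration of a veracity check, the sketch below flags an abnormal reading in a made-up sensor series using a simple z-score rule; the threshold of 2 is an arbitrary choice for the example, not a fixed standard:

# Made-up sensor readings with one obviously abnormal value.
import statistics

readings = [20.1, 19.8, 20.4, 20.0, 95.0, 19.9, 20.2]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag values more than 2 sample standard deviations from the mean.
outliers = [x for x in readings if abs(x - mean) / stdev > 2]
print(outliers)  # [95.0]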
Variability
Variability in data refers to the extent to which data points in a dataset deviate
or differ from each other.
Sometimes, the systems and processes in place are complex enough that
using the data and extracting actual value can become difficult
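In the statistical sense used in the definition above, variability can be quantified with measures such as range, variance, and standard deviation; a minimal sketch on made-up values:

import statistics

data = [4, 8, 6, 5, 3, 7]

print("range:", max(data) - min(data))         # spread between extremes
print("variance:", statistics.variance(data))  # average squared deviation
print("std dev:", statistics.stdev(data))      # square root of variance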
Clustered Computing
Because of the qualities of big data, individual computers are often
inadequate for handling the data at most stages
To address the high storage and computational needs of big data, computer clusters are a better fit
https://www.geeksforgeeks.org/hadoop-ecosystem/
The Big Data Life Cycle
Ingesting data into the system
The complexity of this operation depends heavily on the format and quality of
the data sources and how far the data is from the desired state prior to
processing
During the ingestion process, some level of analysis, sorting, and labelling
usually takes place
This process is sometimes called ETL, which stands for extract, transform,
and load
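A toy end-to-end sketch of the extract, transform, load idea, with made-up records and an in-memory list standing in for a real destination:

# Toy ETL sketch: extract raw records, transform them into a cleaner
# shape, and load them into a destination (an in-memory list here,
# standing in for a real database or data lake).

def extract():
    # Pretend these raw strings came from a log file or an API.
    return ["2024-01-15,addis ababa,23.5", "2024-01-16,adama,25.1"]

def transform(raw_rows):
    cleaned = []
    for row in raw_rows:
        date, city, temp = row.split(",")
        cleaned.append({"date": date, "city": city.title(), "temp_c": float(temp)})
    return cleaned

def load(records, store):
    store.extend(records)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)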
Persisting the Data in Storage
The ingestion processes typically hand the data off to the components that
manage storage, so that it can be reliably persisted to disk
The volume of incoming data, the requirements for availability, and the
distributed computing layer make more complex storage systems necessary
Batch Processing
Batch processing is most useful when dealing with very large datasets that
require quite a bit of computation
This is the strategy used by Apache Hadoop’s MapReduce
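The classic teaching example of this model is a word count; the sketch below imitates MapReduce's map, shuffle, and reduce phases in plain Python (real MapReduce distributes these phases across a cluster):

# Word count in the MapReduce style, in plain Python.
# Real MapReduce runs map and reduce tasks on many machines;
# here all three phases run in one process for illustration.
from collections import defaultdict

documents = ["big data big insights", "data is the new oil"]

# Map phase: emit (word, 1) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group the pairs by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: sum the counts for each word.
word_counts = {word: sum(counts) for word, counts in grouped.items()}
print(word_counts)  # e.g. {'big': 2, 'data': 2, ...}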
Real-time processing
Demands that data be processed and made ready immediately and
requires the system to react as new data becomes available
Apache Storm, Apache Flink, and Apache Spark provide different ways of
achieving real-time or near real-time processing
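As a minimal sketch of the streaming idea, the snippet below maintains a running average that updates as each new reading arrives; a generator stands in for a live feed such as a sensor or message queue:

# A generator stands in for a live data feed (sensor, message queue, API).
def event_stream():
    for reading in [12.0, 15.5, 11.2, 14.8, 13.3]:
        yield reading

count = 0
total = 0.0
for value in event_stream():
    # Update the aggregate immediately as each event arrives,
    # instead of waiting to process the whole dataset as a batch.
    count += 1
    total += value
    print(f"after {count} events, running average = {total / count:.2f}")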
Visualizing the Results
Visualizing data is one of the most useful ways to spot trends and make
sense of a large number of data points
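A minimal sketch with matplotlib (assuming it is installed), plotting a made-up daily sales series to make the trend visible:

# Minimal trend plot with matplotlib; the sales figures are made up.
import matplotlib.pyplot as plt

days = list(range(1, 8))
sales = [120, 135, 128, 150, 162, 158, 175]

plt.plot(days, sales, marker="o")
plt.xlabel("Day")
plt.ylabel("Sales")
plt.title("Daily sales trend")
plt.show()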