
Chapter Two

An Overview of Data Science


Outline
 An Overview of Data Science
   What are data and information?
   Data Processing Cycle
 Data types and their representation
   Data types from Computer programming perspective
   Data types from Data Analytics perspective
 Data value Chain
   Data Acquisition
   Data Analysis
   Data Curation
   Data Storage
   Data Usage
 Basic concepts of big data
   What Is Big Data?
 Clustered Computing and Hadoop Ecosystem
   Clustered Computing
   Hadoop and its Ecosystem
   Big Data Life Cycle with Hadoop
An Overview of Data Science

 Data science is a multi-disciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured, semi-structured, and unstructured data.

 Data science is much more than simply analyzing data.

 It offers a range of roles and requires a range of skills.


What are data and information?

Data can be defined as a representation of facts, concepts, or instructions in a formalized manner suitable for communication, interpretation, or processing by humans or electronic machines.
 It can be described as unprocessed facts and figures.
Information is processed data on which decisions and actions are based.
 It is data that has been processed into a form that is meaningful to the recipient and is of real or perceived value in the recipient's current or prospective actions or decisions.
Data Processing Cycle

 Data processing is the restructuring or reordering of data by people or machines to increase its usefulness and add value for a particular purpose.
 It consists of three steps: input, processing, and output. These three steps constitute the data processing cycle.

Figure 1: Data Processing Cycle


Cont..
Input − in this step, the input data is prepared in some convenient form for processing.
 For example, when electronic computers are used, the input data can be recorded on any of several types of storage media, such as a hard disk, CD, flash disk, and so on.
Processing − in this step, the input data is changed to produce data in a more useful form.
 For example, interest can be calculated on a bank deposit, or a summary of sales for the month can be calculated from the sales orders.
Cont..

Output − at this stage, the result of the preceding processing step is collected.
 The particular form of the output data depends on the use of the data.
 For example, output data may be payroll for employees.
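
As a toy illustration of the cycle, here is a minimal Python sketch; the deposit amount and interest rate are made-up values for the example:

    # Input: the raw data is prepared in a convenient form for processing.
    deposit = 1000.00  # hypothetical bank deposit (dollars)
    rate = 0.05        # hypothetical annual interest rate

    # Processing: the input data is changed into a more useful form.
    interest = deposit * rate

    # Output: the result of the processing step is collected and presented.
    print(f"Interest earned this year: ${interest:.2f}")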


Data types and their representation

 Data types can be described from diverse perspectives.
 In computer science and computer programming, for instance, a data type is simply an attribute of data that tells the compiler or interpreter how the programmer intends to use the data.

Data types from Computer programming perspective

Common data types include:
 Integers (int): used to store whole numbers
 Booleans (bool): used to represent one of two values, true or false
 Characters (char): used to store a single character
 Floating-point numbers (float): used to store real numbers
 Alphanumeric strings (string): used to store a combination of characters and numbers
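
A minimal Python sketch of these common types (the variable names and values are illustrative):

    # One assignment per common primitive data type.
    count = 42            # integer (int)
    is_valid = True       # boolean (bool)
    grade = "A"           # single character (Python stores it as a 1-character string)
    price = 19.99         # floating-point number (float)
    user_id = "user123"   # alphanumeric string (str)

    print(type(count), type(is_valid), type(grade), type(price), type(user_id))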
Data types from Data Analytics perspective

 From a data analytics point of view, it is important to understand that there are three common data types or structures: structured, semi-structured, and unstructured data.
Cont..

Structured data is data that adheres to a pre-defined data model and is therefore straightforward to analyze.
 Examples: Excel files or SQL databases.
Semi-structured data is a form of structured data that does not conform to the formal structure of data models associated with relational databases or other forms of data tables.
 Examples: JSON and XML are forms of semi-structured data.
Unstructured data is information that either does not have a pre-defined data model or is not organized in a pre-defined manner.
 Examples: audio files, video files, or NoSQL databases.
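
For instance, semi-structured records carry their own field names but need not share an identical schema. A minimal Python sketch, with made-up JSON records:

    import json

    # Self-describing fields, but no rigid shared schema across records.
    raw = '[{"name": "Abebe", "age": 30}, {"name": "Sara", "email": "sara@example.com"}]'
    records = json.loads(raw)

    for r in records:
        # Fields vary per record, so access them defensively.
        print(r.get("name"), r.get("age", "n/a"), r.get("email", "n/a"))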


Metadata – Data about Data

 The last category of data type is metadata.

 From a technical point of view, this is not a separate data structure, but it is one of the most important elements for Big Data analysis and big data solutions.
 Metadata is data about data.
 It provides additional information about a specific set of data.
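
As a concrete illustration (the file name and values are hypothetical), metadata describing a photo might look like this in Python:

    # The photo's pixels are the data; these descriptive fields are metadata.
    photo_metadata = {
        "filename": "vacation.jpg",
        "date_taken": "2023-05-14",
        "camera": "Canon EOS 80D",
        "resolution": "6000x4000",
        "size_bytes": 4215872,
    }

    for key, value in photo_metadata.items():
        print(f"{key}: {value}")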
Data value Chain

 The Data Value Chain describes the information flow within a big data system as a series of steps needed to generate value and useful insights from data: data acquisition, data analysis, data curation, data storage, and data usage.
Data Acquisition

 It is the process of gathering, filtering, and cleaning data before it is put in a data warehouse or any other storage solution on which data analysis can be carried out.
 It is one of the major big data challenges in terms of infrastructure requirements.

Data Analysis

 It is concerned with making the raw data acquired amenable to use in decision-making as well as domain-specific usage.
Cont..

 Data analysis involves exploring, transforming, and modelling data with the goal of highlighting relevant data, synthesizing and extracting useful hidden information with high potential from a business point of view.
 Related areas include data mining, business intelligence, and machine learning.
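
A minimal pandas sketch of the explore-transform idea (the monthly sales figures are invented for the example):

    import pandas as pd

    # Hypothetical monthly sales records.
    df = pd.DataFrame({
        "month": ["Jan", "Feb", "Mar", "Apr"],
        "sales": [1200, 1350, 980, 1500],
    })

    # Explore: basic descriptive statistics.
    print(df["sales"].describe())

    # Transform: flag months with above-average sales, highlighting relevant data.
    df["above_average"] = df["sales"] > df["sales"].mean()
    print(df[df["above_average"]])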

Data Curation is the active management of data over its life cycle to ensure it meets the necessary data quality requirements for its effective usage.
 Its processes can be categorized into different activities such as content creation, selection, classification, transformation, validation, and preservation.
Cont..

Data Storage is the persistence and management of data in a scalable way that satisfies the needs of applications that require fast access to the data.
 Relational Database Management Systems (RDBMS) have been the main, and almost only, solution to the storage paradigm for nearly 40 years.
Cont..
Data Usage covers the data-driven business activities that need access to data, its analysis, and the tools needed to integrate the data analysis within the business activity.
Basic concepts of big data

 Big data is a blanket term for the non-traditional strategies and technologies needed to gather, organize, process, and gain insights from large datasets.
 While the problem of working with data that exceeds the computing power or storage of a single computer is not new, the pervasiveness, scale, and value of this type of computing have greatly expanded in recent years.
What Is Big Data?

 Big data is the term for a collection of data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications.
 In this context, a “large dataset” means a dataset too large to reasonably process or store with traditional tooling or on a single computer.
Cont..

Big data is characterized by the 4Vs and more:
 Volume: large amounts of data; zettabytes/massive datasets
 Velocity: data is live streaming or in motion
 Variety: data comes in many different forms from diverse sources
 Veracity: can we trust the data? How accurate is it? etc.
Clustered Computing and Hadoop Ecosystem

Clustered Computing

 Because of the qualities of big data, individual computers are often inadequate for handling the data at most stages.
 To better address the high storage and computational needs of big data, computer clusters are a better fit.

Benefits:
 Resource Pooling: combining the available storage space to hold data is a clear benefit, but CPU and memory pooling are also extremely important.
 High Availability: clusters can provide varying levels of fault tolerance and availability guarantees to prevent hardware or software failures from affecting access to data and processing.
Cont..

 Easy Scalability: clusters make it easy to scale horizontally by adding additional machines to the group.

Hadoop and its Ecosystem

 Hadoop is an open-source framework intended to make interaction with big data easier. It allows for the distributed processing of large datasets across clusters of computers using simple programming models.
 The four key characteristics of Hadoop are:
 Economical: its systems are highly economical, as ordinary computers can be used for data processing.
 Reliable: it is reliable, as it stores copies of the data on different machines and is resistant to hardware failure.
 Scalable: it is easily scalable, both horizontally and vertically; a few extra nodes help in scaling up the framework.
 Flexible: it can store and process as much structured and unstructured data as needed.
Cont..

 Hadoop has an ecosystem that has evolved from its four core components: data management, access, processing, and storage.
Big Data Life Cycle with Hadoop

 Ingesting data into the system: the first stage of big data processing is Ingest. The data is ingested or transferred to Hadoop from various sources such as relational databases, systems, or local files.
 Processing the data in storage: the second stage is Processing. In this stage, the data is stored and processed.
 Computing and analyzing data: the third stage is Analyze. Here, the data is analyzed by processing frameworks such as Pig, Hive, and Impala.
 Visualizing the results: the fourth stage is Access, which is performed by tools such as Hue and Cloudera Search. In this stage, the analyzed data can be accessed by users.
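
As a hedged sketch of these four stages (assuming a PySpark environment and a hypothetical sales.csv file with region and amount columns; in practice tools such as Pig, Hive, or Impala could play the analysis role):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("BigDataLifeCycle").getOrCreate()

    # Ingest: transfer data into the cluster from a local file (hypothetical path).
    df = spark.read.csv("sales.csv", header=True, inferSchema=True)

    # Process and store: persist the data in distributed storage.
    df.write.mode("overwrite").parquet("sales_parquet")

    # Analyze: run an aggregation over the stored data.
    stored = spark.read.parquet("sales_parquet")
    summary = stored.groupBy("region").sum("amount")

    # Access: expose the results for users and visualization tools.
    summary.show()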

Thank You!!
Q & A
