
IT1110 DATA SCIENCE AND BIG DATA ANALYTICS UNIT-I

UNIT I
INTRODUCTION TO DATA SCIENCE
Syllabus
Introduction of Data Science – Basic Data Analytics using R – R Graphical User Interfaces – Data
Import and Export – Attribute and Data Types – Descriptive Statistics – Exploratory Data
Analysis – Visualization Before Analysis – Dirty Data – Visualizing a Single Variable – Examining
Multiple Variables – Data Exploration Versus Presentation.

Introduction to Data Science


Big Data Overview
Data is created constantly, and at an ever-increasing rate. Mobile phones, social media, and
imaging technologies used in medical diagnosis all create new data that must be stored
somewhere for some purpose. Devices and sensors automatically generate
diagnostic information that needs to be stored and processed in real time. Merely keeping up
with this huge influx of data is difficult, but substantially more challenging is analyzing vast
amounts of it, especially when it does not conform to traditional notions of data structure, to
identify meaningful patterns and extract useful information. These challenges of the data deluge
present the opportunity to transform business, government, science, and everyday life.

Several industries have led the way in developing their ability to gather and exploit data:
● Credit card companies monitor every purchase their customers make and can identify
fraudulent purchases with a high degree of accuracy using rules derived by processing billions of
transactions.
● Mobile phone companies analyze subscribers’ calling patterns to determine, for example,
whether a caller’s frequent contacts are on a rival network. If that rival network is offering an
attractive promotion that might cause the subscriber to defect, the mobile phone company can
proactively offer the subscriber an incentive to remain in her contract.
● For companies such as LinkedIn and Facebook, data itself is their primary product. The
valuations of these companies are heavily derived from the data they gather and host, which
contains more and more intrinsic value as the data grows.

Three attributes stand out as defining Big Data characteristics:


● Huge volume of data (Volume): Rather than thousands or millions of rows, Big Data can be
billions of rows and millions of columns.
● Complexity of data types and structures (Variety): Big Data reflects the variety of new data
sources, formats, and structures, including digital traces being left on the web and other digital
repositories for subsequent analysis.
● Speed of new data creation and growth (Velocity): Big Data can describe high velocity data,
with rapid data ingestion and near real time analysis.
Although the volume of Big Data tends to attract the most attention, generally the variety and
velocity of the data provide a more apt definition of Big Data. (Big Data is sometimes described
as having 3 Vs: volume, variety, and velocity.) Due to its size or structure, Big Data cannot be
efficiently analyzed using only traditional databases or methods. Big Data problems require new
tools and technologies to store, manage, and realize the business benefit. These new tools and
technologies enable creation, manipulation, and management of large datasets and the storage
environments that house them.

Definition
This definition of Big Data comes from the McKinsey Global Institute report of 2011:
Big Data is data whose scale, distribution, diversity, and/or timeliness require the use of new
technical architectures and analytics to enable insights that unlock new sources of business
value.
McKinsey’s definition of Big Data implies that organizations will need new data architectures and
analytic sandboxes, new tools, new analytical methods, and an integration of multiple skills into
the new role of the data scientist. Figure 1-1 highlights several sources of the Big Data deluge.


Different Data Structures


 Structured data: Data containing a defined data type, format, and structure (that is,
transaction data, online analytical processing [OLAP] data cubes, traditional RDBMS, CSV
files, and even simple spreadsheets).

Figure: Example of structured data
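
Because structured data has a defined format, it loads directly into a tabular form. The
following is a minimal R sketch; the file name sales.csv is a hypothetical example:

    # Read a structured CSV file into a data frame; every row shares the
    # same named, typed columns, the hallmark of structured data.
    sales <- read.csv("sales.csv", stringsAsFactors = FALSE)
    str(sales)     # column names and data types
    head(sales)    # first six rows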


 Semi-structured data: Textual data files with a discernible pattern that enables parsing
(such as Extensible Markup Language [XML] data files that are self-describing and defined
by an XML schema).

Figure: Example of semi-structured data
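
Because XML tags describe the values they enclose, such files can be parsed
programmatically. A brief sketch using the CRAN package XML; the file name books.xml is a
hypothetical example:

    # Parse a self-describing XML file and extract values by tag path.
    library(XML)
    doc <- xmlParse("books.xml")
    titles <- xpathSApply(doc, "//book/title", xmlValue)
    print(titles)   # the tags tell us where each value lives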


 Quasi-structured data: Textual data with erratic data formats that can be formatted with
effort, tools, and time (for instance, web clickstream data that may contain inconsistencies
in data values and formats).

Figure: Example of quasi-structured data
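
Clickstream entries have no formal schema, but with effort and tools they can be coerced
into fields. A small sketch in base R; the log line below is invented for illustration:

    # One raw clickstream entry: no fixed schema, but a recoverable pattern.
    line <- '10.0.0.1 - - [12/Mar/2015:10:15:32] "GET /products/item?id=42 HTTP/1.1" 200'

    # Extract the IP address, timestamp, and requested URL with a regular expression.
    m <- regmatches(line, regexec('^(\\S+) .*\\[([^]]+)\\] "GET (\\S+)', line))[[1]]
    ip        <- m[2]
    timestamp <- m[3]
    url       <- m[4]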


 Unstructured data: Data that has no inherent structure, which may include text
documents, PDFs, images, and video.

Figure: Example of unstructured data
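
Unstructured sources can still be read into R, but only as raw content for further
processing. A minimal sketch; the file name report.txt is a hypothetical example:

    # Read a free-form text document: no fields or schema, just lines of prose.
    notes <- readLines("report.txt")
    length(notes)   # the line count is about the only structure available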


Current Analytical Architecture


Data Science projects need workspaces that are purpose-built for experimenting with data, with
flexible and agile data architectures. Most organizations still have data warehouses that provide
excellent support for traditional reporting and simple data analysis activities but unfortunately
have a more difficult time supporting more robust analyses. A typical analytical data architecture
that may exist within an organization is described below.

The figure shows a typical data architecture and several of the challenges it presents to data scientists
and others trying to do advanced analytics. This section examines the data flow to the Data
Scientist and how this individual fits into the process of getting data to analyze on projects.
1. For data sources to be loaded into the data warehouse, data needs to be well understood,
structured, and normalized with the appropriate data type definitions. Although this kind
of centralization enables security, backup, and failover of highly critical data, it also
means that data typically must go through significant preprocessing and checkpoints
before it can enter this sort of controlled environment, which does not lend itself to data
exploration and iterative analytics.
2. As a result of this level of control on the enterprise data warehouse (EDW), additional local systems may emerge in the
form of departmental warehouses and local data marts that business users create to
accommodate their need for flexible analysis. These local data marts may not have the
same constraints for security and structure as the main EDW and allow users to do some
level of more in-depth analysis. However, these one-off systems reside in isolation, often
are not synchronized or integrated with other data stores, and may not be backed up.


3. Once in the data warehouse, data is read by additional applications across the enterprise
for BI and reporting purposes. These are high-priority operational processes getting
critical data feeds from the data warehouses and repositories.
4. At the end of this workflow, analysts get data provisioned for their downstream analytics.
Because users generally are not allowed to run custom or intensive analytics on
production databases, analysts create data extracts from the EDW to analyze data offline
in R or other local analytical tools. Many times these tools are limited to in-memory
analytics on desktops analyzing samples of data, rather than the entire population of a
dataset. Because these analyses are based on data extracts, they reside in a separate
location, and the results of the analysis—and any insights on the quality of the data or
anomalies—rarely are fed back into the main data repository.
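
The following R sketch illustrates this workflow of pulling an offline extract and analyzing
a sample in memory. It uses the CRAN packages DBI and RSQLite; the database file and table
name are purely illustrative:

    # Pull an offline extract from a local copy of warehouse data.
    library(DBI)
    con    <- dbConnect(RSQLite::SQLite(), "edw_extract.sqlite")
    orders <- dbGetQuery(con, "SELECT * FROM orders")
    dbDisconnect(con)

    # In-memory analysis on a sample rather than the entire population.
    idx     <- sample(nrow(orders), size = min(10000, nrow(orders)))
    sampled <- orders[idx, ]
    summary(sampled)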

Role of Data Scientists


Data scientists play the most active roles in the four A’s of data:
Data architecture
A data scientist would help the system architect by providing input on how the data would need
to be routed and organized to support the analysis, visualization, and presentation of the data to
the appropriate people.

Data acquisition
Representing, transforming, grouping, and linking the data are all tasks that need to occur before
the data can be profitably analyzed, and these are all tasks in which the data scientist is actively
involved.

Data analysis
The analysis phase is where data scientists are most heavily involved. In this context we are
using analysis to include summarization of the data, using portions of data (samples) to make
inferences about the larger context, and visualization of the data by presenting it in tables,
graphs, and even animations.

Data archiving
Finally, the data scientist must become involved in the archiving of the data. Preservation of
collected data in a form that makes it highly reusable - what you might think of as "data
curation" - is a difficult challenge because it is so hard to anticipate all of the future uses of the
data.

The data science process


The data science process typically consists of six steps.

1. Setting the research goal


Data science is mostly done in the context of an organization. When the business asks you to
perform a data science project, you’ll first prepare a project charter. This charter contains
information such as what you’re going to research, how the company benefits from that, what
data and resources you need, a timetable, and deliverables.
2. Retrieving data
The second step is to collect data. You’ve stated in the project charter which data you need and
where you can find it. In this step you ensure that you can use the data in your program, which

means checking the existence, quality, and access to the data. Data can also be delivered by third-
party companies and take many forms ranging from Excel spreadsheets to different types of
databases.
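
A minimal R sketch of these checks; the file name customers.csv is a hypothetical example:

    # Verify existence and access, then run a first quality check.
    path <- "customers.csv"
    stopifnot(file.exists(path))                           # existence
    customers <- read.csv(path, stringsAsFactors = FALSE)  # access
    colSums(is.na(customers))                              # quality: missing values per column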
3. Data cleansing
Data collection is an error-prone process; in this phase you enhance the quality of the data and
prepare it for use in subsequent steps. This phase consists of three subphases: data cleansing
removes false values from a data source and inconsistencies across data sources, data
integration enriches data sources by combining information from multiple data sources, and
data transformation ensures that the data is in a suitable format for use in your models.
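
A small R sketch of the three subphases; the data frames, columns, and values below are all
invented for illustration:

    # Cleansing: remove a false value from a data source.
    df_a <- data.frame(id = 1:4, age = c(25, -1, 31, 47))   # -1 is a false value
    df_a$age[df_a$age < 0] <- NA

    # Integration: enrich by combining information from multiple sources.
    df_b <- data.frame(id = 1:4, city = c("Pune", "Delhi", "Pune", "Mumbai"))
    combined <- merge(df_a, df_b, by = "id")

    # Transformation: put the data in a suitable format for the model.
    combined$log_age <- log(combined$age)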
4. Data exploration
Data exploration is concerned with building a deeper understanding of your data. You try to
understand how variables interact with each other, the distribution of the data, and whether
there are outliers. To achieve this you mainly use descriptive statistics, visual techniques, and
simple modeling. This step often goes under the abbreviation EDA for Exploratory Data Analysis.
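
A short EDA sketch using base R and its built-in iris dataset:

    # Descriptive statistics and simple visual techniques.
    data(iris)
    summary(iris$Sepal.Length)                      # distribution of a single variable
    hist(iris$Sepal.Length, main = "Sepal length")  # visualize a single variable
    boxplot(Sepal.Length ~ Species, data = iris)    # spot outliers across groups
    pairs(iris[, 1:4])                              # how variables interact with each other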
5. Data modeling or model building
In this phase you use models, domain knowledge, and insights about the data you found in the
previous steps to answer the research question. You select a technique from the fields of
statistics, machine learning, operations research, and so on. Building a model is an iterative step
between selecting the variables for the model, executing the model, and model diagnostics.
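
A brief sketch of this iteration with a simple linear model on the built-in iris data:

    # Select variables and execute the model.
    fit <- lm(Sepal.Length ~ Sepal.Width + Petal.Length, data = iris)

    # Model diagnostics: coefficients, fit quality, residual behavior.
    summary(fit)          # coefficients and R-squared
    plot(fit, which = 1)  # residuals vs. fitted values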
6. Presentation and automation
Finally, you present the results to your business. These results can take many forms, ranging
from presentations to research reports. Sometimes you’ll need to automate the execution of the
process because the business will want to use the insights you gained in another project or
enable an operational process to use the outcome from your model.
AN ITERATIVE PROCESS
The previous description of the data science process gives you the impression that you walk
through this process in a linear way, but in reality you often have to step back and rework
certain findings. For instance, you might find outliers in the data exploration phase that point to
data import errors. As part of the data science process you gain incremental insights, which may
lead to new questions. To prevent rework, make sure that you scope the business question
clearly and thoroughly at the start.


Challenges in Data Science


• Preparing data (noisy, incomplete, diverse, streaming, …)
• Analyzing data (scalable, accurate, real-time, advanced methods, probabilities and uncertainties)
• Representing analysis results, i.e., the data product (storytelling, interactive, explainable, …)

****************************
