Big Data Analytics Mod1
Module-1
Introduction to Big Data
1.1 Need of Big Data
The rise in technology has led to the production and storage of voluminous amounts of data.
Earlier, megabytes (10⁶ B) were handled; nowadays petabytes (10¹⁵ B) are used for
processing, analysis, discovering new facts and generating new knowledge. Conventional
systems for storage, processing and analysis pose challenges with the large growth in volume of
data, the variety of data, various forms and formats, increasing complexity, faster generation of
data, and the need for quick processing, analysis and usage.
Figure 1.1 shows data usage and growth. As size and complexity increase, the proportion of
unstructured data types also increases.
An example of a traditional tool for structured data storage and querying is RDBMS.
Volume, velocity and variety (3Vs) of data require the use of a number of programs and tools
for analyzing and processing at a very high speed.
Web Data
Web data is the data present on web servers (or enterprise servers) in the form of text, images,
videos, audios and multimedia files for web users. A user (client software) interacts with this
data. A client can access (pull) data of responses from a server. A server can also publish
(push) or post data to clients (after they register a subscription). Internet applications including
web sites, web services, web portals, online business applications, emails, chats, tweets and
social networks provide and consume the web data.
Structured Data
Structured data conform and associate with data schemas and data models. Structured data
are found in tables (rows and columns). Nearly 15-20% of data are in structured or semi-
structured form.
Examples of semi-structured data are XML and JSON documents. Semi-structured data
contain tags or other markers, which separate semantic elements and enforce hierarchies
of records and fields within the data. Semi-structured data do not conform to formal data
model structures; the data do not associate with data models, such as the relational
database and table models.
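As a minimal illustration, the following Python sketch parses a hypothetical JSON document (the record and its fields are invented for this example). The nested keys act as the tags and markers described above, and the optional "coupon" field shows why a fixed relational schema does not apply.

```python
import json

# A hypothetical semi-structured record: tags (keys) separate semantic
# elements and enforce a hierarchy, but there is no fixed relational schema.
doc = '''
{
  "customer": {
    "name": "A. Kumar",
    "orders": [
      {"id": 101, "amount": 450.0},
      {"id": 102, "amount": 1299.5, "coupon": "FEST10"}
    ]
  }
}
'''

record = json.loads(doc)
# Fields may be present in some records and absent in others (e.g., "coupon").
for order in record["customer"]["orders"]:
    print(order["id"], order.get("coupon", "no coupon"))
```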
Unstructured Data
• Mobile data: Text messages, chat messages, tweets, blogs and comments
• Website content data: YouTube videos, browsing data, e-payments, web store
data, user-generated maps
▪ Data of a very large size, typically to the extent that its manipulation and management
present significant logistical challenges. —Oxford English Dictionary
▪ Big Data refers to data sets whose size is beyond the ability of typical database
software tools to capture, store, manage and analyze.
• Veracity: quality of data captured, which can vary greatly, affecting its
accurate analysis
Big Data sources: Data storage, distributed file system, Operational Data Store (ODS), data marts, data warehouse, NoSQL databases (MongoDB, Cassandra), sensor data, audit trail of financial transactions, external data such as web, social media, weather data, health records
Big Data formats: Unstructured, semi-structured and multi-structured data
Data Store structures: Web, enterprise or cloud servers, data warehouse, row-oriented data for OLTP, column-oriented for OLAP, records, graph database, hashed entries for key/value pairs
Processing data rates: Batch, near-time, real-time, streaming
Processing Big Data rates: High volume, velocity, variety and veracity; batch, near real-time and streaming data processing
Analysis types: Batch, scheduled, near real-time dataset analytics
Big Data processing methods: Batch processing (for example, using MapReduce, Hive or Pig), real-time processing (for example, using Spark Streaming, Spark SQL, Apache Drill)
Data analysis methods: Statistical analysis, predictive analysis, regression analysis, Mahout, machine-learning algorithms, clustering algorithms, classifiers, text analysis, social network analysis, location-based analysis, diagnostic analysis, cognitive analysis
Data usage: Human, business process, knowledge discovery, enterprise applications
Big Data can be classified on the basis of its characteristics that are used for designing data
architecture for processing and analytics.
Following are the techniques deployed for Big Data storage, applications, data processing and analytics:
• Huge data volumes storage, data distribution, high-speed networks and high-performance
computing
• Applications scheduling using open source, reliable, scalable, distributed file system, distributed
database, parallel and distributed computing systems, such as Hadoop or Spark
• Open source tools which are scalable, elastic and provide virtualized environment, clusters of
data nodes, task and thread management
• In-memory data management using columnar or Parquet formats during program execution
• Data mining and analytics, data retrieval, data reporting, data visualization and machine-
learning Big Data tools.
• Big Data needs processing of large data volume, and therefore needs intensive
computations.
• Processing complex applications with large datasets (terabyte to petabyte datasets) need
hundreds of computing nodes.
• Processing of this much distributed data within a short time and at minimum cost is
problematic.
• Scalability is the capability of a system to handle the workload as per the magnitude of the
work.
• System capability needs increment with the increased workloads.
• When the workload and complexity exceed the system capacity, scale it up and scale it out.
• Scalability enables increase or decrease in the capacity of data storage, processing & analytics.
Analytical Scalability
Vertical scalability means scaling up the given system’s resources and increasing the system’s
analytics, reporting and visualization capabilities. This is an additional way to solve problems
of greater complexities. Scaling up means designing the algorithm according to the architecture
that uses resources efficiently.
If x terabytes of data take time t for processing, and the code size or complexity increases
by a factor n, then scaling up means that processing takes a time equal to, less than or much less than (n * t).
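A brief worked example with illustrative numbers (not from the source): if processing x = 1 TB takes t = 20 minutes and the workload complexity grows by a factor n = 4, effective scaling up means the upgraded system completes the larger task in a time equal to, less than or much less than 4 × 20 = 80 minutes.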
Horizontal scalability means increasing the number of systems working in coherence and
scaling out the workload. Processing different datasets of a large dataset deploys horizontal
scalability. Scaling out means using more resources and distributing the processing and storage
tasks in parallel. The easiest way to scale up the execution of analytics software is to
implement it on a bigger machine with more CPUs, for greater volume, velocity, variety and
complexity of data. The software will perform better on a bigger machine.
A distributed computing model uses cloud, grid or clusters, which process and analyze big and
large datasets on distributed computing nodes connected by high-speed networks.
Big Data processing uses a parallel, scalable and no-sharing program model, such as
MapReduce, for computations on it.
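The sketch below is a minimal, single-process illustration of the MapReduce word-count pattern in Python; in a real Hadoop deployment the map and reduce phases run in parallel on many nodes with no shared state, with a shuffle step grouping keys between them.

```python
from collections import defaultdict

# Minimal single-process sketch of the MapReduce word-count pattern.
# In Hadoop, map and reduce tasks run in parallel on many nodes with no
# shared memory; here both phases run locally for illustration.

def map_phase(lines):
    # Emit (key, value) pairs: one ("word", 1) pair per word.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Group by key, then aggregate the values for each key.
    groups = defaultdict(int)
    for word, count in pairs:
        groups[word] += count
    return dict(groups)

if __name__ == "__main__":
    lines = ["big data needs parallel processing",
             "parallel processing needs many nodes"]
    print(reduce_phase(map_phase(lines)))
    # {'big': 1, 'data': 1, 'needs': 2, 'parallel': 2, ...}
```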
[Figure: Distributed computing on multiple nodes — Big Data, large data, small to medium data]
⏵ One of the best approaches for data processing is to perform parallel and distributed
computing in a cloud-computing environment.
⏵ Cloud resources can be Amazon Web Services (AWS) Elastic Compute Cloud (EC2),
Microsoft Azure or Apache CloudStack.
⏵ Cloud computing features include: on-demand service, resource pooling, scalability and
accountability.
⏵ Cloud services can be accessed from anywhere and at any time through the Internet.
Cloud Services
⏵ Providing access to resources, such as hard disks, network connections, database storage,
data centers and virtual server spaces, is Infrastructure as a Service (IaaS).
⏵ Some examples are Tata Communications, Amazon data centers and virtual servers.
⏵ Apache CloudStack is an open source software for deploying and managing a large
network of virtual machines, and offers public cloud services which provide highly
scalable Infrastructure as a Service (IaaS).
Platform as a Service
⏵ Software at the cloud supports and manages the services, storage, networking, deploying,
testing, collaborating, hosting and maintaining applications.
⏵ Examples are Hadoop cloud services (IBM BigInsights, Microsoft Azure HDInsight,
Oracle Big Data Cloud Service).
Software as a service
⏵ Software applications are hosted by a service provider and made available to customers
over the Internet.
⏵ Some examples are Google SQL, IBM BigSQL, Microsoft Polybase and Oracle Big
Data SQL.
Cluster Computing
A cluster is a group of computers connected by a network and working together as a single computing resource. Big Data processing deploys clusters of data nodes for parallel and distributed computing.
Data analytics needs a number of sequential steps. The Big Data architecture design task
simplifies when using the logical layers approach. Figure 1.2 shows the logical layers and the
functions, such as data processing, which are considered in Big Data architecture.
Figure 1.2 Design of logical layers in a data processing architecture, and functions in the layers
Logical layer 1 (L1) is for identifying data sources, which are external, internal or both. Layer 2
(L2) is for data ingestion. Data ingestion means a process of absorbing information, just like the
process of absorbing nutrients and medications into the body by eating or drinking them.
Ingestion is the process of obtaining and importing data for immediate use or transfer. Ingestion
may be in batches or in real time, using pre-processing or semantics.
Layer 2
⏵ Ingestion and ETL processes, either in real time, which means store and use the data as
generated, or in batches.
Layer 3
⏵ Data storage using the Hadoop distributed file system or NoSQL data stores: HBase,
Cassandra, MongoDB.
Layer 4
⏵ Data processing software such as MapReduce, Hive, Pig, Spark, Mahout, Spark
Streaming (a minimal sketch of such Layer 4 processing follows)
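A minimal PySpark sketch of Layer 4 style processing, assuming the pyspark package is installed; the input file sales.csv and its region and amount columns are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

# Layer-4 style processing sketch with Spark (assumes pyspark is installed;
# the input path "sales.csv" and its columns are hypothetical).
spark = SparkSession.builder.appName("Layer4Demo").getOrCreate()

df = spark.read.csv("sales.csv", header=True, inferSchema=True)

# Aggregate total amount per region, then display the small result.
totals = df.groupBy("region").sum("amount")
totals.show()

spark.stop()
```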
Layer 5
⏵ Data integration
⏵ Analytics (real time, near real time, scheduled batches), business processes (BPs),
business intelligence (BI), knowledge discovery
Data managing means enabling, controlling, protecting, delivering and enhancing the value
of data and information assets. Reports, analysis and visualizations need well-defined data.
Data management functions include:
⏵ Data governance, which includes establishing the processes for ensuring the availability,
usability, integrity, security and high quality of data. The processes enable trustworthy
data availability for analytics, followed by decision making at the enterprise.
⏵ Managing data security, data access control, deletion, privacy and security
⏵ Creation of reference and master data, and data control and supervision
⏵ Integrated data management, enterprise-ready data creation, fast access and analysis,
automation and simplification of operations on the data
Applications, programs and tools use data. Sources can be external, such as sensors, trackers,
web logs, computer systems logs and feeds. Sources can be machines, which generate data from
data-creating programs.
A source can be internal. Sources can be data repositories, such as a database, relational
database, flat file, spreadsheet, mail server, web server, directory services, or even text
files such as comma-separated values (CSV) files. A source may be a data store for
applications. Data sources may be:
⏵ structured
⏵ semi-structured
⏵ multi-structured or unstructured
Structured Data Source
⏵ Data source for ingestion, storage and processing can be a file, database or streaming
data.
⏵ The source may be on the same computer running a program, or on a networked computer.
⏵ Structured data sources are SQL Server, MySQL, Microsoft Access database, Oracle
DBMS, IBM DB2, Informix, Amazon SimpleDB or a file-collection directory at a
server.
⏵ The data may need high-velocity processing. Sources may also be distributed file systems.
⏵ The sources may be of file types such as .txt (text file) or .csv (comma-separated values
file). Data may be in the form of key-value pairs, such as hashed key-value pairs (a small
sketch follows).
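A small ingestion sketch in Python, reading a structured .csv source into hashed key-value pairs; the file name sensors.csv and its columns are hypothetical.

```python
import csv

# Ingestion sketch: read a structured .csv source into key-value pairs
# (the file name and columns are hypothetical).
with open("sensors.csv", newline="") as f:
    reader = csv.DictReader(f)   # the header row supplies the keys
    store = {}
    for row in reader:
        # Hash-key/value style entry: sensor_id -> latest reading
        store[row["sensor_id"]] = float(row["reading"])

print(store)
```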
The data sources can be sensors, sensor networks, signals from machines, devices,
controllers and intelligent edge nodes of different types in industry, M2M communication
and GPS systems.
Sensors are electronic devices that sense the physical environment. They are used for
measuring temperature, pressure, humidity, light intensity, traffic in proximity,
acceleration, locations, object(s) proximity, orientations, magnetic intensity, and other
physical states and parameters. Sensors play an active role in the automotive industry.
RFIDs and their sensors play an active role in RFID based supply chain management, and
tracking parcels, goods and delivery.
High-quality data enables all the required operations, analysis, decisions, planning and
knowledge discovery to be done correctly. Data quality depends on five R's, as follows:
⏵ relevancy
⏵ recency
⏵ range
⏵ robustness
⏵ reliability
Data Integrity
Data integrity refers to the maintenance of consistency and accuracy in data over its usable
life. Software which stores, processes or retrieves the data should maintain the integrity of the data.
The following factors affect data quality:
⏵ Data Noise
⏵ Outlier
⏵ Missing Value
⏵ Duplicate value
Data Noise
⏵ Noise in data refers to data giving additional meaningless information besides the true
(actual/required) information.
⏵ Noise is random in character, which means the frequency with which it occurs varies
over time.
Outlier
⏵ An outlier in data refers to data which appears to not belong to the dataset, for example,
data that is outside an expected range.
⏵ Actual outliers need to be removed from the dataset, else the result will be affected by a
small or large amount.
Missing Values
⏵ Another factor affecting data quality is missing values. A missing value implies data not
appearing in the dataset.
Duplicate Values
⏵ Another factor affecting data quality is duplicate values. A duplicate value implies the
same data appearing two or more times in a dataset (see the sketch below).
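The sketch below (illustrative values; assumes the pandas library is installed) shows simple checks for missing values, duplicates and out-of-range outliers before analysis.

```python
import pandas as pd

# Data-quality checks sketch (illustrative values; assumes pandas installed).
df = pd.DataFrame({"temp": [21.5, 22.0, None, 22.0, 950.0]})

print(df["temp"].isna().sum())          # count of missing values
print(df.duplicated().sum())            # count of duplicate rows

# Flag outliers outside an expected range (here, -40 to 60 degrees).
outliers = df[(df["temp"] < -40) | (df["temp"] > 60)]
print(outliers)

# Remove duplicates, missing values and out-of-range values.
clean = df.drop_duplicates().dropna()
clean = clean[(clean["temp"] >= -40) & (clean["temp"] <= 60)]
```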
Data pre-processing is an important step at the ingestion layer. Pre-processing is a must before
data mining and analytics, and also before running a Machine Learning (ML) algorithm.
Pre-processing needs are:
⏵ ELT (extract, load, transform) processing
Data Cleaning
⏵ Data cleaning is done before mining of data. Incomplete or irrelevant data may result in
misleading decisions.
⏵ Data cleaning tools help in refining and structuring data into usable data. Examples of
such tools are OpenRefine and DataCleaner.
Data Enrichment
⏵ "Data enrichment refers to operations or processes which refine, enhance or improve the
raw data. “
⏵ Data editing refers to the process of reviewing and adjusting the acquired datasets.
⏵ Editing methods are (i) interactive, (ii) selective, (iii) automatic, (iv) aggregating and (v)
distribution.
Data Reduction
⏵ Data wrangling refers to the process of transforming and mapping the data, so that the
results from analytics are appropriate and valuable.
⏵ Mapping transforms data into another format, which makes it valuable for analytics and
data visualizations.
Pre-processed data may export in formats such as the following (a sketch follows the list):
⏵ JavaScript Object Notation (JSON) as batches of object arrays or resource arrays
⏵ Key-value pairs
⏵ Hash key-value pairs
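A short Python sketch of mapping raw records into the formats listed above; the records and field names are invented for illustration.

```python
import json

# Wrangling sketch: map raw records into the export formats listed above
# (records and field names are hypothetical).
raw = [("S1", "2024-01-05", 21.5), ("S2", "2024-01-05", 19.8)]

# JSON as a batch of object arrays
json_batch = json.dumps(
    [{"sensor": s, "date": d, "value": v} for s, d, v in raw])

# Key-value pairs (hash-key style: composite key -> value)
kv_pairs = {f"{s}:{d}": v for s, d, v in raw}

print(json_batch)
print(kv_pairs)
```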
Figure 1.3 shows resulting data pre-processing, data mining, analysis, visualization and data store.
The data exports to cloud services. The results integrate at the enterprise server or data
warehouse.
Cloud Services
Cloud offers various services. These services can be accessed through a cloud client (client
application), such as a web browser, SQL or other client. Figure 1.4 shows data-store export from
machines, files, computers, web servers and web services. The data exports to clouds, such
as IBM, Microsoft, Oracle, Amazon, Rackspace, TCS, Tata Communications or Hadoop cloud
services.
Figure 1.4 Data store export from machines, files, computers, web servers and web
services
The Google Cloud Platform provides a cloud service called BigQuery. Figure 1.5 shows the
BigQuery cloud service at the Google Cloud Platform. The data exports from a table or
partition schema into JSON, CSV or AVRO files after pre-processing of data from the sources.
The Data Store first pre-processes data from machine and file data sources. Pre-processing
transforms the data into a table or partition schema or supported data formats, for example,
JSON, CSV and AVRO. Data then exports in compressed or uncompressed data formats.
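As a hedged sketch of such an export using the google-cloud-bigquery Python client (the project, dataset, table and bucket names are hypothetical placeholders, and authentication setup is omitted):

```python
from google.cloud import bigquery

# Export sketch with the google-cloud-bigquery client (project, dataset,
# table and bucket names are hypothetical placeholders).
client = bigquery.Client()

job_config = bigquery.ExtractJobConfig()
job_config.destination_format = bigquery.DestinationFormat.AVRO  # or CSV / JSON

# Export the table contents to a Cloud Storage URI.
job = client.extract_table(
    "my-project.my_dataset.my_table",
    "gs://my-bucket/export/my_table.avro",
    job_config=job_config,
)
job.result()  # wait for the export job to finish
```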
This section describes data storage and analysis, and compares Big Data management and
analysis with traditional database management systems.
SQL
An RDBMS uses SQL (Structured Query Language). SQL is a language for viewing or
changing (updating, inserting, appending or deleting) databases.
1. Create schema, which is a structure that contains the description of objects (base tables,
views, constraints) created by a user. The user can describe the data and define the data in
the database.
2. Create catalog, which consists of a set of schemas which describe the database.
3. Data Definition Language (DDL) for the commands which define a database, including
creating, altering and dropping tables and establishing constraints. A user can create and
drop databases and tables, establish foreign keys, and create views, stored procedures and
functions in the database, as sketched after this list.
4. Data Manipulation Language (DML) for commands that maintain and query the database.
A user can manipulate (INSERT/UPDATE) and access (SELECT) the data.
5. Data Control Language (DCL) for commands that control a database, and include
administering of privileges and committing. A user can set (grant, add or revoke)
permissions on tables, procedures and views.
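A minimal sketch of DDL and DML commands issued through Python's built-in sqlite3 module; the table and data are invented for illustration, and SQLite omits the DCL layer (GRANT/REVOKE belong to server DBMSs such as MySQL or Oracle).

```python
import sqlite3

# Sketch of DDL and DML through Python's built-in sqlite3 module
# (table and data are illustrative; SQLite has no DCL commands).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: create a table with constraints.
cur.execute("""CREATE TABLE student (
    usn   TEXT PRIMARY KEY,
    name  TEXT NOT NULL,
    marks INTEGER)""")

# DML: insert, update and query rows.
cur.execute("INSERT INTO student VALUES ('1XX21CS001', 'Asha', 82)")
cur.execute("UPDATE student SET marks = 85 WHERE usn = '1XX21CS001'")
print(cur.execute("SELECT name, marks FROM student").fetchall())

conn.commit()
conn.close()
```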
⏵ A columnar format in-memory allows faster data retrieval when only a few columns in a
table need to be selected during query processing or aggregation.
⏵ A row format in-memory allows much faster data processing during OLTP.
⏵ Each row record has corresponding values in multiple columns, and these values store at
consecutive memory addresses in row format; the CPU can thus access all columns of a
record in a single instance of access to the memory (illustrated in the sketch below).
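A plain-Python sketch contrasting the two layouts; the data are illustrative. A row layout keeps one record's values together (OLTP-style record access), while a columnar layout keeps one column's values together (OLAP-style aggregation).

```python
# Row format vs columnar format sketch (illustrative data).
# Row format: one record's values lie together — suits OLTP,
# where whole records are read and written.
rows = [
    {"id": 1, "city": "Mysuru", "sales": 120},
    {"id": 2, "city": "Hubli",  "sales": 340},
]

# Columnar format: one column's values lie together — suits OLAP,
# where an aggregate touches only a few columns.
columns = {
    "id":    [1, 2],
    "city":  ["Mysuru", "Hubli"],
    "sales": [120, 340],
}

print(rows[1])                  # fetch a full record (row layout)
print(sum(columns["sales"]))    # aggregate one column (columnar layout)
```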
Enterprise Data-Store Server and Data Warehouse
⏵ Enterprise data servers use data from several distributed sources which store data using
various technologies.
Figure 1.6 Steps 1 to 5 in enterprise data integration and management with Big Data for
high-performance computing, using local and cloud resources for the analytics, applications
and services
NoSQL
⏵ Data in NoSQL databases are considered semi-structured. A Big Data store uses NoSQL.
NoSQL stands for No SQL or Not Only SQL.
⏵ The stores do not integrate with applications using SQL. NoSQL is also used in cloud data
stores (see the sketch below).
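A minimal NoSQL sketch using the pymongo client, assuming a MongoDB server runs at the default localhost port; the database, collection and documents are hypothetical.

```python
from pymongo import MongoClient

# NoSQL store sketch using pymongo (assumes a MongoDB server at the
# default localhost port; database/collection names are hypothetical).
client = MongoClient("mongodb://localhost:27017")
orders = client["shop_db"]["orders"]

# Documents need no fixed schema: fields can differ between records.
orders.insert_one({"id": 101, "amount": 450.0})
orders.insert_one({"id": 102, "amount": 1299.5, "coupon": "FEST10"})

print(orders.find_one({"id": 102}))
client.close()
```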
Figure 1.7 Coexistence of RDBMS for traditional server data, NoSQL, and Hadoop, Spark
and compatible Big Data clusters
A Big Data platform supports large datasets and volumes of data. The data generate at a
higher velocity, in more varieties or with higher veracity. Managing Big Data requires large
resources of MPPs, cloud, parallel processing and specialized tools. A Big Data platform
should provision tools and services for data management, storage and analytics of the Big
Data captured at companies and services. These require the following:
⏵ Massive parallelism
⏵ Big Data sources: data storages, data warehouse, Oracle Big Data, MongoDB NoSQL,
Cassandra NoSQL
⏵ Data sources: sensors, audit trail of financial transactions data, external data such as
web, social media, weather data, health records data
Hadoop
A Big Data platform consists of Big Data storage(s), server(s), and data management and
business intelligence software. Storage can deploy the Hadoop Distributed File System (HDFS),
or NoSQL data stores such as HBase, MongoDB and Cassandra. HDFS is an open-source
storage system, and is a scalable, self-managing and self-healing file system.
The Hadoop system packages an application-programming model. Hadoop is a scalable and
reliable parallel computing platform. Hadoop manages Big Data distributed databases. Figure
1.8 shows a Hadoop-based Big Data environment. Small-height cylinders represent MapReduce
and the big ones represent Hadoop.
A stack consists of a set of software components and data store units. Applications, machine-
learning algorithms, analytics and visualization tools use the Big Data Stack (BDS) at a cloud
service, such as Amazon EC2, Azure or a private cloud. The stack uses clusters of high-
performance machines.
Types: Examples
MapReduce: Hadoop, Apache Hive, Apache Pig, Cascading, Cascalog, mrjob (Python MapReduce library), Apache S4, MapR, Acunu, Apache Flume, Apache Kafka
NoSQL Databases: MongoDB, Apache CouchDB, Apache Cassandra, Aerospike, Apache HBase, Hypertable
Processing: Spark, IBM BigSheets, PySpark, R, Yahoo! Pipes, Amazon Mechanical Turk, Datameer, Apache Solr/Lucene, ElasticSearch
Servers: Amazon EC2, S3, Google BigQuery, Google App Engine, AWS Elastic Beanstalk, Salesforce Heroku
Data analytics can be formally defined as the statistical and mathematical data analysis that
clusters, segments, ranks and predicts future possibilities. An important feature of data
analytics is its predictive, forecasting and prescriptive capability. Analytics uses historical
data and forecasts new values or results. Analytics suggests techniques which will provide
the most efficient and beneficial results for an enterprise.
Analysis of data is a process of inspecting, cleaning, transforming and modeling data with
the goal of discovering useful information, suggesting conclusions and supporting decision
making.
Phases in analytics
Analytics has the following phases before deriving the new facts, providing business
intelligence and generating new knowledge:
1. Descriptive analytics enables deriving additional value from visualizations and reports.
2. Predictive analytics enables extraction of new facts and knowledge, and then predicts
and forecasts.
3. Prescriptive analytics enables derivation of additional values and suggests new options
to maximize the benefits.
4. Cognitive analytics enables derivation of additional value and undertaking better
decisions.
Figure 1.9 shows an overview of a reference model for analytics architecture. The
figure also shows, on the right-hand side, the Big Data file systems, machine
learning algorithms and query languages, and the usage of the Hadoop ecosystem.
Data are important for most aspects of marketing, sales and advertising. Customer Value
(CV) depends on three factors: quality, service and price. Big Data analytics deploy
large volumes of data to identify and derive intelligence using predictive models about
individuals. The facts enable marketing companies to decide what products to sell.
Big Data analytics enable fraud detection. Big Data usage has the following
features for enabling detection and prevention of frauds:
⏵ Fusing of existing data at an enterprise data warehouse with data from sources
such as social media, websites, blogs and e-mails, thus enriching existing data
⏵ Providing high-volume data mining and new innovative applications, thus
leading to new business intelligence and knowledge discovery
Large volume and velocity of Big Data provide greater insights, but also associate risks
with the data used. Data included may be erroneous, less accurate or far from reality.
Analytics introduces new errors due to such data.
Five data risks, described by Bernard Marr, are data security, data privacy breach, costs
affecting profits, bad analytics and bad data.
Financial institutions, such as banks, extend loans to industrial and household sectors.
These institutions in many countries face credit risks, mainly the risks of (i) loan defaults,
and (ii) non-timely return of interest and the principal amount. Financing institutions are
keen to get insights into the following:
Big Data analytics in healthcare use the following data sources: (i) clinical records, (ii)
pharmacy records, (iii) electronic medical records, (iv) diagnosis logs and notes and (v)
additional data, such as deviations from a person's usual activities, medical leaves from
the job, and social interactions. Healthcare analytics using Big Data can facilitate the
following:
⏵ Big Data analytics deploys large volumes of data to identify and derive
intelligence using predictive models about individuals. Big Data driven
approaches help research in medicine, which can help patients.
⏵ Wearable device data, which record during active as well as inactive periods,
provide a better understanding of patient health, and better risk profiling of
the user for certain diseases.
The impact of Big Data is tremendous on the digital advertising industry.
The digital advertising industry sends advertisements using SMS, e-mails,
WhatsApp, LinkedIn, Facebook, Twitter and other mediums.
Big Data captures data from multiple sources in large volume, velocity and
variety, much of it unstructured, and enriches the structured data at the enterprise
data warehouse. Big Data real-time analytics provide emerging trends and
patterns, and gain actionable insights for facing competition from similar
products. The data help digital advertisers to discover new relationships,
less competitive regions and areas.
Advertising on digital media needs optimization. Too much usage can also have a
negative effect. Phone calls, SMSs and e-mail-based advertisements can be a
nuisance if sent without appropriate research on the potential targets. Analytics
helps in this direction. The usage of Big Data after appropriate filtering and
elimination is a crucial enabler of Big Data analytics, with appropriate data, data
forms and data handling in the right manner.