
Big Data Analytics Platforms

Reading Reference for Lecture 2

E6893 Big Data Analytics – Lecture 2: Big Data Platform © 2021 CY Lin, Columbia University
Reminder -- Apache Hadoop

The Apache™ Hadoop® project develops open-source software for reliable, scalable,
distributed computing.

The Apache Hadoop software library is a framework that allows for the distributed processing
of large data sets across clusters of computers using simple programming models. It is
designed to scale up from single servers to thousands of machines, each offering local
computation and storage. Rather than relying on hardware to deliver high availability, the
library itself is designed to detect and handle failures at the application layer, thereby
delivering a highly available service on top of a cluster of computers, each of which may be
prone to failures.

The project includes these modules:


• Hadoop Common: The common utilities that support the other Hadoop modules.
• Hadoop Distributed File System (HDFS™): A distributed file system that provides high-
throughput access to application data.
• Hadoop YARN: A framework for job scheduling and cluster resource management.
• Hadoop MapReduce: A YARN-based system for parallel processing of large data sets.

http://hadoop.apache.org
Reminder -- Hadoop-related Apache Projects
• Ambari™: A web-based tool for provisioning, managing, and monitoring Hadoop
clusters. It also provides a dashboard for viewing cluster health and the ability to view
MapReduce, Pig, and Hive applications visually.
• Avro™: A data serialization system.
• Cassandra™: A scalable multi-master database with no single points of failure.
• Chukwa™: A data collection system for managing large distributed systems.
• HBase™: A scalable, distributed database that supports structured data storage for
large tables.
• Hive™: A data warehouse infrastructure that provides data summarization and ad hoc
querying.
• Mahout™: A scalable machine learning and data mining library.
• Pig™: A high-level data-flow language and execution framework for parallel
computation.
• Spark™: A fast and general compute engine for Hadoop data. Spark provides a simple
and expressive programming model that supports a wide range of applications,
including ETL, machine learning, stream processing, and graph computation.
• Tez™: A generalized data-flow programming framework, built on Hadoop YARN, which
provides a powerful and flexible engine to execute an arbitrary DAG of tasks to
process data for both batch and interactive use-cases.
• ZooKeeper™: A high-performance coordination service for distributed applications.

Four distinctive layers of Hadoop

Common Use Cases for Big Data in Hadoop

• Log Data Analysis


– the most common use case; it fits the HDFS scenario perfectly: write once, read
often.
• Data Warehouse Modernization
• Fraud Detection
• Risk Modeling
• Social Sentiment Analysis
• Image Classification
• Graph Analysis
• Beyond

D. deRoos et al, Hadoop for Dummies, John Wiley & Sons, 2014
Example: Business Value of Log Analysis – “Struggle Detection”

D. deRoos et al, Hadoop for Dummies, John Wiley & Sons, 2014
Reminder -- MapReduce example

http://www.alex-hanna.com
MapReduce Process on User Behavior via Log Analysis

D. deRoos et al, Hadoop for Dummies, John Wiley & Sons, 2014

Setting Up the Hadoop Environment

• Local (standalone) mode


• Pseudo-distributed mode
• Fully-distributed mode

Data Storage Operations on HDFS

• Hadoop is designed to work best with a modest number of extremely large files.
• Average file sizes ➔ larger than 500MB.

• Write Once, Read Often model.


• Content of individual files cannot be modified, other than appending new data at
the end of the file.

• What we can do:


– Create a new file
– Append content to the end of a file
– Delete a file
– Rename a file
– Modify file attributes like owner
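The operations above amount to an append-only contract: new content may be added at the end of a file, but existing bytes are immutable. A toy Python sketch of that contract (illustrative only; the class and method names are hypothetical, not an HDFS API):

```python
class WriteOnceFile:
    """Toy model of HDFS's write-once, read-often file contract."""

    def __init__(self, owner):
        self._chunks = []
        self.owner = owner          # attributes like owner stay mutable

    def append(self, data):
        # Appending new data at the end of the file is allowed...
        self._chunks.append(data)

    def read(self):
        return "".join(self._chunks)

    def overwrite(self, offset, data):
        # ...but modifying existing content in place is not.
        raise PermissionError("HDFS file content cannot be modified in place")

f = WriteOnceFile(owner="alice")
f.append("log line 1\n")
f.append("log line 2\n")
print(f.read())
```

The usual HDFS workloads (log collection, data warehousing) fit this model naturally, since they only ever add records.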

Reminder -- Hadoop Distributed File System (HDFS)

http://hortonworks.com/hadoop/hdfs/
HDFS blocks

• A file is divided into blocks (default: 64 MB) and replicated in multiple places (default: 3 copies).

• Dividing files into blocks is normal for a file system; e.g., the default block size in Linux is 4 KB.
What sets HDFS apart is the scale.
• Hadoop was designed to operate at the petabyte scale.
• Every data block stored in HDFS has its own metadata and needs to be tracked by a
central server.
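To see why central metadata tracking matters, here is a small Python sketch of how the block count (and hence the NameNode's metadata load) grows with file size and replication. The 64 MB default is the one quoted above; the function names are hypothetical:

```python
def num_blocks(file_size_bytes, block_size=64 * 1024 * 1024):
    """Number of HDFS blocks a file occupies (default block size 64 MB)."""
    if file_size_bytes == 0:
        return 0
    return -(-file_size_bytes // block_size)   # ceiling division

def metadata_entries(file_size_bytes, replication=3, block_size=64 * 1024 * 1024):
    """Every block replica is tracked centrally, so metadata grows with
    the replication factor as well as with file size."""
    return num_blocks(file_size_bytes, block_size) * replication

# A 1 GB file at the 64 MB default:
print(num_blocks(1024**3))          # 16 blocks
print(metadata_entries(1024**3))    # 48 tracked replicas
```

This is also why Hadoop prefers a modest number of very large files: many tiny files would each still occupy at least one block's worth of metadata.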

HDFS blocks

• Replication patterns of data blocks in HDFS.

• When HDFS stores the replicas of the original blocks across the Hadoop cluster, it tries to
ensure that the block replicas are stored at different points of failure (e.g., on different racks).

HDFS is a User-Space-Level file system

Interaction between HDFS components

HDFS Federation

• Before Hadoop 2.0, the NameNode was a single point of failure and an operational
bottleneck.
• Before Hadoop 2, few Hadoop clusters were able to scale beyond 3,000 or 4,000
nodes.
• Multiple NameNodes can be used in Hadoop 2.x (HDFS High Availability
feature: one is in an Active state, the other is in a Standby state).

http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/HDFSHighAvailabilityWithNFS.html
High Availability of the NameNodes

• Active NameNode
• Standby NameNode – keeps the state of the block locations and block metadata in memory and
takes over the HDFS checkpointing responsibilities.

• JournalNode – if a failure occurs, the Standby Node reads all completed journal entries to
ensure the new Active NameNode is fully consistent with the state of the cluster.
• ZooKeeper – provides coordination and configuration services for distributed systems.
Several useful commands for HDFS

• All Hadoop commands are invoked by the bin/hadoop script.

• % hadoop fsck / -files -blocks

➔ lists the blocks that make up each file in HDFS.

• For HDFS, the URI scheme name is hdfs, and for the local file system, the scheme name is
file.
• A file or directory in HDFS can be specified in a fully qualified way, such as:
hdfs://namenodehost/parent/child or hdfs://namenodehost

• The HDFS file system shell commands are similar to Linux file commands, with the following
general syntax: hdfs dfs -file_cmd

• For instance, mkdir runs as:

$ hdfs dfs -mkdir /user/directory_name

Several useful commands for HDFS -- II

YARN

• YARN – Yet Another Resource Negotiator:

– A tool that enables other processing frameworks to run on Hadoop.

– A general-purpose resource management facility that can schedule and
assign CPU cycles and memory (and, in the future, other resources such as
network bandwidth) from the Hadoop cluster to waiting applications.

➔ YARN has converted Hadoop from simply a batch
processing engine into a platform for many different modes
of data processing, from traditional batch to interactive
queries to streaming analysis.

Four distinctive layers of Hadoop

Hadoop execution

1. The client application submits an application request to the JobTracker.


2. The JobTracker determines how many processing resources are needed to execute the entire
application.
3. The JobTracker looks at the state of the slave nodes and queues all the map tasks and reduce tasks
for execution.
4. As processing slots become available on the slave nodes, map tasks are deployed to the slave nodes.
Map tasks are assigned to nodes where the same data is stored.
5. The JobTracker monitors task progress. If a task fails, it is restarted on the next available slot.
6. After the map tasks are finished, reduce tasks process the interim result sets from the map tasks.
7. The result set is returned to the client application.

Limitation of original Hadoop 1

• MapReduce is a successful batch-oriented programming model.

• A glass ceiling in terms of wider use:

– Its exclusive tie to MapReduce meant it could be used only for batch-
style workloads, not for general-purpose analysis.

• Triggered demands for additional processing modes:


– Graph Analysis
– Stream data processing
– Message passing
➔ Demand is growing for real-time and ad-hoc analysis
➔ Analysts ask many smaller questions against subsets of data
and need a near-instant response.
➔ Some analysts are more used to SQL & Relational databases

YARN was created to move beyond the limitations
of a Hadoop 1 / MapReduce world.
Hadoop Data Processing Architecture

YARN’s application execution

• The client submits an application to the Resource Manager.

• The Resource Manager asks a Node Manager to create an Application Master instance and start it up.
• The Application Master initializes itself and registers with the Resource Manager.
• The Application Master figures out how many resources are needed to execute the application.
• The Application Master then requests the necessary resources from the Resource Manager. It sends
heartbeat messages to the Resource Manager throughout its lifetime.
• The Resource Manager accepts the request and queues it up.
• As the requested resources become available on the slave nodes, the Resource Manager grants the
Application Master leases for containers on specific slave nodes.
• …. ➔ Applications only need to decide how much memory their tasks can have.
Reminder -- MapReduce Data Flow

http://www.ibm.com/developerworks/cloud/library/cl-openstack-deployhadoop/
MapReduce Use Case Example – flight data

• Data Source: Airline On-time Performance data set (flight data set).
– All the logs of domestic flights from the period of October 1987 to April 2008.
– Each record represents an individual flight where various details are
captured:
• Time and date of arrival and departure
• Originating and destination airports
• Amount of time taken to taxi from the runway to the gate.

– Download it from Statistical Computing: http://stat-computing.org/dataexpo/2009/

Other datasets available from Statistical Computing

http://stat-computing.org/dataexpo/

Flight Data Schema

MapReduce Use Case Example – flight data

• Count the number of flights for each carrier

• Serial way (not MapReduce):

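The slide's serial code is not reproduced in this text version. A minimal serial version in Python, using a hypothetical miniature of the flight data set (the real files come from the Statistical Computing data expo, and UniqueCarrier is the carrier-code field in its schema), might look like:

```python
import csv
from io import StringIO

# Hypothetical sample rows standing in for the flight data set.
SAMPLE = """Year,UniqueCarrier,Origin,Dest
1987,AA,JFK,LAX
1987,UA,SFO,ORD
1987,AA,LAX,JFK
"""

def count_flights_serial(csv_text):
    """Serial (single-process) count of flights per carrier: one loop,
    one counter dictionary, one machine."""
    counts = {}
    for row in csv.DictReader(StringIO(csv_text)):
        carrier = row["UniqueCarrier"]
        counts[carrier] = counts.get(carrier, 0) + 1
    return counts

print(count_flights_serial(SAMPLE))   # {'AA': 2, 'UA': 1}
```

The limitation is apparent: one process must read every record, so the running time grows linearly with the size of the data set.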
MapReduce Use Case Example – flight data

• Count the number of flights for each carrier

• Parallel way:

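The parallel version splits the same computation into the map, shuffle, and reduce phases that MapReduce runs across many nodes. A toy single-process Python sketch of those phases (the carrier codes are hypothetical sample data; in real Hadoop each phase runs distributed):

```python
from collections import defaultdict

# Carrier field already extracted from each flight record.
RECORDS = ["AA", "UA", "AA", "DL", "UA", "AA"]

def map_phase(records):
    # Each mapper emits a (carrier, 1) pair per input record.
    return [(carrier, 1) for carrier in records]

def shuffle(pairs):
    # The framework groups the emitted values by key between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Each reducer sums the counts for one carrier independently.
    return {key: sum(values) for key, values in grouped.items()}

counts = reduce_phase(shuffle(map_phase(RECORDS)))
print(counts)   # {'AA': 3, 'UA': 2, 'DL': 1}
```

Because each map call and each per-key reduce is independent, the work can be spread over as many nodes as hold the data blocks.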
MapReduce application flow

MapReduce
steps for flight
data
computation

FlightsByCarrier application

Create FlightsByCarrier.java:

FlightsByCarrier application

FlightsByCarrier Mapper

FlightsByCarrier Reducer

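The Java mapper and reducer appear only as slide images here. An equivalent pair written in the style of Hadoop Streaming (a real Hadoop facility that pipes records through stdin/stdout scripts) is sketched below in Python; it assumes the carrier code is CSV column 9, as in the flight data schema, and the function names are illustrative:

```python
def mapper(lines):
    """Streaming-style mapper: emit 'carrier<TAB>1' per flight record."""
    for line in lines:
        fields = line.strip().split(",")
        if len(fields) > 8:
            yield f"{fields[8]}\t1"      # column 9 = UniqueCarrier

def reducer(sorted_lines):
    """Streaming-style reducer: input arrives sorted by key, so counts for
    one carrier are contiguous and can be summed with a running total."""
    current, total = None, 0
    for line in sorted_lines:
        key, value = line.split("\t")
        if key != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = key, 0
        total += int(value)
    if current is not None:
        yield f"{current}\t{total}"
```

Sorting the mapper output before feeding the reducer plays the role of the framework's shuffle/sort step.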
Run the code

See Result

HBase

HBase is modeled after Google’s BigTable and written in Java. It is developed on top of
HDFS.

It provides a fault-tolerant way of storing large quantities of sparse data (small amounts of
information caught within a large collection of empty or unimportant data, such as finding
the 50 largest items in a group of 2 billion records, or finding the non-zero items
representing less than 0.1% of a huge collection).

HBase features compression, in-memory operation, and Bloom filters on a per-column basis.

An HBase system comprises a set of tables. Each table contains rows and columns, much
like a traditional database. Each table must have an element defined as a Primary Key,
and all access attempts to HBase tables must use this Primary Key. An HBase column
represents an attribute of an object.
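Conceptually, an HBase table can be pictured as a sparse nested map: row key, then column family, then column qualifier, then value. Only cells that exist are stored, which is why sparse data is cheap. A plain-Python sketch of this data model (the row keys and families are hypothetical, and this is not the HBase client API):

```python
# row key -> column family -> qualifier -> value
table = {
    "user#1001": {
        "info": {"name": "Ana", "city": "NYC"},
        "activity": {"last_login": "2021-09-01"},
    },
    "user#1002": {
        "info": {"name": "Bo"},   # most columns absent: no storage cost
    },
}

def get(table, row_key, family, qualifier):
    """All reads go through the row key, HBase's only primary access path."""
    return table.get(row_key, {}).get(family, {}).get(qualifier)

print(get(table, "user#1001", "info", "city"))            # NYC
print(get(table, "user#1002", "activity", "last_login"))  # None
```

Note how the second row simply omits the cells it does not have, rather than storing nulls as a relational table would.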

Characteristics of data in HBase

Sparse data

HDFS lacks random read and write access. This is where HBase comes into the picture. It's a
distributed, scalable, big data store, modeled after Google's BigTable. It stores data as
key/value pairs.
HBase Architecture

HBase Example -- I

HBase Example -- II

HBase Example -- III

HBase Example - IV

Apache Hive

Creating, Dropping, and Altering Databases in Hive

Another Hive Example
Hive’s operation modes

Reference

Spark Stack

Spark Core

Basic functionality of Spark, including components for:


• Task Scheduling
• Memory Management
• Fault Recovery
• Interacting with Storage Systems
• and more

Home to the API that defines resilient distributed datasets (RDDs) - Spark’s main
programming abstraction.

RDD represents a collection of items distributed across many compute nodes that can be
manipulated in parallel.

First language to use — Python

Spark’s Python Shell (PySpark Shell)
bin/pyspark

Test installation

Core Spark Concepts

• At a high level, every Spark application consists of a driver program that launches various
parallel operations on a cluster.

• The driver program contains your application's main function, defines distributed
datasets on the cluster, and then applies operations to them.

• In the preceding example, the driver program was the Spark shell itself.

• Driver programs access Spark through a SparkContext object, which represents a
connection to a computing cluster.

• In the shell, a SparkContext is automatically created as the variable called sc.

Driver Programs

Driver programs typically manage a number of nodes called executors.

If we run the count() operation on a cluster, different machines might count lines in different
ranges of the file.

Example filtering

lambda —> define functions inline in Python.
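The slide's PySpark code is shown only as an image. The pattern it illustrates, passing an inline lambda as the predicate of a filter, can be shown with plain Python; in PySpark the same call would be made on an RDD, e.g. a hypothetical `lines.filter(lambda line: line.startswith("ERROR"))`:

```python
lines = [
    "INFO starting job",
    "ERROR disk full",
    "INFO job done",
    "ERROR timeout",
]

# Inline lambda, exactly as it would be passed to RDD.filter in PySpark.
errors = list(filter(lambda line: line.startswith("ERROR"), lines))
print(errors)   # ['ERROR disk full', 'ERROR timeout']
```

The lambda keeps the predicate next to the call site, which is why it is the idiomatic way to express short per-record functions in Spark programs.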

Example — word count

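The word-count slide is image-only. Its essential shape, flatMap the lines into words, then count per word, can be mirrored in plain Python (the canonical PySpark version chains `flatMap`, `map`, and `reduceByKey` on an RDD; here `Counter` plays the reduce role, and the text lines are hypothetical sample data):

```python
from collections import Counter

text = ["to be or not to be", "that is the question"]

# flatMap: split each line into words, flattening into one sequence.
words = [w for line in text for w in line.split()]

# map + reduceByKey: count occurrences of each word.
counts = Counter(words)

print(counts["to"])   # 2
print(counts["be"])   # 2
```

In Spark the same pipeline runs partition by partition, with the per-key sums combined across nodes during the shuffle.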
Resilient Distributed Dataset (RDD) Basics

• An RDD in Spark is an immutable distributed collection of objects.

• Each RDD is split into multiple partitions, which may be computed on different nodes of the
cluster.

• Users create RDDs in two ways: by loading an external dataset, or by distributing a
collection of objects in their driver program.

• Once created, RDDs offer two types of operations: transformations and actions.

<== create RDD

<== transformation

<== action

Transformations and actions are different because of the way Spark computes RDDs.
==> An RDD is only computed when it is first used in an action (lazy evaluation).
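That transformation/action split can be sketched with a minimal lazy pipeline in plain Python; this toy class only records the requested work until an action forces evaluation (the class is illustrative, not Spark's implementation):

```python
class LazyDataset:
    """Minimal sketch of RDD-style laziness: transformations record work,
    and nothing runs until an action is called."""

    def __init__(self, data, ops=None):
        self._data = data
        self._ops = ops or []          # recorded, not executed

    def map(self, fn):                 # transformation: returns a new dataset
        return LazyDataset(self._data, self._ops + [("map", fn)])

    def filter(self, fn):              # transformation: also just recorded
        return LazyDataset(self._data, self._ops + [("filter", fn)])

    def collect(self):                 # action: the pipeline actually runs now
        items = self._data
        for kind, fn in self._ops:
            if kind == "map":
                items = [fn(x) for x in items]
            else:
                items = [x for x in items if fn(x)]
        return items

ds = LazyDataset([1, 2, 3, 4]).map(lambda x: x * 10).filter(lambda x: x > 15)
# Nothing has been computed yet; collect() triggers evaluation:
print(ds.collect())   # [20, 30, 40]
```

Deferring work this way lets Spark see the whole pipeline before running it, so it can avoid materializing intermediate results it will never need.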
Persistence in Spark

• By default, RDDs are recomputed each time you run an action on them.
• If you would like to reuse an RDD in multiple actions, you can ask Spark to persist it using
RDD.persist().
• RDD.persist() will then store the RDD contents in memory and reuse them in future actions.
• Persisting RDDs on disk instead of memory is also possible.
• Not persisting by default may seem unusual, but it makes sense for big data: there is no
reason to spend storage on an RDD that will not be reused.

Spark SQL

Spark SQL

Using Spark SQL — Steps and Example

Query testtweet.json
Get it from the Learning Spark GitHub ==> https://github.com/databricks/learning-spark/tree/master/files

Machine Learning Library in Spark — MLlib

An example of using MLlib for a text classification task, e.g., identifying spam emails.

Example: Spam Detection

Feature Extraction Example — TF-IDF

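The TF-IDF slide is image-only, so the computation is written out in plain Python below. This sketch uses the unsmoothed textbook formula tf x log(N/df); MLlib's HashingTF/IDF pair computes the same idea with feature hashing and a smoothed IDF variant, and the document corpus here is hypothetical:

```python
import math

def tf_idf(docs):
    """Per-document TF-IDF scores: tf = term count in the document,
    idf = log(N / df) where df = number of documents containing the term."""
    n = len(docs)
    tokenized = [doc.lower().split() for doc in docs]

    # Document frequency: in how many documents does each term appear?
    df = {}
    for tokens in tokenized:
        for term in set(tokens):
            df[term] = df.get(term, 0) + 1

    scores = []
    for tokens in tokenized:
        doc_scores = {}
        for term in tokens:
            tf = tokens.count(term)
            doc_scores[term] = tf * math.log(n / df[term])
        scores.append(doc_scores)
    return scores

docs = ["free money now", "meeting notes now", "free offer free money"]
scores = tf_idf(docs)
# 'meeting' occurs in only 1 of 3 docs, so it outscores the common 'now'.
```

Terms that appear in many documents get small IDF weights, which is exactly the property a spam classifier wants: distinctive words dominate the feature vector.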
Questions?
