
UNIT 5

Applications of Big Data using Pig, Hive, and HBase :


1. Pig :
Pig is a high-level platform or tool which is used to process large datasets.
It provides a high level of abstraction letting you write simple data analysis
code.
It provides a high-level scripting language, known as Pig Latin which is used to
develop the data analysis codes.
Applications :
1. Used for analyzing large datasets by writing Pig Latin scripts.
2. Common in web companies for tracking user behavior or errors.
3. Data cleaning: removing duplicates, nulls, and formatting inconsistencies.
4. Useful for summarizing big data (e.g., total sales per region).
5. Analyzing user interactions, hashtags, or trending topics.
6. Pig simplifies ETL (Extract, Transform, Load) operations.
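For instance, a minimal Pig Latin sketch of such a cleaning and summarizing job might look like the following (the file path, field names, and types are hypothetical):

sales = LOAD '/data/sales.csv' USING PigStorage(',')
        AS (region:chararray, amount:double);    -- load raw records from HDFS
clean = FILTER sales BY amount IS NOT NULL;      -- drop records with null amounts
dedup = DISTINCT clean;                          -- remove duplicate records
by_region = GROUP dedup BY region;
totals = FOREACH by_region GENERATE group AS region, SUM(dedup.amount) AS total_sales;
STORE totals INTO '/output/sales_by_region';     -- write the summary back to HDFS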

2. Hive :
Hive is often used to store and manage structured data in data warehouses built on Hadoop.
It makes querying and analyzing easy.
It allows querying and managing large datasets using a SQL-like language called HiveQL.
It translates HiveQL into MapReduce, Tez, or Spark jobs under the hood.
It supports structured and semi-structured data
It is used by different companies. For example, Amazon uses it in Amazon Elastic
MapReduce.
Applications :

1. Ease of use
2. Streamlined security
3. Low overhead
4. Ideal for batch processing and aggregated data analysis.
5. Performs tasks like COUNT, MAX, MIN, and AVG over large datasets and generates summary
statistics for decision making.
6. BI tools like Tableau, Power BI, or QlikView can connect to Hive for visualization
and reporting.
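As an illustration, a typical HiveQL aggregation of this kind might look like the following (the sales table and its columns are hypothetical):

SELECT region,
       COUNT(*)    AS num_orders,
       SUM(amount) AS total_sales,
       MIN(amount) AS min_sale,
       MAX(amount) AS max_sale,
       AVG(amount) AS avg_sale
FROM sales
GROUP BY region;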

3. HBase :
HBase is a column-oriented non-relational database management system that runs on
top of the Hadoop Distributed File System (HDFS).
HBase provides a fault-tolerant way of storing sparse data sets, which are common in many big data use cases.
HBase also supports writing applications through Apache Avro, REST, and Thrift APIs.
Applications :

1. Used when you need real-time updates, unlike Hive, which is batch-oriented.
2. Perfect for storing sensor data, logs, or metrics with timestamps.
3. Stores user profiles, posts, likes, shares, and comments.
4. Handles fast reads/writes for high-traffic platforms like Facebook or Twitter.
5. Chat history storage, delivery receipts, and notification logs.
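For example, in the HBase shell a user-profile table of this kind could be created and updated roughly as follows (the table name, column families, row key, and values are hypothetical):

hbase> create 'user_profiles', 'info', 'activity'
hbase> put 'user_profiles', 'user123', 'info:name', 'Alice'
hbase> put 'user_profiles', 'user123', 'activity:last_login', '2025-01-15'
hbase> get 'user_profiles', 'user123'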
PIG
Introduction to PIG :
o Pig is a high-level platform or tool which is used to process large datasets.
It provides a high level of abstraction for processing over MapReduce.
(High abstraction in Pig means you don’t write the logic for low-level
execution (like in MapReduce). Instead, you write simple, SQL-like
commands and Pig does the rest for you — translating them into efficient
parallel jobs.)
o It provides a high-level scripting language, known as Pig Latin which is
used to develop the data analysis codes.
o Pig Latin and Pig Engine are the two main components of the Apache Pig
tool. The result of Pig is always stored in the HDFS.
o One limitation of MapReduce is that the development cycle is very long. Writing the
mapper and reducer, compiling and packaging the code, submitting the job, and
retrieving the output is a time-consuming task.
o Apache Pig reduces the time of development using the multi-query
approach.
o Pig is beneficial for programmers who are not from Java backgrounds. 200
lines of Java code can be written in only 10 lines using the Pig Latin
language.
o Programmers who have SQL knowledge need less effort to learn Pig Latin.

Execution Modes of Pig :


Apache Pig scripts can be executed in three ways :
Interactive Mode (Grunt shell) :
You can run Apache Pig in interactive mode using the Grunt shell. In this shell, you
can enter the Pig Latin statements and get the output (using the Dump operator).
Batch Mode (Script) :
You can run Apache Pig in Batch mode by writing the Pig Latin script in a single file
with the .pig extension.

Embedded Mode (UDF) :


Apache Pig provides the provision of defining our own functions (User Defined
Functions) in programming languages such as Java and using them in our script.
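As a rough sketch of the first two modes, the same Pig Latin statements could either be typed at the grunt> prompt or saved in a file (say wordcount.pig, a hypothetical name and input path) and run in batch mode:

-- wordcount.pig (hypothetical input path)
lines   = LOAD '/data/input.txt' AS (line:chararray);
words   = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
grouped = GROUP words BY word;
counts  = FOREACH grouped GENERATE group AS word, COUNT(words) AS n;
DUMP counts;

$ pig wordcount.pig        (batch mode)
$ pig                      (interactive mode, opens the grunt> shell)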

Comparison of Pig with Databases :


PIG vs SQL :

· Pig Latin is a procedural language, whereas SQL is a declarative language.
· Pig works well with semi-structured and unstructured data, whereas SQL supports strictly structured data.
· In Apache Pig, the schema is optional; we can store data without designing a schema (values are accessed positionally as $0, $1, etc.). In SQL, a schema is mandatory.
· The data model in Apache Pig is nested relational, while the data model used in SQL is flat relational.
· Apache Pig provides limited opportunity for query optimization, whereas there is more opportunity for query optimization in SQL.
· Pig is not suitable for real-time querying, while SQL is designed for real-time querying.

Grunt :

• The Grunt Shell is the interactive command-line interface of Apache Pig.
• The Grunt shell of Apache Pig is mainly used to write Pig Latin scripts.
• Pig scripts can be executed with the Grunt shell, which is a native shell provided by
Apache Pig to execute Pig queries.
• When you start Pig (by running the pig command in the terminal), it opens a shell. That
shell is called the Grunt shell.

The prompt you see is :

grunt>

• This is where you can type commands like LOAD, DUMP, DESCRIBE, ILLUSTRATE, etc.
• You can also run shell commands using sh or fs.

Syntax of the sh command :
grunt> sh ls

Syntax of the fs command :
grunt> fs -ls
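A short illustrative Grunt session, assuming a hypothetical log file in HDFS, might look like this: list the input directory with fs, load and filter the data, then print the result with DUMP.

grunt> fs -ls /data
grunt> logs = LOAD '/data/logs.txt' AS (level:chararray, msg:chararray);
grunt> errors = FILTER logs BY level == 'ERROR';
grunt> DUMP errors;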

Pig Latin :
The Pig Latin is a data flow language used by Apache Pig to analyze the data in
Hadoop.
It is a textual language that abstracts the programming from the Java MapReduce
idiom into a higher-level notation.
The Pig Latin statements are used to process the data. Each statement is an operator that
accepts a relation as input and generates another relation as output.
· A statement can span multiple lines.
· Each statement must end with a semicolon.
· It may include expressions and schemas.
· By default, these statements are processed using multi-query execution.
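For instance, a single statement may span multiple lines and ends with a semicolon (the relations, path, and fields here are hypothetical):

students = LOAD '/data/students.txt' USING PigStorage(',')
           AS (id:int, name:chararray, marks:int);
toppers  = FILTER students BY marks >= 75;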

User-Defined Functions :
• Apache Pig provides extensive support for User Defined Functions (UDFs).
• Using these UDFs, we can define our own functions and use them. UDF support is
provided in six programming languages :
· Java
· Jython
· Python
· JavaScript
· Ruby
· Groovy
• For writing UDFs, complete support is provided in Java and limited support is
provided in all the remaining languages.
• Using Java, you can write UDFs involving all parts of the processing, like
data load/store, column transformation, and aggregation.
• Since Apache Pig has been written in Java, UDFs written in the Java language
work more efficiently compared to other languages.
Types of UDFs in Java :
Filter Functions :
• The filter functions are used as conditions in FILTER statements.
• These functions accept a Pig value as input and return a Boolean value.
Eval Functions :
• The Eval functions are used in FOREACH ... GENERATE statements.
• These functions accept a Pig value as input and return a Pig result.
Algebraic Functions :
• The Algebraic functions act on inner bags in a FOREACH ... GENERATE statement.
• These functions are used to perform full MapReduce operations on an inner bag.
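A sketch of how such Java UDFs might be registered and used from a Pig script follows; the jar name, package, and class names are hypothetical:

REGISTER myudfs.jar;
DEFINE IsValid com.example.pig.IsValidRecord();   -- hypothetical filter UDF
DEFINE Upper   com.example.pig.UpperCase();       -- hypothetical eval UDF

records = LOAD '/data/records.txt' AS (line:chararray);
valid   = FILTER records BY IsValid(line);        -- filter function used as a condition
shouted = FOREACH valid GENERATE Upper(line);     -- eval function in FOREACH ... GENERATE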

Data Processing Operators :

Apache Pig provides a set of high-level, procedural operators for querying large data
sets using Hadoop and the MapReduce platform.
A Pig Latin statement is an operator that takes a relation as input and produces another
relation as output.
These operators are the main tools Pig Latin provides to operate on the data. They
allow you to transform it by sorting, grouping, joining, projecting, and filtering. The
Apache Pig operators can be classified as :
Relational Operators :
Relational operators are the main tools Pig Latin provides to operate on the data.
Some of the Relational Operators are :
LOAD : The LOAD operator is used to load data from the file system or HDFS storage into a Pig relation.
FOREACH : This operator generates data transformations based on columns of data. It is used to add or remove fields from a relation.
FILTER : This operator selects tuples from a relation based on a condition.
JOIN : The JOIN operator is used to perform an inner equijoin of two or more relations based on common field values.
ORDER BY : ORDER BY is used to sort a relation based on one or more fields, in either ascending or descending order, using the ASC and DESC keywords.
GROUP : The GROUP operator groups together the tuples that have the same group key (key field).
COGROUP : COGROUP is the same as the GROUP operator. For readability, programmers usually use GROUP when only one relation is involved and COGROUP when multiple relations are involved.
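The following sketch strings several of these relational operators together (the file paths, schemas, and field names are hypothetical):

customers = LOAD '/data/customers.txt' USING PigStorage(',')
            AS (cust_id:int, name:chararray, city:chararray);
orders    = LOAD '/data/orders.txt' USING PigStorage(',')
            AS (order_id:int, cust_id:int, amount:double);
big_orders = FILTER orders BY amount > 1000.0;                  -- FILTER
joined     = JOIN customers BY cust_id, big_orders BY cust_id;  -- JOIN (equijoin)
by_city    = GROUP joined BY customers::city;                   -- GROUP
city_total = FOREACH by_city GENERATE group AS city,
                 SUM(joined.big_orders::amount) AS total;       -- FOREACH
ranked     = ORDER city_total BY total DESC;                    -- ORDER BY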
Diagnostic Operator :
The load statement will simply load the data into the specified relation in Apache Pig.
To verify the execution of the Load statement, you have to use the Diagnostic
Operators.
Some Diagnostic Operators are :
DUMP : The DUMP operator is used to run Pig Latin statements and display the results on the screen.
DESCRIBE : The DESCRIBE operator is used to review the schema of a particular relation. It is best used for debugging a script.
ILLUSTRATE : This operator is used to review how data is transformed through a sequence of Pig Latin statements. The ILLUSTRATE command is your best friend when it comes to debugging a script.
EXPLAIN : The EXPLAIN operator is used to display the logical, physical, and MapReduce execution plans of a relation.
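For example, each diagnostic operator can be applied to a relation in the Grunt shell (the relation and path are hypothetical):

grunt> students = LOAD '/data/students.txt' AS (id:int, name:chararray, marks:int);
grunt> DESCRIBE students;    -- prints the schema of the relation
grunt> ILLUSTRATE students;  -- shows sample data flowing through the statements
grunt> DUMP students;        -- executes the statements and prints the tuples
grunt> EXPLAIN students;     -- prints the logical, physical, and MapReduce plans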

Hive
Apache Hive Architecture :
The architecture of Apache Hive consists of the following major components :
1. Hive Client
2. Hive Services
3. Processing and Resource Management
4. Distributed Storage
HIVE CLIENT :
Hive supports applications written in languages like Python, Java, C++, Ruby, etc.,
using JDBC, ODBC, and Thrift drivers for performing queries on Hive. Hence, one
can easily write a Hive client application in the language of one's own choice.
Hive clients are categorized into three types :
1. Thrift Clients : The Hive server is based on Apache Thrift, so it can serve requests from a Thrift client.
2. JDBC Client : Hive allows Java applications to connect to it using the JDBC driver. The JDBC driver uses Thrift to communicate with the Hive Server.
3. ODBC Client : The Hive ODBC driver allows applications based on the ODBC protocol to connect to Hive. Similar to the JDBC driver, the ODBC driver uses Thrift to communicate with the Hive Server.
HIVE SERVICES :
To perform queries, Hive provides various services like HiveServer2, Beeline,
etc.
The various services offered by Hive are :
1. Beeline
2. Hive Server 2
3. Hive Driver
4. Hive Compiler
5. Optimizer
6. Metastore
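For example, Beeline connects to HiveServer2 over JDBC; a sketch of a session looks like the following (the host, user, and default HiveServer2 port 10000 are assumptions):

$ beeline -u jdbc:hive2://localhost:10000 -n hiveuser
0: jdbc:hive2://localhost:10000> SHOW DATABASES;
0: jdbc:hive2://localhost:10000> SHOW TABLES;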

PROCESSING AND RESOURCE MANAGEMENT :


Hive internally uses the MapReduce framework as its de facto engine for executing
queries.
MapReduce is a software framework for writing applications that process massive
amounts of data in parallel on large clusters of commodity hardware.
A MapReduce job works by splitting data into chunks, which are processed by map
and reduce tasks.

DISTRIBUTED STORAGE :
Hive is built on top of Hadoop, so it uses the underlying Hadoop Distributed File
System for the distributed storage.

Hive Shell :
• The Hive shell is the primary way to interact with Hive.
• It is the default service in Hive.
• It is also called the CLI (command line interface).
• The Hive shell is similar to the MySQL shell.
• Hive users can run HQL queries in the Hive shell.
• In the Hive shell, the up and down arrow keys are used to scroll through previous commands.
• HiveQL is case-insensitive (except for string comparisons).
• The Tab key will autocomplete (provide suggestions while you type) Hive keywords and functions.
Hive Shell can run in two modes :
Non-Interactive mode :
In non-interactive mode, the Hive shell executes HiveQL statements from a script file instead of reading them from the prompt.
Hive Shell can run in the non-interactive mode with the -f option.
Example:
$hive -f script.q, where script.q is a file containing HiveQL statements.
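As a sketch, script.q might simply contain a few HiveQL statements such as the following (the table and columns are hypothetical):

-- script.q
SHOW TABLES;
SELECT region, COUNT(*) AS num_orders
FROM sales
GROUP BY region;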
Interactive mode :
Hive can work in interactive mode by directly typing the command “hive” in the
terminal.
Example:
$hive
Hive> show databases;

Hive Services :
The following are the services provided by Hive :
• Hive CLI (Beeline) : The Hive CLI (Command Line Interface) is a shell where we
can execute Hive queries and commands.

• Hive Web User Interface : The Hive Web UI is an alternative to the Hive CLI. It
provides a web-based GUI for executing Hive queries and commands.

• Hive Metastore : It is a central repository that stores all the structure information of
various tables and partitions in the warehouse. It also includes metadata of columns and
their type information, the serializers and deserializers used to read and write data, and
the corresponding HDFS files where the data is stored.

• Hive Server : It is referred to as the Apache Thrift Server. It accepts requests from
different clients and provides them to the Hive Driver.

• Hive Driver : It receives queries from different sources like the web UI, CLI, Thrift, and
JDBC/ODBC driver. It transfers the queries to the compiler.

• Hive Compiler : The purpose of the compiler is to parse the query and perform
semantic analysis on the different query blocks and expressions. It converts HiveQL
statements into MapReduce jobs.
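For instance, prefixing a query with EXPLAIN shows the plan the compiler produces for it; the output lists the stages that will be executed (the sales table here is hypothetical):

hive> EXPLAIN SELECT region, SUM(amount) FROM sales GROUP BY region;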
