Working of Hive 2

The document outlines the working process of Apache Hive, detailing the sequence of operations from executing a query to fetching results. It describes the role of the Hive interface, driver, compiler, metastore, and execution engine in processing queries and executing MapReduce jobs. Additionally, it includes examples of Hive CLI commands for database and table management.

Uploaded by

arpitasri1305
Copyright
© All Rights Reserved

Working of Apache Hive

• Working of Hive
• Hive CLI
Hive Working

Step No. Operation
1. Execute Query: The Hive interface, such as the Command Line or Web UI, sends the query to the Driver (any database driver such as JDBC, ODBC, etc.) for execution.
2. Get Plan: The driver takes the help of the query compiler, which parses the query to check the syntax and build the query plan.
3. Get Metadata: The compiler sends a metadata request to the Metastore (any database).
4. Send Metadata: The Metastore sends the metadata as a response to the compiler.
5. Send Plan: The compiler checks the requirements and resends the plan to the driver. Up to this point, the parsing and compiling of the query is complete.
6. Execute Plan: The driver sends the execution plan to the execution engine.

Step No. Operation
7. Execute Job: Internally, the execution of the plan is a MapReduce job. The execution engine sends the job to the JobTracker, which runs on the Name node; the JobTracker assigns the job to TaskTrackers, which run on the Data nodes. Here, the query executes as a MapReduce job.
7.1. Metadata Ops: Meanwhile, during execution, the execution engine can perform metadata operations with the Metastore.
8. Fetch Result: The execution engine receives the results from the Data nodes.
9. Send Results: The execution engine sends those resultant values to the driver.
10. Send Results: The driver sends the results to the Hive interfaces.
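The plan produced by the compiler in steps 2-5 can be inspected from the Hive CLI with EXPLAIN, which prints the stages of the plan (including any MapReduce stages) without running the job. The employee table and column names below are illustrative only:

```sql
hive> EXPLAIN SELECT name, salary FROM employee;
```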

Hive CLI

• DDL:
  • create table / drop table / rename table
  • alter table add column
• Browsing:
  • show tables
  • describe table
  • cat table
• Loading Data
• Queries
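As a sketch of the DDL and browsing commands listed above (the employee table, the emp name, and the dept column are example names, not part of the original notes):

```sql
hive> ALTER TABLE employee ADD COLUMNS (dept String COMMENT 'Department name');
hive> ALTER TABLE employee RENAME TO emp;
hive> SHOW TABLES;
hive> DESCRIBE emp;
```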

Create Database Statement

Syntax: CREATE DATABASE|SCHEMA [IF NOT EXISTS] <database name>;

hive> CREATE DATABASE IF NOT EXISTS userdb;
OR
hive> CREATE SCHEMA userdb;
hive> SHOW DATABASES;
hive> DROP DATABASE IF EXISTS userdb;

hive> CREATE TABLE IF NOT EXISTS employee (eid int, name String,
    > salary String, destination String)
    > COMMENT 'Employee details'
    > ROW FORMAT DELIMITED
    > FIELDS TERMINATED BY '\t'
    > LINES TERMINATED BY '\n'
    > STORED AS TEXTFILE;

hive> LOAD DATA LOCAL INPATH '/home/user/sample.txt'
    > OVERWRITE INTO TABLE employee;
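With the data loaded, a query can be issued from the CLI; it is executed as a MapReduce job following the steps described earlier. The SELECT below is an illustrative example (note that salary is declared as a String in the table above, so the comparison is lexicographic):

```sql
hive> SELECT * FROM employee WHERE salary > '40000';
```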
