Elasticsearch For Hadoop - Sample Chapter
Elasticsearch for Hadoop
Integrate Elasticsearch into Hadoop to effectively visualize and analyze your data
Vishal Shukla
Preface
The core components of Hadoop, beginning with MapReduce, have been around since 2004-2006. Hadoop's ability to scale and process data in a distributed manner has resulted in its broad acceptance across industries. Very large organizations are able to realize the value that Hadoop brings: crunching terabytes and petabytes of data, ingesting social data, and utilizing commodity hardware to store huge volumes of data. However, big data solutions must also satisfy the appetite for speed, especially when you query across unstructured data.
This book will introduce you to Elasticsearch, a powerful distributed search and analytics engine that can make sense of your massive data in real time. Its rich querying capabilities can help you perform complex full-text searches, geospatial analysis, and anomaly detection on your data. Elasticsearch-Hadoop, also widely known as ES-Hadoop, is a two-way connector between Elasticsearch and Hadoop. It opens the door to flowing your data easily to and from the Hadoop ecosystem and Elasticsearch. It can also stream data from Apache Storm or Apache Spark to Elasticsearch and let you analyze it in real time.
The aim of the book is to give you practical skills to harness the power of Elasticsearch and Hadoop. I will walk you through the step-by-step process of discovering your data and finding interesting insights in massive amounts of data. You will learn how to integrate Elasticsearch seamlessly with Hadoop ecosystem tools, such as Pig, Hive, Cascading, Apache Storm, and Apache Spark. This book will enable you to use Elasticsearch to build your own analytics dashboard. It will also enable you to use Kibana, a powerful analytics and visualization platform, to give different shapes, sizes, and colors to your data.
I have chosen interesting datasets to give you a real-world data exploration experience, so you can quickly use these tools and techniques to build your own domain-specific solutions. I hope that reading this book turns out to be fun and a great learning experience for you.
Getting Started with ES-Hadoop
Hadoop provides you with a batch-oriented distributed storage and a computing
engine. Elasticsearch is a full-text search engine with rich aggregation capabilities.
Getting the data from Hadoop into Elasticsearch opens the door to running data discovery tools to find interesting patterns and to performing full-text search or geospatial analytics. ES-Hadoop is a library that bridges Hadoop with Elasticsearch.
The goal of this book is to get you up-and-running with ES-Hadoop and enable you
to solve real-world analytics problems.
Our goal in this chapter is to develop MapReduce jobs that write/read data to/from Elasticsearch. You probably already know how to write basic MapReduce jobs with Hadoop that write their output to HDFS. ES-Hadoop is a connector library that provides a dedicated InputFormat and OutputFormat that you can use to read/write data from/to Elasticsearch in Hadoop jobs. To take the first step in this direction, we will start with how to set up Hadoop, Elasticsearch, and the related toolsets that you will use throughout the rest of the book.
We encourage you to try the examples in the book to speed up the learning process.
We will cover the following topics in this chapter:
Understanding Mapper
Here is how WordsMapper.java looks:
package com.packtpub.esh;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;
import java.util.StringTokenizer;

public class WordsMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);

    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            Text word = new Text();
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}
To any MapReduce developer, this Mapper is well known and trivial. We get the input line in value and tokenize it on whitespace to extract word tokens. For each word, we then write a count of 1 with the word as the key. This is no different from any other word-count mapper you may have come across so far.
We want the Elasticsearch document to look similar to the following JSON structure:
{
  "word": "Elasticsearch-hadoop",
  "count": 14
}
We need to represent this document in Java, and it should be written to the Context object. We can easily map this key/value-based JSON document format to the MapWritable class. The keys of the MapWritable entries represent the JSON field keys, and the values represent the respective JSON field values. In the preceding JSON example, word and count become the keys of the MapWritable. Once the reducer runs and the context output is generated, it is transformed into the JSON format and passed to the RESTful endpoint of Elasticsearch as a bulk indexing request.
Let's take a look at how the WordsReducer class looks in the WordCount job:
package com.packtpub.esh;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;
WordsReducer implements the Reducer interface with the input key/value types as <Text, Iterable<IntWritable>> and the output types as <Text, MapWritable>. To implement the reduce method, we iterate through all the values for the key to derive the final sum. The final sum and the key are added to a newly constructed MapWritable object. Finally, the result is written to the context for further processing.
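The body of the reducer is not reproduced here; a minimal sketch that matches this description, using the imports listed above and the field names from the earlier JSON example, could look like the following:

public class WordsReducer extends Reducer<Text, IntWritable, Text, MapWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum up all the counts emitted by the mappers for this word.
        int sum = 0;
        for (IntWritable value : values) {
            sum += value.get();
        }
        // Build the MapWritable that ES-Hadoop will serialize as a JSON document.
        MapWritable result = new MapWritable();
        result.put(new Text("word"), key);
        result.put(new Text("count"), new IntWritable(sum));
        context.write(key, result);
    }
}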
Next, let's look at the Driver class, which wires the job together. It needs the following imports:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.elasticsearch.hadoop.mr.EsOutputFormat;
public class Driver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Elasticsearch server nodes to point to
        conf.set("es.nodes", "localhost:9200");
        // Elasticsearch index and type name in {indexName}/{typeName} format
        conf.set("es.resource", "eshadoop/wordcount");
The es.resource setting specifies the target Elasticsearch index and type. This means that the data will be read from or written to the index and type specified here. This configuration takes the <index>/<type> format.
Once the job is created with the required configurations, the other job properties are set as in typical MapReduce jobs, such as jarByClass, mapperClass, reducerClass, outputKeyClass, and outputValueClass.
The ES-Hadoop library provides a dedicated EsInputFormat and EsOutputFormat. These act as adapters between Hadoop and the JSON document format expected by Elasticsearch. By default, EsOutputFormat expects a MapWritable object to be written as the output to the context object; EsOutputFormat then converts this MapWritable object into a JSON document.
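Putting this together, a minimal sketch of the remaining job wiring could look like the following (the job name and the input path argument are assumptions; the class names follow the earlier listings):

Job job = new Job(conf, "word count");
job.setJarByClass(Driver.class);
job.setMapperClass(WordsMapper.class);
job.setReducerClass(WordsReducer.class);
// The mapper emits (Text, IntWritable); the reducer emits (Text, MapWritable).
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(MapWritable.class);
// EsOutputFormat ships the MapWritable output to Elasticsearch as bulk index requests.
job.setOutputFormatClass(EsOutputFormat.class);
FileInputFormat.addInputPath(job, new Path(args[0]));
System.exit(job.waitForCompletion(true) ? 0 : 1);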
You can write an existing JSON string to Elasticsearch with EsOutputFormat without applying any kind of transformation. In order to do this, set the "es.input.json"="yes" parameter. Setting this configuration makes the ES-Hadoop library look for a BytesWritable or Text object to be provided as the job's output. If none of these are found, it expects the class configured as outputValueClass to provide a toString() method that returns the JSON string.
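For instance, a minimal sketch of this variant might look like the following (word and sum stand for values computed in the job; the field names are illustrative):

// Tell ES-Hadoop that the job output values are already JSON documents.
conf.set("es.input.json", "yes");

// In the mapper or reducer, emit the ready-made JSON document as a Text value;
// ES-Hadoop passes it to Elasticsearch without further transformation.
String json = "{\"word\":\"" + word + "\",\"count\":" + sum + "}";
context.write(NullWritable.get(), new Text(json));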
We can see that the data contains a few useful pieces of information, such as the date, the source IP address from which the request originated, and the destination IP address. The id field represents the category of the website, followed by act and msg, which represent the action taken for the request and the target URL of the request.
There can be many more such questions that you may want to answer from these log files.
Solution approaches
Now we have the data and the problems that can be solved by performing some aggregations and analysis, and we already know how to write MapReduce jobs with Hadoop. Most of the preceding problems are centered around getting the top results for some specific fields; for example, finding the top categories, top domains, or top attackers.
At a very high level, performing a Top N query involves two steps: counting the occurrences of each value of the field, and then finding the Top N values from these counts. To implement the use case with the MapReduce paradigm, both of these steps need a dedicated MapReduce job. This is required because we perform two separate aggregation operations when we calculate the Top N: first we count the occurrences for each category, and then we find the Top N categories.
First, a MapReduce job takes the raw log files as the input and produces results that look similar to the output of our WordCount job. For example, the job to count the categories may produce an output similar to the following:
advertisements          500
informationtechnology   8028
newsandmedia            304
none                    12316
portals                 2945
searchengines           1323
spywaresandp2p          1175
This output can be written to a temporary file in HDFS, which is then provided as input to a second MapReduce job that calculates the Top N and writes the result to an HDFS file. By doing this, you obtain the Top N results for one field, which in this case is category. Great! Similarly, counter jobs have to be created for each field that you are interested in, such as attacker, IP range, blocked domains, viruses, and so on. That's not so cool!
The following diagram shows how the solution would look with this approach to obtain the answers for all the preceding Top N questions:
Let's see how we can write the NetworkLogsMapper job to get the network logs data
to Elasticsearch.
The mapper first reads the input log line and divides it into two easily parsable segments that contain the key/value pairs, parts and keyVals. It then iterates over the tokens of keyVals, as shown in the following code:
int i = 0;
StringTokenizer part1tokenizer = new StringTokenizer(keyVals);
while (part1tokenizer.hasMoreTokens()) {
    String token = part1tokenizer.nextToken();
    String keyPart = getKeyValue(token)[0];
    String valuePart = getKeyValue(token)[1];
    switch (keyPart) {
        case "src":
            srcIp = valuePart;
            break;
        case "dst":
            destIp = valuePart;
            break;
        case "id":
            category = valuePart;
            break;
        case "act":
            action = valuePart != null ? valuePart.toUpperCase() : null;
            break;
        case "msg":
            target = valuePart;
            break;
    }
    i++;
}
The preceding code iterates through each key/value pair token of keyVals to extract the srcIp, destIp, category, action, and target fields.
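The getKeyValue() helper used here is not shown in this excerpt; assuming that each token has the key=value form, a minimal sketch of it could be:

// Hypothetical helper: splits a "key=value" token into a two-element array.
private static String[] getKeyValue(String token) {
    int idx = token.indexOf('=');
    if (idx < 0) {
        return new String[]{token, null};
    }
    return new String[]{token.substring(0, idx), token.substring(idx + 1)};
}

The mapper then parses the second segment, parts[1], in a similar way: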
i = 0;
if (parts.length > 1) {
    StringTokenizer part2Tokenizer = new StringTokenizer(parts[1], ",");
    while (part2Tokenizer.hasMoreTokens()) {
        String token = part2Tokenizer.nextToken();
        String keyPart = getKeyValue(token)[0];
        String valuePart = getKeyValue(token)[1];
        switch (keyPart) {
            case "sn":
                serial = valuePart;
                break;
            case "ip":
                ip = valuePart;
                break;
            case "tz":
                timezone = valuePart;
                break;
            case "time":
                String timeStr = valuePart;
The preceding segment parses the second part of the log line to extract the serial, ip, timezone, and time fields. Next, we put all the parsed values into a MapWritable object:
map.put(new Text("srcIp"), getWritableValue(srcIp));
map.put(new Text("destIp"), getWritableValue(destIp));
map.put(new Text("action"), getWritableValue(action));
map.put(new Text("category"), getWritableValue(category));
map.put(new Text("target"), getWritableValue(target));
map.put(new Text("serial"), getWritableValue(serial));
map.put(new Text("timezone"), getWritableValue(timezone));
map.put(new Text("ip"), getWritableValue(ip));
map.put(new
Text("domain"),getWritableValue(getDomainName(target)));
map.put(new Text("@timestamp"), time != null ? new
LongWritable(time) : new LongWritable(new Date().getTime()));
context.write(value, map);
Note that MapWritable needs all of its keys and values to be of the Writable type. The preceding code gets the Writable version of each parsed field, puts it into the MapWritable object, and finally writes the map to the context object. The getWritableValue() helper method is shown in the following code:
private static WritableComparable getWritableValue(String value) {
    return value != null ? new Text(value) : NullWritable.get();
}
The getWritableValue() method takes a String parameter and returns a null-safe Writable value. The getDomainName() method is shown as follows:
public static String getDomainName(String url) {
    if (url == null)
        return null;
    return DomainUtil.getBaseDomain(url);
}
The getDomainName() method extracts the base domain from the URL that is provided as a parameter. Indexing the domain name helps in finding the top domains and performing the related analysis.
Overall, the preceding program takes the input network logs in an unstructured format, parses the log lines, and extracts the relevant information for our Elasticsearch documents. We also perform the data type transformations expected by Elasticsearch; for example, we parsed the timestamp from the end of the input line and converted it to a LongWritable object.
You may have noticed that we never told Elasticsearch about the fields we are going to index or about their data types. This is handled by the automatic type mapping support provided by Elasticsearch. The automatic type mapping is triggered when the first document is indexed: Elasticsearch automatically detects and creates the data types for the fields based on the values passed to it.
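If you want to inspect the mapping that Elasticsearch generated, you can retrieve it at any time with the _mapping API; for example, for the network logs index used later in this chapter:

$ curl -XGET http://localhost:9200/esh_network/_mapping?pretty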
To understand how Elasticsearch treats different data types passed from Hadoop,
consider the following table as a reference:
Hadoop Class                     Elasticsearch Type
NullWritable, null               null
BooleanWritable                  boolean
IntWritable, VInt                int
LongWritable, VLongWritable      long
FloatWritable                    float
ShortWritable                    short
ArrayWritable                    array
AbstractMapWritable              object (map)
Writing Driver
The Driver class looks mostly the same as the one in the WordCount job.
The Driver class snippet looks similar to the following code:
// Create the configuration object for the job
Configuration conf = new Configuration();
// Elasticsearch server nodes to point to
conf.set("es.nodes", "localhost:9200");
// Elasticsearch index and type name in {indexName}/{typeName} format
conf.set("es.resource", "esh_network/network_logs_{action}");
Now, let's create the Job instance with the conf object. We no longer need to set outputKeyClass and outputValueClass, and as expected, we don't declare any reducer. Also, the number of reduce tasks must be set to 0.
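A minimal sketch of this map-only job setup might look like the following (the job name and the input path argument are assumptions):

Job job = new Job(conf, "network logs to elasticsearch");
job.setJarByClass(Driver.class);
job.setMapperClass(NetworkLogsMapper.class);
// Send the mapper output straight to Elasticsearch; no reducer is involved.
job.setOutputFormatClass(EsOutputFormat.class);
job.setNumReduceTasks(0);
FileInputFormat.addInputPath(job, new Path(args[0]));
System.exit(job.waitForCompletion(true) ? 0 : 1);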
You may have use cases, such as network logs, where you get millions of logs indexed on a daily basis. In such cases, it may be desirable to split the data into several indices. Elasticsearch supports SimpleDateFormat date patterns as part of the template for the index name or the type name, as shown in the following configuration:

es.resource = esh_network_{@timestamp:YYYY.MM.dd}/network_logs_{action}

This creates a new index whenever a new date is encountered in the documents being indexed.
The <dependencies> section declares the required hadoop and elasticsearch-hadoop dependencies.
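A minimal sketch of such a declaration might look like the following (the version properties are placeholders; use the versions that match your Hadoop cluster and Elasticsearch installation):

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <!-- Placeholder: align with your Hadoop distribution -->
    <version>${hadoop.version}</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-hadoop</artifactId>
    <!-- Placeholder: align with your Elasticsearch installation -->
    <version>${elasticsearch-hadoop.version}</version>
  </dependency>
</dependencies>

The build section then configures maven-assembly-plugin, as shown in the following code: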
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <version>2.2.1</version>
  <executions>
    <execution>
      <id>make-network-logs-job</id>
      <configuration>
        <descriptors>
          <descriptor>assembly.xml</descriptor>
        </descriptors>
        <archive>
          <manifest>
            <mainClass>com.packtpub.esh.nwlogs.Driver</mainClass>
          </manifest>
        </archive>
        <finalName>${artifactId}-${version}-nwlogs</finalName>
      </configuration>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
</plugin>
The preceding code snippet shows the plugin declaration for maven-assembly-plugin. This plugin provides a way to customize how the JAR file is assembled. The mainClass element declares the Driver class as the main class of the JAR file. The assembly itself is described in the configured assembly.xml descriptor as follows:
<assembly>
  <id>job</id>
  <formats>
    <format>jar</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <dependencySets>
    <dependencySet>
      <unpack>false</unpack>
      <scope>runtime</scope>
      <outputDirectory>lib</outputDirectory>
      <excludes>
        <exclude>${groupId}:${artifactId}</exclude>
      </excludes>
    </dependencySet>
    <dependencySet>
      <unpack>true</unpack>
      <includes>
        <include>${groupId}:${artifactId}</include>
      </includes>
    </dependencySet>
  </dependencySets>
</assembly>
The assembly.xml descriptor instructs Maven that the third-party dependency JARs
should be assembled in the JAR under the lib directory.
Alternatively, you can specify the JAR files to be added to the classpath by setting the HADOOP_CLASSPATH environment variable, as shown in the following code:

HADOOP_CLASSPATH="<colon-separated-paths-to-custom-jars-including-elasticsearch-hadoop>"

These JARs can also be made available at runtime from the command line as follows:

$ hadoop jar your-jar.jar -libjars elasticsearch-hadoop.jar
Once you have your pom.xml and assembly.xml configured, you are ready to build the job JAR file. Switch to the directory that contains the pom.xml file and execute the following command:

$ mvn package

This step assumes that you have the Maven binaries available in the $PATH environment variable. The command generates the required JAR file with the name ch02-0.0.1-nwlogs-job.jar under the target folder of your project directory.
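You can then submit the job to Hadoop in the usual way. A hypothetical invocation might look like the following (the HDFS input path here is an assumption; pass the path where you uploaded the network logs):

$ hadoop jar target/ch02-0.0.1-nwlogs-job.jar /input/ch02/network-logs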
The ES-Hadoop counters of the job execution look similar to the following output (this shows that a total of 31,583 documents were sent from ES-Hadoop and accepted by Elasticsearch):
Elasticsearch Hadoop Counters
Bulk Retries=0
Bulk Retries Total Time(ms)=0
Bulk Total=32
Bulk Total Time(ms)=3420
Bytes Accepted=10015094
Bytes Received=128000
Bytes Retried=0
Bytes Sent=10015094
Documents Accepted=31583
Documents Received=0
Documents Retried=0
Documents Sent=31583
Network Retries=0
Network Total Time(ms)=3515
Node Retries=0
Scroll Total=0
Scroll Total Time(ms)=0
The following screenshot shows the Elasticsearch Head screen with the indexed documents:

We can see that the job execution generated two types in the index: network_logs_ALLOW and network_logs_BLOCK. Here, ALLOW and BLOCK refer to the values of the action field used in the multi-resource naming.
To view the top five categories across all the network logs that we have indexed,
execute the following query:
$ curl -XPOST http://localhost:9200/esh_network/_search?pretty -d '{
  "aggs": {
    "top-categories": {
      "terms": {
        "field": "category",
        "size": 5
      }
    }
  },
  "size": 0
}'
"key" : "searchengines",
"doc_count" : 1323
} ]
}
}
}
We can just focus on the aggregations part of the result as of now. You will learn
about the Elasticsearch query and its response structures in detail in the next chapter.
The data represents the fields, such as id, text, timestamp, user, source, and so on,
in the CSV format.
Trying it yourself
Now, let's implement what you have learned so far with the Twitter dataset. Try to load the preceding Twitter dataset into Elasticsearch using a MapReduce job, similar to what we did for the WordCount and NetworkLogsMapper jobs in the previous sections.
The Elasticsearch document should look similar to the following code:
{
  "text": "RT @elastic: .@PrediktoIoT gives users realtime sensor data analysis w/ #elasticsearch & #Spark. Here's how http://t.co/oECqzBWZvh http://t",
  "@timestamp": 1420246531000,
  "user": "Mario Montag",
  "tweetId": "601914318221750272"
}
You may want to perform complex analysis on the Twitter data; for example, sentiment analysis of the tweets that match specific criteria. Let's consider that you want to do so on the data that relates to any two of the terms elasticsearch, kibana, analysis, visualize, and realtime. To be able to perform such analysis, you may want to run the MapReduce tasks on only the relevant data.
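The filtering itself can be expressed as an Elasticsearch query that is later passed to ES-Hadoop through the es.query configuration (see the Driver class that follows). One way to express the "at least two of these terms" criteria is a bool query with minimum_should_match; the following is an illustrative sketch, not necessarily the exact query used in the book:

{
  "query": {
    "bool": {
      "should": [
        { "match": { "text": "elasticsearch" } },
        { "match": { "text": "kibana" } },
        { "match": { "text": "analysis" } },
        { "match": { "text": "visualize" } },
        { "match": { "text": "realtime" } }
      ],
      "minimum_should_match": 2
    }
  }
}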
Now, let's create a job that queries the data as required from Elasticsearch and
imports it to HDFS.
When we write a MapReduce job that reads from Elasticsearch, remember that we get MapWritable as the input value in our Mapper class. Our output will be Text, which will eventually be written to a CSV file in HDFS:
StringBuilder mappedValueBuilder = new StringBuilder();
mappedValueBuilder.append(getQuotedValue(value.get(new Text("tweetId"))) + ", ");
mappedValueBuilder.append(getQuotedValue(value.get(new Text("text"))) + ", ");
mappedValueBuilder.append(getQuotedValue(value.get(new Text("user"))) + ", ");
mappedValueBuilder.append(getQuotedTimeValue(value.get(new Text("@timestamp"))));
Text mappedValue = new Text(mappedValueBuilder.toString());
context.write(mappedValue, mappedValue);
}
In the preceding snippet, we just build a String that represents a single line of the CSV file. The code appends the quoted tweetId, text, user, and timestamp fields in the CSV format. Here is an example:
"601078026596687873", "RT @elastic: This week in @Elastic: all
things #Elasticsearch, #Logstash, #Kibana, community & ecosystem
https://t.co/grAEffXek1", "Leslie Hawthorn", "Wed Dec 31 11:02:23
IST 2015"
The preceding two methods are handy to quickly get the quoted String or Date
values for Writable values.
Now, let's write the Driver class, as shown in the following code:
public static void main(String args[]) throws IOException, ClassNotFoundException, InterruptedException {
    // Create the Configuration instance
    Configuration conf = new Configuration();
    // Elasticsearch server nodes to point to
    conf.set("es.nodes", "localhost:9200");
    // Elasticsearch index and type name in {indexName}/{typeName} format
    conf.set("es.resource", "esh/tweets");
    conf.set("es.query", query);
First, the Driver class sets the Elasticsearch node and the resource to the index where the tweets are already indexed. Then, the es.query configuration specifies the query to be used when fetching the data from the Elasticsearch index. If the es.query configuration is missing, the ES-Hadoop library queries all the documents from the index and provides them as the MapWritable input to the Mapper class. The job is then created and wired up, as shown in the following code:
// Create the Job instance
Job job = new Job(conf, "tweets to hdfs mapper");
// Set the Driver class
job.setJarByClass(Driver.class);
job.setMapperClass(Tweets2HdfsMapper.class);
// Set the InputFormat to the EsInputFormat provided by the ES-Hadoop jar
job.setInputFormatClass(EsInputFormat.class);
job.setNumReduceTasks(0);
FileOutputFormat.setOutputPath(job, new Path(args[0]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
Running the job should display the output for the provided Twitter sample data, as shown in the following console output:
Elasticsearch Hadoop Counters
Bulk Retries=0
Bulk Retries Total Time(ms)=0
Bulk Total=0
Bulk Total Time(ms)=0
Bytes Accepted=0
Bytes Received=138731
Bytes Retried=0
Bytes Sent=4460
Documents Accepted=0
Documents Received=443
Documents Retried=0
Documents Sent=0
Network Retries=0
Network Total Time(ms)=847
Node Retries=0
Scroll Total=10
Scroll Total Time(ms)=195
The console output for the execution shows that the number of documents received
from Elasticsearch is 443.
Now, on successful execution, listing the job's output directory should print console output similar to the following:

Found 6 items
-rw-r--r--   1 eshadoop supergroup   /output/ch02/_SUCCESS
The job generates as many output files as there were mapper instances. This is because we didn't have any reducer for the job.
To verify the content of the file, execute the following command:
$ hadoop fs -tail /output/ch02/part-m-00000
Now, verify that the output is similar to the one shown in the following code:
"914318221750272", "RT @elastic: .@PrediktoIoT gives users realtime
sensor data analysis w/ #elasticsearch & #Spark. Here's how http://t.co/
oECqzBWZvh http://t", "Mario Montag", "Sat Jan 03 06:25:31 IST 2015"
"601914318221750272", "RT @elastic: .@PrediktoIoT gives users realtime
sensor data analysis w/ #elasticsearch & #Spark. Here's how http://t.co/
oECqzBWZvh http://t", "Mario Montag", "Sat Jan 03 06:25:31 IST 2015"
"602223267194208256", "Centralized System and #Docker Logging with
#ELK Stack #elasticsearch #logstash #kibana http://t.co/MIn7I52Okl",
"Jol Vimenet", "Sun Dec 28 02:53:10 IST 2015"
"602223267194208256",
"Centralized System and #Docker Logging with #ELK Stack #elasticsearch
#logstash #kibana http://t.co/MIn7I52Okl", "Jol Vimenet", "Sun Dec 28
02:53:10 IST 2015"
"603461899456483328", "#Elasticsearch using in near realtime", "Ilica
Brnadi ", "Wed Dec 31 12:55:03 IST 2015" "603461899456483328",
"#Elasticsearch using in near realtime", "Ilica Brnadi ", "Wed Dec 31
12:55:03 IST 2015"
Summary
In this chapter, we discussed MapReduce programming by going through the WordCount program and saw how to develop MapReduce jobs with both the new and old MapReduce APIs. We then delved into the details of a real-world network logs monitoring problem, and you learned how to solve it in a better way by using the aggregation capabilities of Elasticsearch.
Further, you learned how to write and build a Hadoop MapReduce job that leverages ES-Hadoop to get the network logs monitoring data into Elasticsearch. Finally, we explored how to get the data out of Elasticsearch in a MapReduce job for the Twitter dataset. Overall, we gained a complete understanding of how to move data in and out between Elasticsearch and Hadoop.
In the next chapter, we will dive deeper into Elasticsearch to understand Elasticsearch mappings, how the indexing process works, and how to query Elasticsearch data in order to perform full-text search and aggregations.