Real-Time Data Stream Processing - Challenges and Perspectives
Hassan II University, Faculty of Sciences Ben M'Sik, Laboratoire Mathématiques Informatique et Traitement de l'Information (MITI), Casablanca, Morocco
1. Introduction
Nowadays, the world we live in generates a large volume of information and data from different sources, namely search engines, social networks, computer logs, e-mail clients, sensor networks, etc. All of these data are called masses of data, or Big Data. For instance, one minute generates 347,222 new tweets [1].
MapReduce is fundamentally suitable for parallelizing processing over a large amount of data, but it is not the best tool for processing the latest version of the data. The framework is based on a disk approach: the output of each iteration is written to disk, which makes it slow. Figure 1 represents MapReduce jobs; MapReduce reads data from the disk and writes them back to the disk four times, so the complete flow becomes very slow, which degrades performance.

Fig. 1 MapReduce jobs
The rest of this paper is organized as follows: in section II, we define the basics of big data, its ecosystem and stream processing. In section III, we present a survey of data processing tools. In section IV, we focus on a comparative study of the different data stream processing systems. In section V, we present an overview of two real time processing architectures. And last but not least, in section VI we suggest a model based on the previous comparisons.
2. Big data: Theoretical Foundation

This section is devoted to some of the main concepts used in big data, including an introduction to big data, its architecture, the technologies used, and concepts on big data streams.
Big data is a new concept which has been introduced due to the large volume and complexity of data that have become difficult to process using traditional database methods and tools. According to Gartner [4], "Big data is high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making." In 2010, Chen et al. [5] defined big data as "datasets which could not be captured, managed, and processed by general computers within an acceptable scope." NIST [6] says that "Big data shall mean the data of which the data volume, acquisition speed, or data representation limits the capacity of using traditional relational methods to conduct effective analysis or the data which may be effectively processed with important horizontal zoom technologies".

The characteristics of big data are summarized in the five Vs: Volume, Velocity, Variety, Veracity and Value. Volume represents the size or the quantity of the data, from terabytes to yottabytes. It is a massive evolution we are talking about: in 2005 the data were limited to 0.1 ZB, and they may reach 40 ZB and more in 2020 [7]. Velocity means that the data must be processed and analyzed quickly, in keeping with the speed of their capture. Variety indicates that the data are not all of the same type, which allows us to harness different types of data: structured, semi-structured and unstructured. Veracity targets the confidence in the data on which decisions are based. Last but not least, Value means that systems must not only be designed to process massive data efficiently but also be able to filter the most important data out of all the collected data.
According to the previously stated definitions, we can say that big data is an abstract concept which raises the following problems: how to store, analyze, process and extract the right information from varied datasets that are quickly generated and arrive in the form of a data stream.

Stream processing is a technology that enables data to be collected, integrated, analyzed and visualized in real time, while the data are being produced [8]. Stream processing solutions are designed to handle big data in real time with a highly scalable, highly available, and highly fault tolerant architecture, which makes it possible to analyze data in motion [9]. The goal of real time processing is to provide solutions that can process continuous, infinite streams of data, integrated from both live and historical sources, in a very fast and interactive way.
3. Data stream processing tools

Traditional methods used to process data, including Hadoop and precisely MapReduce jobs, are not adequate for real time processing. Real time data stream processing keeps you up to date with what is happening at the moment, whatever the speed or the volume of the data, regardless of the storage system. In order to understand well the system at hand, we are going to present a brief overview of three platforms, namely Hadoop, Spark, and Storm.

3.1 Apache Hadoop

Apache Hadoop [10] is open source software used to process big data across clusters of machines, operating on these data sets in batches. The heart of Hadoop is divided in two main parts, namely MapReduce for processing data and HDFS for storing data. It is known for its reliability, its scalability and its processing model.

MapReduce was first introduced by Jeffrey Dean and Sanjay Ghemawat at Google in 2004 [11]. It is a programming model and an associated implementation for processing and generating large data sets on large clusters of commodity machines. It is highly scalable, as it can process petabytes of data stored in HDFS on one cluster, and it is highly fault tolerant, which lets you run programs on a cluster of commodity servers. The framework is based on two servers. The master JobTracker, which is unique on the cluster, receives MapReduce jobs to run and organizes their execution on the cluster; it is also responsible for scheduling the jobs' component tasks on the slaves, as well as monitoring them and re-executing the failed tasks. The other server is the TaskTracker; there are several per cluster, and each one performs the MapReduce work itself. Each one of the TaskTrackers is a unit of calculation of the cluster.

Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. As figure 2 shows, the MapReduce library in the user program first splits the input files into M pieces of typically 16-64 MB per piece; the master picks idle workers and assigns each one a map task or a reduce task. A worker who is assigned a map task reads the contents of the corresponding input split. The intermediate key/value pairs produced by the map function are buffered in memory. Periodically, the buffered pairs are written to local disk. When a reduce worker has read all intermediate data for its partition, it sorts it by the intermediate keys so that all occurrences of the same key are grouped together. The sorting is needed because typically many different keys map to the same reduce task. If the amount of intermediate data is too large to fit in memory, an external sort is used. The reduce worker iterates over the sorted intermediate data and, for each unique intermediate key encountered, it passes the key and the corresponding set of intermediate values to the user's reduce function [11].
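To make the programming model concrete, the canonical word count can be written against Hadoop's Java MapReduce API roughly as follows (a minimal sketch in the spirit of the standard tutorial example; input and output paths come from the command line): the map function emits a (word, 1) pair per token, and the reduce function sums the counts of each unique word.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map: emit (word, 1) for every token in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce: sum the counts collected for each unique word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Note that the shuffle, sort and re-execution of failed tasks described above are entirely handled by the framework; the user only supplies these two functions.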
3.2 Apache Spark

Apache Spark is an open source cluster computing framework for large-scale data processing. Spark also provides built-in higher-level libraries: Spark Streaming for processing a continuous data stream, Spark SQL for working with structured data, MLlib as a machine learning library, and GraphX for graph computation, as shown in figure 3.
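As a small illustration of Spark's micro-batch model, discussed again in the comparison below, the following word count uses the Spark Streaming Java API (a sketch; the socket source on localhost:9999, fed for example by `nc -lk 9999`, is only an assumption for the example). The stream is cut into one-second batches, and each batch is processed like a small data set.

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class StreamingWordCount {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("StreamingWordCount");
    // Micro-batch interval of one second: the stream is cut into small batches.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

    // Text lines arriving on a TCP socket.
    JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

    JavaDStream<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());
    JavaPairDStream<String, Integer> counts =
        words.mapToPair(w -> new Tuple2<>(w, 1)).reduceByKey(Integer::sum);

    counts.print();        // dump each micro-batch's counts to stdout
    jssc.start();
    jssc.awaitTermination();
  }
}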
3.3 Apache Storm

Apache Storm is an open source distributed real-time computation system. A Storm program is packaged as a topology. A topology consists of spouts and bolts, and the links between them show how streams are passed around. A topology is represented as a data-processing Directed Acyclic Graph (DAG) which represents the whole stream processing procedure. A topology representation is shown in figure 6.

A spout is a source of streams that reads tuples from an external input source and emits them to the bolts as a stream. A bolt is a data processing unit of a Storm topology which consumes any number of input streams, conducts some specific processing, and emits new streams out to other bolts. The core abstraction in Storm is the "stream": a stream is an unbounded sequence of tuples, a tuple is a named list of values, and a field in a tuple can be an object of any type. Storm provides the primitives for transforming a stream into a new stream in a distributed and reliable way.
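The following minimal topology sketch illustrates these notions with Storm's Java API (TestWordSpout is a demo spout shipped with Storm; the topology name and parallelism hints are illustrative): a spout emits a stream of words and a bolt transforms each incoming tuple into a new tuple.

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class ExclamationTopology {

  // A bolt: consumes the word stream and emits each word with "!!!" appended.
  public static class ExclamationBolt extends BaseBasicBolt {
    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
      collector.emit(new Values(tuple.getString(0) + "!!!"));
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      declarer.declare(new Fields("word"));
    }
  }

  public static void main(String[] args) throws Exception {
    // Wire spout and bolt into a DAG; tuples are shuffled among the bolt's tasks.
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("words", new TestWordSpout(), 2);
    builder.setBolt("exclaim", new ExclamationBolt(), 4).shuffleGrouping("words");

    LocalCluster cluster = new LocalCluster();
    cluster.submitTopology("demo", new Config(), builder.createTopology());
    Thread.sleep(10_000);   // let the topology run for ten seconds
    cluster.shutdown();
  }
}

Because the bolt processes tuples one at a time as they arrive, there is no batching step anywhere in the pipeline, which is the basis of Storm's latency advantage in the comparison below.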
4. Comparative study

                             Hadoop                   Spark                    Storm
Big data processing          Batch                    Batch and Stream         Stream
Achievable latency           High                     A few seconds (< 1s)     Less than a second (< 100ms)
API programming languages    Java, Python and Scala   Java, Python and Scala   Any PL
Guaranteed data processing   Exactly-once             Exactly-once             At least once
Data storage                 Yes                      Yes                      No
In-memory processing         No                       Yes                      Yes
Fault tolerance              Yes                      Yes                      Yes
The comparison above shows that Storm is the best tool for real time stream processing: Hadoop does batch processing, and Spark is capable of micro-batching. Storm uses spouts and bolts to do one-at-a-time processing, avoiding the inherent latency overhead imposed by batching and micro-batching.
5. Real time processing architectures
In this paper, we present a short overview of two real time processing architectures, namely Lambda and Kappa.

5.1 Lambda Architecture

The lambda architecture was proposed by Nathan Marz. This architecture mixes the benefits of two processing models, batch processing and real time processing, to provide better results at low latency.
All new data are sent to both the batch layer and the speed layer. The batch layer is responsible for storing the master data set, and it continuously computes views of these data with the use of the MapReduce algorithm. The results of the batch layer are called "batch views".

The serving layer indexes the pre-computed views produced by the batch layer. It is a scalable database that swaps in new batch views as they are made available. Due to the latency of the batch layer, the results available from the serving layer are always out of date by a few hours. The serving layer can be implemented using NoSQL technologies such as HBase, Apache Druid, etc.

The speed layer compensates for the high latency of updates to the serving layer. The role of this layer is to compute in real time the data that have not been taken into account in the last batch of the batch layer. It produces real-time views that are always up to date and stores them in a fast store. The speed layer can be realized with data streaming technologies such as Apache Storm or Spark Streaming.

Yet, the lambda architecture has some limitations. The first is that the business logic is implemented twice, in the real time layer and in the batch layer: developers need to write the same code for both. The second remark concerns the number of frameworks to master. And finally, there are simpler solutions when the need is less complex.
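At query time, the lambda architecture answers by merging the batch view with the real-time view. The following toy sketch in plain Java (the key/value shape and all names are illustrative, not taken from a specific implementation) shows that merge for a simple counter:

import java.util.HashMap;
import java.util.Map;

public class LambdaQuery {
  static Map<String, Long> batchView = new HashMap<>();     // recomputed by the batch layer
  static Map<String, Long> realtimeView = new HashMap<>();  // updated by the speed layer

  // query(k) = f(batch view, real-time view); here f is a simple sum of counts.
  static long query(String key) {
    return batchView.getOrDefault(key, 0L) + realtimeView.getOrDefault(key, 0L);
  }

  public static void main(String[] args) {
    batchView.put("page-views", 1_000_000L);  // from the last nightly batch run
    realtimeView.put("page-views", 421L);     // events seen since that run
    System.out.println(query("page-views"));  // prints 1000421
  }
}

The sketch also makes the duplication problem visible: whatever logic fills batchView must be re-implemented in the streaming code that fills realtimeView.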
5.2 Kappa Architecture

Fig. 8 Kappa architecture [21]

Figure 8 represents this new architecture.
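In the kappa architecture the batch layer disappears: a single stream processing engine serves both roles, and recomputation is done by replaying the retained log through the same code. A minimal sketch of that replay with the Kafka consumer API (the topic name, group naming and deserializer settings are assumptions for the example):

import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class KappaReplay {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("group.id", "replay-" + System.currentTimeMillis()); // fresh group: no committed offsets
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
      TopicPartition tp = new TopicPartition("events", 0);
      consumer.assign(List.of(tp));
      consumer.seekToBeginning(List.of(tp));   // replay the whole retained history
      while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<String, String> r : records) {
          process(r.value());                  // the same logic as the live stream job
        }
      }
    }
  }

  static void process(String event) { /* update the output view */ }
}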
The chart below represents a short comparison of the two architectures discussed before, namely Lambda and Kappa, following specific criteria.
6. Proposed Architecture
According to the architectures and platforms presented in the previous paragraphs, where we laid out the benefits and disadvantages of each, we designed a new architecture that is open source and takes into account several criteria, among which the real-time processing of large data arriving at high speed. It also allows an unlimited number of users to create many new and innovative features and make several improvements.

This architecture must ingest, filter, analyze and process incoming data streams with low latency: the system must respond fairly quickly, and this depends on the processing architecture used (Spark, Storm, etc.), the size of the data, and the complexity of the calculations performed. On the other hand, one must consider how to choose the most efficient tool; it should be easy to use, so as not to pose infrastructure problems to users, be they analysts or developers.

Ideally, we want an architecture that makes the transition to scale fairly easy, with visually manageable resource allocation. Furthermore, newly configured resources have to join the cluster seamlessly, so that changes in load or traffic can be handled without interrupting the streaming data processing globally.

And finally, a real-time architecture must provide live streaming data visualization. It must allow the dynamic creation of dashboards, custom graphics, and UI extensions.

Fig. 9 Proposed architecture

Figure 9 represents both the traditional architecture of big data and the proposed architecture. The traditional architecture contains three layers, namely storage, processing, and analysis, whereas our proposed architecture is organized as follows. The data come from different devices and equipment such as sensors, networks, cyber infrastructure, the web, e-mail, social media and many more. These data, which come as a stream from different sources at high speed, are acquired by the Integration Layer using a set of tools and functionalities (e.g. Apache Kafka). After being ingested, the data are filtered through extract-transform-load (ETL) operations (e.g. Apache Pig); in other words, the data are cleaned and their quality is analyzed, etc. This Filtering Layer prepares the data for the Real Time Processing Layer, which aims to process the data in real time and with very low latency. As shown in figure 9, two technologies are used in this layer, namely Storm, which is a tool for real time processing, and Machine Learning. The use of Machine Learning in this layer allows the archiving of data: its goal is to visualize previous trends using a request/response method on similar inputs, and ML learns continuously from newly arriving data, which facilitates processing. Storm, on the other hand, is used in this layer in order to process data in real time. It uses the notion of topology, which is a network of spouts and bolts. As has been noted before, the streams come from a spout that broadcasts data coming from external sources into the Storm topology.
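As an illustration of the Integration Layer's ingestion side, the sketch below publishes a sensor reading to a Kafka topic from which the filtering and processing layers would consume (the topic name "raw-events", the key choice and the JSON payload are assumptions for the example):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SensorIngest {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
      String event = "{\"sensor\":\"s-17\",\"temp\":21.4,\"ts\":" + System.currentTimeMillis() + "}";
      // Key by sensor id so all readings of one sensor land in the same partition, in order.
      producer.send(new ProducerRecord<>("raw-events", "s-17", event));
    } // close() flushes any buffered records
  }
}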
References
[1] K. Leboeuf, "2016 Update: What Happens in One Internet Minute?", Excelacom, Inc. [Online]. Available: http://www.excelacom.com/resources/blog/2016-update-what-happens-in-one-internet-minute.
[2] “MapReduce.” [Online]. Available: