Mastering Apache Spark PDF
Table of Contents
Introduction 1.1
Overview of Apache Spark 1.2
StorageTab 3.4
StoragePage 3.4.1
RDDPage 3.4.2
EnvironmentTab 3.5
EnvironmentPage 3.5.1
ExecutorsTab 3.6
ExecutorsPage 3.6.1
ExecutorThreadDumpPage 3.6.2
SparkUI — Web UI of Spark Application 3.7
SparkUITab 3.7.1
BlockStatusListener Spark Listener 3.8
EnvironmentListener Spark Listener 3.9
ExecutorsListener Spark Listener 3.10
JobProgressListener Spark Listener 3.11
StorageStatusListener Spark Listener 3.12
StorageListener — Spark Listener for Tracking Persistence Status of RDD Blocks 3.13
RDDOperationGraphListener Spark Listener 3.14
WebUI — Framework For Web UIs 3.15
WebUIPage — Contract of Pages in Web UI 3.15.1
WebUITab — Contract of Tabs in Web UI 3.15.2
RDDStorageInfo 3.16
RDDInfo 3.17
LiveEntity 3.18
LiveRDD 3.18.1
UIUtils 3.19
JettyUtils 3.20
web UI Configuration Properties 3.21
Sink — Contract of Metrics Sinks 4.5
MetricsServlet JSON Metrics Sink 4.5.1
Metrics Configuration Properties 4.6
Spark MLlib
Spark MLlib — Machine Learning in Spark 6.1
ML Pipelines (spark.ml) 6.2
Pipeline 6.2.1
PipelineStage 6.2.2
Transformers 6.2.3
Transformer 6.2.3.1
Tokenizer 6.2.3.2
Estimators 6.2.4
Estimator 6.2.4.1
StringIndexer 6.2.4.1.1
KMeans 6.2.4.1.2
TrainValidationSplit 6.2.4.1.3
Predictor 6.2.4.2
RandomForestRegressor 6.2.4.2.1
Regressor 6.2.4.3
LinearRegression 6.2.4.3.1
Classifier 6.2.4.4
RandomForestClassifier 6.2.4.4.1
DecisionTreeClassifier 6.2.4.4.2
Models 6.2.5
Model 6.2.5.1
Evaluator — ML Pipeline Component for Model Scoring 6.2.6
BinaryClassificationEvaluator — Evaluator of Binary Classification Models 6.2.6.1
ClusteringEvaluator — Evaluator of Clustering Models 6.2.6.2
MulticlassClassificationEvaluator — Evaluator of Multiclass Classification Models 6.2.6.3
RegressionEvaluator — Evaluator of Regression Models 6.2.6.4
CrossValidator — Model Tuning / Finding The Best Model 6.2.7
CrossValidatorModel 6.2.7.1
ParamGridBuilder 6.2.7.2
CrossValidator with Pipeline Example 6.2.7.3
Params and ParamMaps 6.2.8
ValidatorParams 6.2.8.1
HasParallelism 6.2.8.2
ML Persistence — Saving and Loading Models and Pipelines 6.3
MLWritable 6.3.1
MLReader 6.3.2
Example — Text Classification 6.4
Example — Linear Regression 6.5
Logistic Regression 6.6
LogisticRegression 6.6.1
Latent Dirichlet Allocation (LDA) 6.7
Vector 6.8
LabeledPoint 6.9
Streaming MLlib 6.10
GeneralizedLinearRegression 6.11
Alternating Least Squares (ALS) Matrix Factorization 6.12
ALS — Estimator for ALSModel 6.12.1
ALSModel — Model for Predictions 6.12.2
ALSModelReader 6.12.3
Instrumentation 6.13
MLUtils 6.14
Inside Creating SparkContext 9.3.2
ConsoleProgressBar 9.3.3
SparkStatusTracker 9.3.4
Local Properties — Creating Logical Job Groups 9.3.5
RDD — Resilient Distributed Dataset 9.4
RDD 9.4.1
RDD Lineage — Logical Execution Plan 9.4.2
TaskLocation 9.4.3
ParallelCollectionRDD 9.4.4
MapPartitionsRDD 9.4.5
OrderedRDDFunctions 9.4.6
CoGroupedRDD 9.4.7
SubtractedRDD 9.4.8
HadoopRDD 9.4.9
NewHadoopRDD 9.4.10
ShuffledRDD 9.4.11
Operators 9.5
Transformations 9.5.1
PairRDDFunctions 9.5.1.1
Actions 9.5.2
Caching and Persistence 9.6
StorageLevel 9.6.1
Partitions and Partitioning 9.7
Partition 9.7.1
Partitioner 9.7.2
HashPartitioner 9.7.2.1
Shuffling 9.8
Checkpointing 9.9
CheckpointRDD 9.9.1
RDD Dependencies 9.10
NarrowDependency — Narrow Dependencies 9.10.1
ShuffleDependency — Shuffle Dependencies 9.10.2
Map/Reduce-side Aggregator 9.11
AppStatusStore 9.12
AppStatusPlugin 9.13
AppStatusListener 9.14
KVStore 9.15
KVStoreView 9.15.1
ElementTrackingStore 9.15.2
InMemoryStore 9.15.3
LevelDB 9.15.4
InterruptibleIterator — Iterator With Support For Task Cancellation 9.16
TaskScheduler — Spark Scheduler 11.5
Tasks 11.5.1
ShuffleMapTask — Task for ShuffleMapStage 11.5.1.1
ResultTask 11.5.1.2
FetchFailedException 11.5.2
MapStatus — Shuffle Map Output Status 11.5.3
TaskSet — Set of Tasks for Stage 11.5.4
TaskSetManager 11.5.5
Schedulable 11.5.5.1
Schedulable Pool 11.5.5.2
Schedulable Builders 11.5.5.3
FIFOSchedulableBuilder 11.5.5.3.1
FairSchedulableBuilder 11.5.5.3.2
Scheduling Mode — spark.scheduler.mode Spark Property 11.5.5.4
TaskInfo 11.5.5.5
TaskDescription — Metadata of Single Task 11.5.6
TaskSchedulerImpl — Default TaskScheduler 11.5.7
Speculative Execution of Tasks 11.5.7.1
TaskResultGetter 11.5.7.2
TaskContext 11.5.8
TaskContextImpl 11.5.8.1
TaskResults — DirectTaskResult and IndirectTaskResult 11.5.9
TaskMemoryManager — Memory Manager of Single Task 11.5.10
MemoryConsumer 11.5.10.1
TaskMetrics 11.5.11
ShuffleWriteMetrics 11.5.11.1
TaskSetBlacklist — Blacklisting Executors and Nodes For TaskSet 11.5.12
SchedulerBackend — Pluggable Scheduler Backends 11.6
CoarseGrainedSchedulerBackend 11.6.1
DriverEndpoint — CoarseGrainedSchedulerBackend RPC Endpoint 11.6.1.1
ExecutorBackend — Pluggable Executor Backends 11.7
CoarseGrainedExecutorBackend 11.7.1
MesosExecutorBackend 11.7.2
BlockManager — Key-Value Store of Blocks of Data 11.8
MemoryStore 11.8.1
BlockEvictionHandler 11.8.2
StorageMemoryPool 11.8.3
MemoryPool 11.8.4
DiskStore 11.8.5
BlockDataManager 11.8.6
RpcHandler 11.8.7
RpcResponseCallback 11.8.8
TransportRequestHandler 11.8.9
TransportContext 11.8.10
TransportServer 11.8.11
TransportClientFactory 11.8.12
MessageHandler 11.8.13
BlockManagerMaster — BlockManager for Driver 11.8.14
BlockManagerMasterEndpoint — BlockManagerMaster RPC Endpoint 11.8.14.1
DiskBlockManager 11.8.15
BlockInfoManager 11.8.16
BlockInfo 11.8.16.1
BlockManagerSlaveEndpoint 11.8.17
DiskBlockObjectWriter 11.8.18
BlockManagerSource — Metrics Source for BlockManager 11.8.19
ShuffleMetricsSource — Metrics Source of BlockManager for Shuffle-Related Metrics 11.8.20
StorageStatus 11.8.21
ManagedBuffer 11.8.22
MapOutputTracker — Shuffle Map Output Registry 11.9
MapOutputTrackerMaster — MapOutputTracker For Driver 11.9.1
MapOutputTrackerMasterEndpoint 11.9.1.1
MapOutputTrackerWorker — MapOutputTracker for Executors 11.9.2
ShuffleManager — Pluggable Shuffle Systems 11.10
SortShuffleManager — The Default Shuffle System 11.10.1
ExternalShuffleService 11.10.2
OneForOneStreamManager 11.10.3
ShuffleBlockResolver 11.10.4
IndexShuffleBlockResolver 11.10.4.1
ShuffleWriter 11.10.5
BypassMergeSortShuffleWriter 11.10.5.1
SortShuffleWriter 11.10.5.2
UnsafeShuffleWriter — ShuffleWriter for SerializedShuffleHandle 11.10.5.3
BaseShuffleHandle — Fallback Shuffle Handle 11.10.6
BypassMergeSortShuffleHandle — Marker Interface for Bypass Merge Sort Shuffle Handles 11.10.7
SerializedShuffleHandle — Marker Interface for Serialized Shuffle Handles 11.10.8
ShuffleReader 11.10.9
BlockStoreShuffleReader 11.10.9.1
ShuffleBlockFetcherIterator 11.10.10
ShuffleExternalSorter — Cache-Efficient Sorter 11.10.11
ExternalSorter 11.10.12
Serialization 11.11
Serializer — Task SerDe 11.11.1
SerializerInstance 11.11.2
SerializationStream 11.11.3
DeserializationStream 11.11.4
ExternalClusterManager — Pluggable Cluster Managers 11.12
BroadcastManager 11.13
BroadcastFactory — Pluggable Broadcast Variable Factories 11.13.1
TorrentBroadcastFactory 11.13.1.1
TorrentBroadcast 11.13.1.2
CompressionCodec 11.13.2
ContextCleaner — Spark Application Garbage Collector 11.14
CleanerListener 11.14.1
Dynamic Allocation (of Executors) 11.15
ExecutorAllocationManager — Allocation Manager for Spark Core 11.15.1
ExecutorAllocationClient 11.15.2
ExecutorAllocationListener 11.15.3
ExecutorAllocationManagerSource 11.15.4
HTTP File Server 11.16
Data Locality 11.17
Cache Manager 11.18
OutputCommitCoordinator 11.19
RpcEnv — RPC Environment 11.20
RpcEndpoint 11.20.1
RpcEndpointRef 11.20.2
RpcEnvFactory 11.20.3
Netty-based RpcEnv 11.20.4
TransportConf — Transport Configuration 11.21
Utils Helper Object 11.22
Spark on YARN
Spark on YARN 14.1
YarnShuffleService — ExternalShuffleService on YARN 14.2
ExecutorRunnable 14.3
Client 14.4
YarnRMClient 14.5
ApplicationMaster 14.6
AMEndpoint — ApplicationMaster RPC Endpoint 14.6.1
YarnClusterManager — ExternalClusterManager for YARN 14.7
TaskSchedulers for YARN 14.8
YarnScheduler 14.8.1
YarnClusterScheduler 14.8.2
SchedulerBackends for YARN 14.9
YarnSchedulerBackend 14.9.1
YarnClientSchedulerBackend 14.9.2
YarnClusterSchedulerBackend 14.9.3
YarnSchedulerEndpoint RPC Endpoint 14.9.4
YarnAllocator 14.10
Introduction to Hadoop YARN 14.11
Setting up YARN Cluster 14.12
Kerberos 14.13
ConfigurableCredentialManager 14.13.1
ClientDistributedCacheManager 14.14
YarnSparkHadoopUtil 14.15
Settings 14.16
Spark Standalone
Spark Standalone 15.1
Standalone Master — Cluster Manager of Spark Standalone 15.2
Standalone Worker 15.3
web UI 15.4
ApplicationPage 15.4.1
LocalSparkCluster — Single-JVM Spark Standalone Cluster 15.5
Submission Gateways 15.6
Management Scripts for Standalone Master 15.7
Management Scripts for Standalone Workers 15.8
Checking Status 15.9
Example 2-workers-on-1-node Standalone Cluster (one executor per worker) 15.10
StandaloneSchedulerBackend 15.11
Spark on Mesos
Spark on Mesos 16.1
MesosCoarseGrainedSchedulerBackend 16.2
About Mesos 16.3
Execution Model
Execution Model 17.1
Varia
Building Apache Spark from Sources 19.1
Spark and Hadoop 19.2
SparkHadoopUtil 19.2.1
Spark and software in-memory file systems 19.3
Spark and The Others 19.4
Distributed Deep Learning on Spark 19.5
Spark Packages 19.6
Interactive Notebooks
Interactive Notebooks 20.1
Apache Zeppelin 20.1.1
Spark Notebook 20.1.2
Exercises
One-liners using PairRDDFunctions 22.1
Learning Jobs and Partitions Using take Action 22.2
Spark Standalone - Using ZooKeeper for High-Availability of Master 22.3
Spark’s Hello World using Spark shell and Scala 22.4
WordCount using Spark shell 22.5
Your first complete Spark application (using Scala and sbt) 22.6
Spark (notable) use cases 22.7
Using Spark SQL to update data in Hive using ORC files 22.8
Developing Custom SparkListener to monitor DAGScheduler in Scala 22.9
Developing RPC Environment 22.10
Developing Custom RDD 22.11
Working with Datasets from JDBC Data Sources (and PostgreSQL) 22.12
Causing Stage to Fail 22.13
Further Learning
Courses 23.1
Books 23.2
Introduction
— Flannery O'Connor
I’m Jacek Laskowski, an independent consultant, software developer and technical instructor
specializing in Apache Spark, Apache Kafka and Kafka Streams (with Scala, sbt,
Kubernetes, DC/OS, Apache Mesos, and Hadoop YARN).
I offer software development and consultancy services with very hands-on in-depth
workshops and mentoring. Reach out to me at [email protected] or @jaceklaskowski to
discuss opportunities.
Consider joining me at Warsaw Scala Enthusiasts and Warsaw Spark meetups in Warsaw,
Poland.
Tip
I’m also writing Mastering Spark SQL, Mastering Kafka Streams, Apache Kafka Notebook and Spark Structured Streaming Notebook gitbooks.
Expect text and code snippets from a variety of public sources. Attribution follows.
Overview of Apache Spark
Apache Spark
Apache Spark is an open-source distributed general-purpose cluster computing
framework with (mostly) in-memory data processing engine that can do ETL, analytics,
machine learning and graph processing on large volumes of data at rest (batch processing)
or in motion (streaming processing) with rich concise high-level APIs for the programming
languages: Scala, Python, Java, R, and SQL.
Using Spark Application Frameworks, Spark simplifies access to machine learning and
predictive analytics at scale.
Spark is mainly written in Scala, but provides developer APIs for languages like Java, Python,
and R.
If you have large amounts of data that requires low latency processing that a typical
MapReduce program cannot provide, Spark is a viable alternative.
The Apache Spark project is an umbrella for SQL (with Datasets), streaming, machine
learning (pipelines) and graph processing engines built atop Spark Core. You can run them
all in a single application using a consistent API.
Spark runs locally as well as in clusters, on-premises or in cloud. It runs on top of Hadoop
YARN, Apache Mesos, standalone or in the cloud (Amazon EC2 or IBM Bluemix).
Apache Spark’s Streaming and SQL programming models with MLlib and GraphX make it
easier for developers and data scientists to build applications that exploit machine learning
and graph analytics.
At a high level, any Spark application creates RDDs out of some input, runs (lazy) transformations of these RDDs into some other form (shape), and finally performs actions to collect or store data. Not much, huh?
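A minimal illustration of that lifecycle in the Scala API (assuming the sc SparkContext available in spark-shell):

// create an RDD from a local collection
val nums = sc.parallelize(1 to 5)
// transformations are lazy - nothing is computed yet
val doubled = nums.map(_ * 2)
// an action triggers the computation and brings the result back to the driver
doubled.collect   // Array(2, 4, 6, 8, 10)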
You can look at Spark from programmer’s, data engineer’s and administrator’s point of view.
And to be honest, all three types of people will spend quite a lot of their time with Spark to
finally reach the point where they exploit all the available features. Programmers use
language-specific APIs (and work at the level of RDDs using transformations and actions),
data engineers use higher-level abstractions like DataFrames or Pipelines APIs or external
tools (that connect to Spark), and administrators set up Spark clusters for those applications to be deployed to and run on.
Note
When you hear "Apache Spark" it can be two things — the Spark engine aka Spark Core, or the Apache Spark open source project, which is an "umbrella" term for Spark Core and the accompanying Spark Application Frameworks, i.e. Spark SQL, Spark Streaming, Spark MLlib and Spark GraphX, that sit on top of Spark Core and the main data abstraction in Spark called RDD - Resilient Distributed Dataset.
Why Spark
Let’s list a few of the many reasons for Spark. We are doing it first, and then comes the
overview that lends a more technical helping hand.
You could then use Spark Standalone built-in cluster manager to deploy your Spark
applications to a production-grade cluster to run on a full dataset.
One of the Spark project goals was to deliver a platform that supports a very wide array
of diverse workflows - not only MapReduce batch jobs (which were already available in Hadoop at that time), but also iterative computations like graph algorithms or
Machine Learning.
And also different scales of workloads from sub-second interactive jobs to jobs that run
for many hours.
Spark combines batch, interactive, and streaming workloads under one rich concise API.
Spark supports near real-time streaming workloads via Spark Streaming application
framework.
ETL workloads and Analytics workloads are different, however Spark attempts to offer a
unified platform for a wide variety of workloads.
Graph and Machine Learning algorithms are iterative by nature, and fewer saves to disk or transfers over the network mean better performance.
You should watch the video What is Apache Spark? by Mike Olson, Chief Strategy Officer
and Co-Founder at Cloudera, who provides a very exceptional overview of Apache Spark, its
rise in popularity in the open source community, and how Spark is primed to replace
MapReduce as the general processing engine in Hadoop.
Spark draws many ideas out of Hadoop MapReduce. They work together well - Spark on
YARN and HDFS - while improving on the performance and simplicity of the distributed
computing engine.
And it should not come as a surprise, without Hadoop MapReduce (its advances and
deficiencies), Spark would not have been born at all.
It is also exposed in Java, Python and R (as well as SQL, i.e. SparkSQL, in a sense).
So, when you have a need for distributed Collections API in Scala, Spark with RDD API
should be a serious contender.
It expanded on the available computation styles beyond the only map-and-reduce available
in Hadoop MapReduce.
Whether you use Spark SQL, Spark MLlib, Spark Streaming, or Spark GraphX, you still use the same development and deployment environment for large data sets to yield a result, be it a prediction (Spark MLlib), a structured data query (Spark SQL) or just a large distributed batch (Spark Core) or streaming (Spark Streaming) computation.
Spark is also very productive in that teams can exploit the different skills the team members have acquired so far. Data analysts, data scientists, and Python, Java, Scala or R programmers can all use the same Spark platform through a tailor-made API. It makes it possible to bring skilled people with their expertise in different programming languages together on a Spark project.
Using the Spark shell you can execute computations to process large amounts of data (The
Big Data). It’s all interactive and very useful to explore the data before final production
release.
Also, using the Spark shell you can access any Spark cluster as if it was your local machine.
Just point the Spark shell to a 20-node cluster with 10TB of RAM in total (using --master ) and
use all the components (and their abstractions) like Spark SQL, Spark MLlib, Spark
Streaming, and Spark GraphX.
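For example (the master URL below is hypothetical):

$ ./bin/spark-shell --master spark://10.0.0.1:7077

Once connected, the same sc and spark entry points are available as in local mode, only now backed by the cluster's executors.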
Depending on your needs and skills, you may see a better fit for SQL vs programming APIs
or apply machine learning algorithms (Spark MLlib) from data in graph data structures
(Spark GraphX).
Single Environment
Regardless of which programming language you are good at, be it Scala, Java, Python, R or
SQL, you can use the same single clustered runtime environment for prototyping, ad hoc
queries, and deploying your applications leveraging the many ingestion data points offered
by the Spark platform.
You can be as low-level as using RDD API directly or leverage higher-level APIs of Spark
SQL (Datasets), Spark MLlib (ML Pipelines), Spark GraphX (Graphs) or Spark Streaming
(DStreams).
The single programming model and execution engine for different kinds of workloads
simplify development and deployment architectures.
Both input and output data sources allow programmers and data engineers to use Spark as a platform where large amounts of data are read from or saved to for processing, interactively (using the Spark shell) or in applications.
Spark embraces many concepts in a single unified development and runtime environment.
Machine learning, which is so tool- and feature-rich in Python (e.g. the SciKit library), can now be used by Scala developers (via the Pipeline API in Spark MLlib or by calling pipe() ).
This single platform gives plenty of opportunities for Python, Scala, Java, and R
programmers as well as data engineers (SparkR) and scientists (using proprietary enterprise
data warehouses with Thrift JDBC/ODBC Server in Spark SQL).
Mind the proverb if all you have is a hammer, everything looks like a nail, too.
Low-level Optimizations
Apache Spark uses a directed acyclic graph (DAG) of computation stages (aka execution
DAG). It postpones any processing until really required for actions. Spark’s lazy evaluation
gives plenty of opportunities to induce low-level optimizations (so users have to know less to
do more).
Spark supports diverse workloads, but successfully targets low-latency iterative ones. They
are often used in Machine Learning and graph algorithms.
Many Machine Learning algorithms, like logistic regression, require plenty of iterations before the resulting models become optimal. The same applies to graph algorithms that traverse all the nodes and edges when needed. Such computations can increase their performance when the interim partial results are stored in memory or on very fast solid state drives.
Spark can cache intermediate data in memory for faster model building and training. Once
the data is loaded to memory (as an initial step), reusing it multiple times incurs no
performance slowdowns.
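A minimal sketch of that pattern in Scala (the input path and the per-iteration computation are placeholders):

// parse the input once and keep it in memory across iterations
val points = sc.textFile("hdfs:///data/points.txt")
  .map(_.split(",").map(_.toDouble))
  .cache()

// every iteration reuses the cached data instead of re-reading and re-parsing the file
(1 to 10).foreach { i =>
  val cost = points.map(_.sum).sum()   // placeholder for a real per-iteration computation
  println(s"iteration $i: cost = $cost")
}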
Also, graph algorithms can traverse graphs one connection per iteration with the partial
result in memory.
Less disk access and network can make a huge difference when you need to process lots of
data, esp. when it is a BIG Data.
Scala in Spark, especially, makes for much less boilerplate code (compared to other languages and approaches like MapReduce in Java).
Developers no longer have to learn many different processing engines and platforms, and can instead spend their time mastering framework APIs per use case (atop a single computation engine, Spark).
In the not-so-long-ago times, when the most prevalent distributed computing framework was Hadoop MapReduce, you could reuse data between computations (even partial ones!) only after you had written it to an external storage like the Hadoop Distributed Filesystem (HDFS). It can cost you a lot of time to compute even very basic multi-stage computations. It simply suffers from IO (and perhaps network) overhead.
One of the many motivations to build Spark was to have a framework that is good at data
reuse.
Spark cuts it out in a way to keep as much data as possible in memory and keep it there
until a job is finished. It doesn’t matter how many stages belong to a job. What does matter
is the available memory and how effective you are in using the Spark API (so that no shuffle occurs).
The less network and disk IO, the better performance, and Spark tries hard to find ways to
minimize both.
The reasonably small codebase of Spark invites project contributors - programmers who extend the platform and fix bugs at a steadier pace.
ShuffleClient — Contract to Fetch Shuffle Blocks
ShuffleClient can optionally be initialized with an appId (which actually does nothing by default).
ShuffleClient has shuffle-related Spark metrics that are used when BlockManager is
requested for a shuffle-related Spark metrics source (only when Executor is created for a
non-local / cluster mode).
package org.apache.spark.network.shuffle;
Table 2. ShuffleClients
ShuffleClient Description
BlockTransferService
ExternalShuffleClient
init Method
MetricSet shuffleMetrics()
BlockTransferService — Pluggable Block Transfers (To Fetch and Upload Blocks)
BlockTransferService is the base for ShuffleClients that can fetch and upload blocks of data
synchronously or asynchronously.
package org.apache.spark.network
fetchBlockSync Method
fetchBlockSync(
host: String,
port: Int,
execId: String,
blockId: String,
tempFileManager: TempFileManager): ManagedBuffer
fetchBlockSync …FIXME
fetchBlockSync is the synchronous (and hence blocking) way to fetch one blockId block (and corresponds to the ShuffleClient parent’s asynchronous fetchBlocks).
fetchBlockSync is a mere blocking wrapper around fetchBlocks that fetches one blockId block and waits until the fetch finishes.
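The following is only a simplified sketch of that idea (not the actual Spark code); it assumes the five-parameter fetchBlocks shown later for NettyBlockTransferService is in scope, and bridges its callbacks to a Promise that the caller blocks on:

import scala.concurrent.{Await, Promise}
import scala.concurrent.duration.Duration
import org.apache.spark.network.buffer.ManagedBuffer
import org.apache.spark.network.shuffle.BlockFetchingListener

// sketch: fetch a single block and wait for the asynchronous fetch to finish
def fetchOneBlockBlocking(host: String, port: Int, execId: String, blockId: String): ManagedBuffer = {
  val result = Promise[ManagedBuffer]()
  fetchBlocks(host, port, execId, Array(blockId), new BlockFetchingListener {
    override def onBlockFetchSuccess(blockId: String, data: ManagedBuffer): Unit =
      result.trySuccess(data)   // the real code also retains or copies the buffer
    override def onBlockFetchFailure(blockId: String, exception: Throwable): Unit =
      result.tryFailure(exception)
  })
  Await.result(result.future, Duration.Inf)
}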
uploadBlockSync(
hostname: String,
port: Int,
execId: String,
blockId: BlockId,
blockData: ManagedBuffer,
level: StorageLevel,
classTag: ClassTag[_]): Unit
uploadBlockSync …FIXME
uploadBlockSync is a mere blocking wrapper around uploadBlock that waits until the upload
finishes.
ExternalShuffleClient
ExternalShuffleClient is a ShuffleClient that…FIXME
void registerWithShuffleServer(
String host,
int port,
String execId,
ExecutorShuffleInfo executorInfo) throws IOException, InterruptedException
registerWithShuffleServer …FIXME
fetchBlocks Method
void fetchBlocks(
String host,
int port,
String execId,
String[] blockIds,
BlockFetchingListener listener,
TempFileManager tempFileManager)
fetchBlocks …FIXME
NettyBlockTransferService — Netty-Based BlockTransferService
NettyBlockTransferService is a BlockTransferService that uses Netty for uploading and fetching blocks of data.
Refer to Logging.
fetchBlocks Method
fetchBlocks(
host: String,
port: Int,
execId: String,
blockIds: Array[String],
listener: BlockFetchingListener): Unit
When executed, fetchBlocks prints out the following TRACE message in the logs:
createAndStart method…FIXME
If however the number of retries is not greater than 0 (it could be 0 or less), the
RetryingBlockFetcher.BlockFetchStarter created earlier is started (with the input blockIds
and listener ).
In case of any Exception , you should see the following ERROR message in the logs and
the input BlockFetchingListener gets notified (using onBlockFetchFailure for every block
id).
Caution FIXME
close(): Unit
close …FIXME
In the end, you should see the INFO message in the logs:
uploadBlock(
hostname: String,
port: Int,
execId: String,
blockId: BlockId,
blockData: ManagedBuffer,
level: StorageLevel,
classTag: ClassTag[_]): Future[Unit]
The UploadBlock message holds the application id, the input execId and blockId . It also
holds the serialized bytes for block metadata with level and classTag serialized (using
the internal JavaSerializer ) as well as the serialized bytes for the input blockData itself
(this time however the serialization uses ManagedBuffer.nioByteBuffer method).
When blockId block was successfully uploaded, you should see the following TRACE
message in the logs:
When an upload failed, you should see the following ERROR message in the logs:
UploadBlock Message
UploadBlock is a BlockTransferMessage that describes a block being uploaded, i.e. send
metadata
As an Encodable , UploadBlock can calculate the encoded size and do encoding and
decoding itself to or from a ByteBuf , respectively.
createServer …FIXME
SparkConf
SecurityManager
Port number
NettyBlockRpcServer — NettyBlockTransferService’s RpcHandler
NettyBlockRpcServer is a RpcHandler that handles messages for
NettyBlockTransferService.
Tip Enable TRACE logging level to see received messages in the logs.
Refer to Logging.
NettyBlockRpcServer then registers a stream of ManagedBuffer s (for the blocks) with the internal OneForOneStreamManager.
In the end, NettyBlockRpcServer responds with a StreamHandle (with the streamId and the
number of blocks). The response is serialized as a ByteBuffer .
Application ID
Serializer
BlockDataManager
receive(
client: TransportClient,
rpcMessage: ByteBuffer,
responseContext: RpcResponseCallback): Unit
receive …FIXME
BlockFetchingListener
BlockFetchingListener is the contract of EventListeners that want to be notified about block fetch successes and failures.
package org.apache.spark.network.shuffle;
Table 2. BlockFetchingListeners
BlockFetchingListener Description
RetryingBlockFetchListener
"Unnamed" in ShuffleBlockFetcherIterator
"Unnamed" in BlockTransferService
RetryingBlockFetcher
RetryingBlockFetcher is…FIXME
0 which it is by default)
At initiateRetry, RetryingBlockFetcher prints out the following INFO message to the logs
(with the number of outstandingBlocksIds):
TransportConf
BlockFetchStarter
BlockFetchingListener
void start()
initiateRetry …FIXME
void fetchAllOutstanding()
outstandingBlocksIds.
RetryingBlockFetchListener
RetryingBlockFetchListener is a BlockFetchingListener that RetryingBlockFetcher uses to be notified about the results of block fetches (and to initiate retries for the failed ones).
onBlockFetchSuccess Method
onBlockFetchSuccess …FIXME
onBlockFetchFailure Method
onBlockFetchFailure …FIXME
BlockFetchStarter
BlockFetchStarter is the contract of…FIXME…to createAndStart.
Web UI — Spark Application’s Web Console
web UI comes with the following tabs (which may not all be visible immediately, but only
after the respective modules are in use, e.g. the SQL or Streaming tabs):
1. Jobs
2. Stages
3. Storage
4. Environment
5. Executors
Tip
You can use the web UI after the application has finished by persisting events (using EventLoggingListener) and using Spark History Server.
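For that to work, event logging has to be enabled before the application runs, e.g. in conf/spark-defaults.conf (the log directory below is just an example):

spark.eventLog.enabled           true
spark.eventLog.dir               hdfs:///spark-events
spark.history.fs.logDirectory    hdfs:///spark-events

The first two properties make the application persist its events; the last one tells the Spark History Server where to read them from.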
Jobs
Jobs Tab
Jobs tab in web UI shows status of all Spark jobs in a Spark application (i.e. a
SparkContext).
Details for Job page is registered under the /job URL, i.e. http://localhost:4040/jobs/job/?id=0, and accepts one mandatory id request parameter as a job identifier.
When a job id is not found, you should see "No information to display for job ID" message.
Figure 4. "No information to display for job" in Details for Job Page
JobPage displays the job’s status, group (if available), and the stages per state: active,
Figure 5. Details for Job Page with Active and Pending Stages
Stages
Stages Tab
Stages tab in web UI shows…FIXME
Storage
Storage Tab
Storage tab in web UI shows…FIXME
Environment
Environment Tab
Environment tab in web UI shows…FIXME
Executors
Executors Tab
Executors tab in web UI shows…FIXME
What’s interesting about how Storage Memory is displayed in the Executors tab is that the value is calculated in a way that is different from what the page displays (using custom JavaScript).
getExecInfo Method
getExecInfo(
listener: ExecutorsListener,
statusId: Int,
isActive: Boolean): ExecutorSummary
Caution FIXME
Settings
spark.ui.threadDumpsEnabled
spark.ui.threadDumpsEnabled (default: true ) is to enable ( true ) or disable ( false )
ExecutorThreadDumpPage.
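For example, to disable the Thread Dump page for an application:

$ ./bin/spark-shell --conf spark.ui.threadDumpsEnabled=false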
JobsTab
JobsTab is a SparkUITab with jobs prefix.
Parent SparkUI
AppStatusStore
When created, JobsTab creates the following pages and attaches them immediately:
AllJobsPage
JobPage
handleKillRequest Method
handleKillRequest …FIXME
AllJobsPage
AllJobsPage renders a summary, an event timeline, and active, completed, and failed jobs
of a Spark application.
Tip Jobs (in any state) are displayed when their number is greater than 0 .
AllJobsPage displays the Summary section with the current Spark user, total uptime,
When you hover over a job in Event Timeline not only you see the job legend but also the
job is highlighted in the Summary section.
Figure 4. Hovering Over Job in Event Timeline Highlights The Job in Status Section
The Event Timeline section shows not only jobs but also executors.
Parent JobsTab
AppStatusStore
JobPage
JobPage is a WebUIPage with job prefix.
Parent JobsTab
AppStatusStore
StagesTab — Stages for All Jobs
When created, StagesTab creates the following pages and attaches them immediately:
AllStagesPage
StagePage
PoolPage
Stages tab in web UI shows the current state of all stages of all jobs in a Spark application
(i.e. a SparkContext) with two optional pages for the tasks and statistics for a stage (when a
stage is selected) and pool details (when the application works in FAIR scheduling mode).
You can access the Stages tab under the /stages URL, i.e. http://localhost:4040/stages.
With no jobs submitted yet (and hence no stages to display), the page shows nothing but the
title.
The state sections are only displayed when there are stages in a given state.
Note
Refer to Stages for All Jobs.
In FAIR scheduling mode you have access to the table showing the scheduler pools.
The page uses the parent’s SparkUI to access required services, i.e. SparkContext,
SparkConf, JobProgressListener, RDDOperationGraphListener, and to know whether kill is
enabled or not.
killEnabled flag
Caution FIXME
SparkUI
AppStatusStore
handleKillRequest …FIXME
AllStagesPage — Stages for All Jobs
AllStagesPage shows all the stages in a Spark application - active, pending, completed, and failed stages with their count.
Figure 1. Stages Tab in web UI for FAIR scheduling mode (with pools only)
In FAIR scheduling mode you have access to the table showing the scheduler pools as well
as the pool names per stage.
Internally, AllStagesPage is a WebUIPage with access to the parent Stages tab and more
importantly the JobProgressListener to have access to current state of the entire Spark
application.
There are 4 different tables for the different states of stages - active, pending, completed,
and failed. They are displayed only when there are stages in a given state.
Figure 2. Stages Tab in web UI for FAIR scheduling mode (with pools and stages)
You can also notice "retry" for a stage that was retried.
StagePage — Stage Details
StagePage is a WebUIPage with stage prefix.
StagePage shows the task details for a stage given its id and attempt id.
StagePage uses ExecutorsListener to display stdout and stderr logs of the executors in
Tasks section.
Tasks Section
The section uses ExecutorsListener to access stdout and stderr logs for
Note
Executor ID / Host column.
The table consists of the following columns: Metric, Min, 25th percentile, Median, 75th
percentile, Max.
The 1st row is Duration which includes the quantiles based on executorRunTime .
The 2nd row is the optional Scheduler Delay which includes the time to ship the task from
the scheduler to executors, and the time to send the task result from the executors to the
scheduler. It is not enabled by default and you should select Scheduler Delay checkbox
under Show Additional Metrics to include it in the summary table.
The 3rd row is the optional Task Deserialization Time which includes the quantiles based
on executorDeserializeTime task metric. It is not enabled by default and you should select
Task Deserialization Time checkbox under Show Additional Metrics to include it in the
summary table.
The 4th row is GC Time which is the time that an executor spent paused for Java garbage
collection while the task was running (using jvmGCTime task metric).
The 5th row is the optional Result Serialization Time which is the time spent serializing the task result on an executor before sending it back to the driver (using
resultSerializationTime task metric). It is not enabled by default and you should select
Result Serialization Time checkbox under Show Additional Metrics to include it in the
summary table.
The 6th row is the optional Getting Result Time which is the time that the driver spends
fetching task results from workers. It is not enabled by default and you should select Getting
Result Time checkbox under Show Additional Metrics to include it in the summary table.
Tip
If Getting Result Time is large, consider decreasing the amount of data returned from each task.
If Tungsten is enabled (it is by default), the 7th row is the optional Peak Execution Memory
which is the sum of the peak sizes of the internal data structures created during shuffles,
aggregations and joins (using peakExecutionMemory task metric). For SQL jobs, this only
tracks all unsafe operators, broadcast joins, and external sort. It is not enabled by default
and you should select Peak Execution Memory checkbox under Show Additional Metrics
to include it in the summary table.
If the stage has an input, the 8th row is Input Size / Records which is the bytes and records
read from Hadoop or from a Spark storage (using inputMetrics.bytesRead and
inputMetrics.recordsRead task metrics).
If the stage has an output, the 9th row is Output Size / Records which is the bytes and
records written to Hadoop or to a Spark storage (using outputMetrics.bytesWritten and
outputMetrics.recordsWritten task metrics).
If the stage has shuffle read there will be three more rows in the table. The first row is
Shuffle Read Blocked Time which is the time that tasks spent blocked waiting for shuffle
data to be read from remote machines (using shuffleReadMetrics.fetchWaitTime task
metric). The other row is Shuffle Read Size / Records which is the total shuffle bytes and
records read (including both data read locally and data read from remote executors using
shuffleReadMetrics.totalBytesRead and shuffleReadMetrics.recordsRead task metrics). And
the last row is Shuffle Remote Reads which is the total shuffle bytes read from remote
executors (which is a subset of the shuffle read bytes; the remaining shuffle data is read
locally). It uses shuffleReadMetrics.remoteBytesRead task metric.
If the stage has shuffle write, the following row is Shuffle Write Size / Records (using
shuffleWriteMetrics.bytesWritten and shuffleWriteMetrics.recordsWritten task metrics).
If the stage has bytes spilled, the following two rows are Shuffle spill (memory) (using
memoryBytesSpilled task metric) and Shuffle spill (disk) (using diskBytesSpilled task
metric).
Request Parameters
id is…
attempt is…
Metrics
Scheduler Delay is…FIXME
Executor ID
Address
Task Time
Total Tasks
Failed Tasks
Killed Tasks
Succeeded Tasks
(optional) Input Size / Records (only when the stage has an input)
(optional) Output Size / Records (only when the stage has an output)
(optional) Shuffle Read Size / Records (only when the stage read bytes for a shuffle)
(optional) Shuffle Write Size / Records (only when the stage wrote bytes for a shuffle)
(optional) Shuffle Spill (Memory) (only when the stage spilled memory bytes)
(optional) Shuffle Spill (Disk) (only when the stage spilled bytes to disk)
Accumulators
Stage page displays the table with named accumulators (only if they exist). It contains the
name and value of the accumulators.
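A minimal example of a named accumulator that would show up in this table (the name and input path are arbitrary):

val badRecords = sc.longAccumulator("bad records")
sc.textFile("hdfs:///data/input.txt").foreach { line =>
  if (line.trim.isEmpty) badRecords.add(1)   // updated by tasks, aggregated on the driver
}
// the "bad records" accumulator (name and current value) appears on the stage page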
Parent StagesTab
AppStatusStore
PoolPage — Pool Details
The Fair Scheduler Pool Details page shows information about a Schedulable pool and is
only available when a Spark application uses the FAIR scheduling mode (which is controlled
by spark.scheduler.mode setting).
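A sketch of enabling the FAIR scheduling mode (the allocation file path is just an example):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")   // optional pool definitions
// pass conf to SparkContext / SparkSession when creating it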
PoolPage uses the parent’s SparkContext to access information about the pool and
order by default).
Summary Table
The Summary table shows the details of a Schedulable pool.
Pool Name
Minimum Share
Pool Weight
Running Tasks
SchedulingMode
All the columns are the attributes of a Schedulable except the number of active stages, which is
calculated using the list of active stages of a pool (from the parent’s JobProgressListener).
Stage Id
Description
Submitted
Duration
Tasks: Succeeded/Total
Shuffle Read — Total shuffle bytes and records read (includes both data read locally
and data read from remote executors).
The table uses JobProgressListener for information per stage in the pool.
Request Parameters
poolname
poolname is the name of the scheduler pool to display on the page. It is a mandatory
request parameter.
StorageTab
StorageTab is a SparkUITab with storage prefix.
Parent SparkUI
AppStatusStore
When created, StorageTab creates the following pages and attaches them immediately:
StoragePage
RDDPage
StoragePage
StoragePage is a WebUIPage with an empty prefix.
Parent SparkUITab
AppStatusStore
rddRow …FIXME
rddTable …FIXME
receiverBlockTables Method
receiverBlockTables …FIXME
render requests the AppStatusStore for rddList and renders an HTML table with their
render requests the AppStatusStore for streamBlocksList and renders an HTML table with
RDDPage
RDDPage is a WebUIPage with rdd prefix.
Parent SparkUITab
AppStatusStore
render Method
render …FIXME
EnvironmentTab
EnvironmentTab is a SparkUITab with environment prefix.
Parent SparkUI
AppStatusStore
EnvironmentPage
EnvironmentPage is a WebUIPage with an empty prefix.
Parent EnvironmentTab
SparkConf
AppStatusStore
ExecutorsTab
ExecutorsTab is a SparkUITab with executors prefix.
When created, ExecutorsTab creates the following pages and attaches them immediately:
ExecutorsPage
ExecutorThreadDumpPage
application.
ExecutorsPage
ExecutorsPage is a WebUIPage with an empty prefix.
Parent SparkUITab
threadDumpEnabled flag
ExecutorThreadDumpPage
ExecutorThreadDumpPage is a WebUIPage with threadDump prefix.
SparkUITab
Optional SparkContext
SparkUI — Web UI of Spark Application
application)
Name of the Spark application that is exactly the value of spark.app.name configuration
property
When started, SparkUI binds to appUIAddress address that you can control using
SPARK_PUBLIC_DNS environment variable or spark.driver.host Spark property.
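For example (the host name and address below are purely illustrative), either of:

$ SPARK_PUBLIC_DNS=ui.example.com ./bin/spark-shell
$ ./bin/spark-shell --conf spark.driver.host=10.0.0.5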
taskList
Refer to Logging.
stop(): Unit
stop stops the HTTP server and prints the following INFO message to the logs:
appUIAddress Method
appUIAddress: String
appUIAddress returns the entire URL of a Spark application’s web UI, including http://
scheme.
getSparkUser: String
getSparkUser returns the name of the user a Spark application runs as.
createLiveUI Method
createLiveUI(
sc: SparkContext,
conf: SparkConf,
listenerBus: SparkListenerBus,
jobProgressListener: JobProgressListener,
securityManager: SecurityManager,
appName: String,
startTime: Long): SparkUI
createHistoryUI Method
Caution FIXME
appUIHostPort Method
appUIHostPort: String
appUIHostPort returns the Spark application’s web UI address which is the public hostname and port.
getAppName Method
getAppName: String
getAppName returns the name of the Spark application (of a SparkUI instance).
create(
sc: Option[SparkContext],
store: AppStatusStore,
conf: SparkConf,
securityManager: SecurityManager,
appName: String,
basePath: String = "",
startTime: Long,
appSparkVersion: String = org.apache.spark.SPARK_VERSION): SparkUI
Internally, create simply creates a new SparkUI (with the predefined Spark version).
AppStatusStore
SparkContext
SparkConf
SecurityManager
Application name
basePath
Start time
appSparkVersion
SparkUI initializes the internal registries and counters and the tabs and handlers.
initialize(): Unit
initialize creates and attaches the following tabs (with the reference to the SparkUI and
its AppStatusStore):
1. JobsTab
2. StagesTab
3. StorageTab
4. EnvironmentTab
5. ExecutorsTab
1. Creates a static handler for serving files from a static directory, i.e. /static to serve
static files from org/apache/spark/ui/static directory (on CLASSPATH)
2. Creates a redirect handler to redirect / to /jobs/ (and so the Jobs tab is the
welcome tab when you open the web UI)
3. Creates the /api/* context handler for the Status REST API
SparkUITab
SparkUITab is the contract of WebUITab extensions with two additional properties:
appName
appSparkVersion
package org.apache.spark.ui
Table 2. SparkUITabs
SparkUITab Description
EnvironmentTab
ExecutorsTab
JobsTab
StagesTab
StorageTab
BlockStatusListener Spark Listener
blockManagers
The lookup table for a collection of BlockId and
BlockUIData per BlockManagerId.
onBlockManagerAdded
Registers a BlockManager in blockManagers internal
registry (with no blocks).
onBlockManagerRemoved
Removes a BlockManager from blockManagers internal
registry.
onBlockUpdated
Ignores updates for unregistered BlockManager s or
non- StreamBlockId s.
EnvironmentListener Spark Listener
Caution FIXME
ExecutorsListener Spark Listener
ExecutorsListener is a SparkListener that tracks executors and their tasks in a Spark application for the Stage Details page, Jobs tab and /allexecutors REST endpoint.
onExecutorBlacklisted FIXME
onExecutorUnblacklisted FIXME
onNodeBlacklisted FIXME
onNodeUnblacklisted FIXME
executorEvents
A collection of SparkListenerEvents. Used to build the event timeline in AllJobsPage and Details for Job pages.
updateExecutorBlacklist Method
Caution FIXME
Caution FIXME
Caution FIXME
Caution FIXME
Caution FIXME
defined) and finds the driver’s active StorageStatus (using the current
StorageStatusListener). onApplicationStart then uses the driver’s StorageStatus (if
defined) to set executorLogs .
onExecutorAdded finds the executor (using the input executorAdded ) in the internal
totalCores totalCores
removes the oldest event if the number of elements in executorEvents collection is greater
than spark.ui.timeline.executors.maximum configuration property.
SparkListenerTaskStart ).
Note onTaskEnd is part of SparkListener contract to announce that a task has ended.
tasksActive is decremented but only when the number of active tasks for the executor is
greater than 0 .
If the TaskMetrics (in the input taskEnd ) is available, the metrics are added to the
taskSummary for the task’s executor.
inputRecords inputMetrics.recordsRead
outputBytes outputMetrics.bytesWritten
outputRecords outputMetrics.recordsWritten
shuffleRead shuffleReadMetrics.remoteBytesRead
shuffleWrite shuffleWriteMetrics.bytesWritten
jvmGCTime metrics.jvmGCTime
activeStorageStatusList: Seq[StorageStatus]
executors).
FIXME
AllExecutorListResource does executorList
Note
ExecutorListResource does executorList
JobProgressListener Spark Listener
onExecutorMetricsUpdate
onBlockManagerAdded
Records an executor and its block manager in the internal
executorIdToBlockManagerId registry.
onBlockManagerRemoved
Removes the executor from the internal
executorIdToBlockManagerId registry.
Does nothing.
onTaskGettingResult
FIXME: Why is this event intercepted at all?!
updateAggregateMetrics Method
Caution FIXME
numFailedStages
stageIdToData
Holds StageUIData per stage, i.e. the stage and stage
attempt ids.
stageIdToInfo
stageIdToActiveJobIds
poolToActiveStages
activeJobs
completedJobs
failedJobs
jobIdToData
jobGroupToJobIds
pendingStages
activeStages
completedStages
skippedStages
failedStages
onJobStart Callback
onJobStart reads the optional Spark Job group id as spark.jobGroup.id (from properties in the input jobStart ).
onJobStart then creates a JobUIData using the input jobStart with the status attribute set to JobExecutionStatus.RUNNING .
onJobStart looks the job ids for the group id up (in jobGroupToJobIds registry) and adds the job id.
The internal pendingStages is updated with StageInfo for the stage id (for every StageInfo
in SparkListenerJobStart.stageInfos collection).
onJobEnd Method
onJobEnd removes the job from activeJobs registry. It removes stages from pendingStages
registry.
When completed successfully, the job is added to completedJobs registry with status
attribute set to JobExecutionStatus.SUCCEEDED . numCompletedJobs gets incremented.
When failed, the job is added to failedJobs registry with status attribute set to
JobExecutionStatus.FAILED . numFailedJobs gets incremented.
For every stage in the job, the stage is removed from the active jobs (in
stageIdToActiveJobIds) that can remove the entire entry if no active jobs exist.
onExecutorMetricsUpdate Method
onExecutorMetricsUpdate(executorMetricsUpdate: SparkListenerExecutorMetricsUpdate): Un
it
onTaskStart Method
onTaskStart looks the StageUIData for the stage and stage attempt ids up (in
stageIdToData registry).
stageData.taskData .
Ultimately, onTaskStart looks the stage in the internal stageIdToActiveJobIds and for each
active job reads its JobUIData (from jobIdToData). It then increments numActiveTasks .
onTaskEnd Method
onTaskEnd looks the StageUIData for the stage and stage attempt ids up (in stageIdToData
registry).
onTaskEnd reads the ExecutorSummary for the executor (the task has finished on).
Again, depending on the task end’s reason onTaskEnd computes errorMessage and
updates StageUIData .
Ultimately, onTaskEnd looks the stage in the internal stageIdToActiveJobIds and for each
active job reads its JobUIData (from jobIdToData). It then decrements numActiveTasks and
increments numCompletedTasks , numKilledTasks or numFailedTasks depending on the task’s
end reason.
onStageSubmitted Method
onStageCompleted Method
stageIdToInfo registry.
onStageCompleted looks the StageUIData for the stage and the stage attempt ids up in
stageIdToData registry.
If the stage completed successfully (i.e. has no failureReason ), onStageCompleted adds the
stage to completedStages registry and increments numCompletedStages counter. It trims
completedStages.
Otherwise, when the stage failed, onStageCompleted adds the stage to failedStages registry
and increments numFailedStages counter. It trims failedStages.
Ultimately, onStageCompleted looks the stage in the internal stageIdToActiveJobIds and for
each active job reads its JobUIData (from jobIdToData). It then decrements
numActiveStages . When completed successfully, it adds the stage to
JobUIData
Caution FIXME
blockManagerIds method
blockManagerIds: Seq[BlockManagerId]
Caution FIXME
StageUIData
Caution FIXME
Settings
Table 3. Spark Properties
spark.ui.retainedJobs (default: 1000 ): The number of jobs to hold information about
spark.ui.retainedStages (default: 1000 ): The number of stages to hold information about
spark.ui.retainedTasks (default: 100000 ): The number of tasks to hold information about
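For a long-running application you can lower these limits to reduce the driver's memory footprint, e.g. in conf/spark-defaults.conf (the values are arbitrary):

spark.ui.retainedJobs     200
spark.ui.retainedStages   500
spark.ui.retainedTasks    10000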
StorageStatusListener Spark Listener
Caution FIXME
storageStatusList: Seq[StorageStatus]
internal registry).
deadStorageStatusList Method
deadStorageStatusList: Seq[StorageStatus]
updateStorageStatus(unpersistedRDDId: Int)
updateStorageStatus then finds RDD blocks for unpersistedRDDId RDD (for every
StorageListener — Spark Listener for Tracking Persistence Status of RDD Blocks
StorageStatusListener
activeStorageStatusList: Seq[StorageStatus]
executors).
updates registered RDDInfos (with block updates from BlockManagers) (passing in BlockId
and BlockStatus as a single-element collection of updated blocks).
onStageCompleted finds the identifiers of the RDDs that have participated in the completed
stage and removes them from _rddInfoMap registry as well as the RDDs that are no longer
cached.
stageSubmitted , possibly adding new RDDInfo instances if they were not registered yet.
onUnpersistRDD removes the RDDInfo from _rddInfoMap registry for the unpersisted RDD
(from unpersistRDD ).
updateRDDInfo finds the RDDs for the input updatedBlocks (for BlockIds).
updateRDDInfo takes RDDInfo entries (in _rddInfoMap registry) for which there are blocks in
Caution FIXME
RDDOperationGraphListener Spark Listener
Caution FIXME
WebUI — Framework For Web UIs
WebUI — Base Web UI
WebUI is the base of the web UIs in Apache Spark:
Note Spark on YARN uses a different web framework for the web UI.
package org.apache.spark.ui
WebUI is a Scala abstract class and cannot be created directly, but only as one of the web
UIs.
Table 2. WebUIs
WebUI Description
Once bound to a Jetty HTTP server, WebUI is available at an HTTP port (and is used in the
web URL as boundPort ).
publicHostName is…FIXME and boundPort is the port the Jetty HTTP server bound to.
WebUITabs
tabs
Used when…FIXME
ServletContextHandlers
handlers
Used when…FIXME
className
Used when…FIXME
Enable INFO or ERROR logging level for the corresponding loggers of the
WebUIs, e.g. org.apache.spark.ui.SparkUI , to see what happens inside.
Refer to Logging.
SecurityManager
SSLOptions
Port number
SparkConf
WebUI is a Scala abstract class and cannot be created directly, but only as one
Note
of the implementations.
detachPage …FIXME
detachTab …FIXME
detachHandler …FIXME
detachHandler …FIXME
Internally, attachPage creates the path of the WebUIPage that is / (forward slash)
followed by the prefix of the page.
addStaticHandler …FIXME
attachHandler simply adds the input Jetty ServletContextHandler to handlers registry and
getBasePath Method
getBasePath: String
getTabs: Seq[WebUITab]
Note getTabs is used exclusively when WebUITab is requested for the header tabs.
getHandlers: Seq[ServletContextHandler]
bind(): Unit
bind …FIXME
stop(): Unit
stop …FIXME
WebUIPage — Contract of Pages in Web UI
JSON.
attached to a WebUITab
package org.apache.spark.ui
render
Used exclusively when WebUI is requested to attach a
page (and…FIXME)
Table 2. WebUIPages
WebUIPage Description
AllExecutionsPage Used in Spark SQL module
AllJobsPage
AllStagesPage
EnvironmentPage
ExecutorsPage
ExecutorThreadDumpPage
JobPage
PoolPage
RDDPage
StagePage
StoragePage
WebUITab — Contract of Tabs in Web UI
attached to a WebUITab
attachPage prepends the page prefix (of the input WebUIPage ) with the tab prefix (with no
basePath: String
headerTabs: Seq[WebUITab]
Parent WebUI
Prefix
WebUITab is a Scala abstract class and cannot be created directly, but only as
Note
one of the implementations.
RDDStorageInfo
RDDStorageInfo contains information about RDD persistence:
RDD id
RDD name
Storage level ID
Memory used
Disk used
requested to write).
1. web UI’s StoragePage is requested to render an HTML table row and an entire table for
RDD details
RDDInfo
RDDInfo is…FIXME
LiveEntity
LiveEntity is the contract of a live entity in Spark that…FIXME
package org.apache.spark.status
LiveEntity tracks the last write time (in lastWriteTime internal registry).
write Method
LiveRDD
LiveRDD is a LiveEntity that…FIXME
onStageSubmitted event
doUpdate Method
doUpdate(): Any
doUpdate …FIXME
UIUtils
UIUtils is a utility object for…FIXME
headerSparkPage Method
headerSparkPage(
request: HttpServletRequest,
title: String,
content: => Seq[Node],
activeTab: SparkUITab,
refreshInterval: Option[Int] = None,
helpText: Option[String] = None,
showVisualization: Boolean = false,
useDataTables: Boolean = false): Seq[Node]
headerSparkPage …FIXME
JettyUtils
JettyUtils is a set of utility methods for creating Jetty HTTP Server-specific components.
createRedirectHandler
createServletHandler(
path: String,
servlet: HttpServlet,
basePath: String): ServletContextHandler (1)
createServletHandler[T <: AnyRef](
path: String,
servletParams: ServletParams[T],
securityMgr: SecurityManager,
conf: SparkConf,
basePath: String = ""): ServletContextHandler (2)
createServletHandler …FIXME
createServlet creates the X-Frame-Options header that can be either ALLOW-FROM (with the value of the spark.ui.allowFramingFrom configuration property, when defined) or SAMEORIGIN.
createServlet creates a Java Servlets HttpServlet with support for GET requests.
When handling GET requests, the HttpServlet first checks view permissions of the remote
user (by requesting the SecurityManager to checkUIViewPermissions of the remote user).
Tip
Enable DEBUG logging level for the org.apache.spark.SecurityManager logger to see what happens when SecurityManager does the security check.
log4j.logger.org.apache.spark.SecurityManager=DEBUG
With view permissions check passed, the HttpServlet sends a response with the following:
FIXME
In case the view permissions didn’t allow to view the page, the HttpServlet sends an error
response with the following:
Status 403
successful, sets resourceBase init parameter of the Jetty DefaultServlet to the URL.
Note resourceBase init parameter is used to replace the context resource base.
resolved.
createRedirectHandler Method
createRedirectHandler(
srcPath: String,
destPath: String,
beforeRedirect: HttpServletRequest => Unit = x => (),
basePath: String = "",
httpMethods: Set[String] = Set("GET")): ServletContextHandler
createRedirectHandler …FIXME
web UI Configuration Properties
spark.ui.consoleProgress.update.interval (default: 200 ms): Update interval, i.e. how often to show the progress.
deadExecutorStorageStatus (in StorageStatusListener ) internal registries.
spark.ui.timeline.tasks.maximum (default: 1000 )
Spark Metrics
Spark Metrics gives you execution metrics of Spark subsystems (aka metrics instances),
e.g. the driver of a Spark application or the master of a Spark Standalone cluster.
Spark Metrics uses Dropwizard Metrics 3.1.0 Java library for the metrics infrastructure.
Metrics is a Java library which gives you unparalleled insight into what your code does
in production.
MetricsSystem uses Dropwizard Metrics' MetricRegistry that acts as the integration point
configuration properties.
Among the metrics sinks is MetricsServlet that is used when sink.servlet metrics sink is
configured in metrics configuration.
You can then use jconsole to access Spark metrics through JMX.
*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink
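A slightly fuller conf/metrics.properties sketch that enables the JMX sink for all instances and a periodic console sink for the driver (the polling period and unit are arbitrary):

*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink
driver.sink.console.class=org.apache.spark.metrics.sink.ConsoleSink
driver.sink.console.period=10
driver.sink.console.unit=seconds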
Content-Type: text/json;charset=utf-8
Date: Sat, 25 Feb 2017 14:14:16 GMT
Server: Jetty(9.2.z-SNAPSHOT)
X-Frame-Options: SAMEORIGIN
{
"counters": {
"app-20170225151406-0000.driver.HiveExternalCatalog.fileCacheHits": {
"count": 0
},
"app-20170225151406-0000.driver.HiveExternalCatalog.filesDiscovered": {
"count": 0
},
"app-20170225151406-0000.driver.HiveExternalCatalog.hiveClientCalls": {
"count": 2
},
"app-20170225151406-0000.driver.HiveExternalCatalog.parallelListingJobCount":
{
"count": 0
},
"app-20170225151406-0000.driver.HiveExternalCatalog.partitionsFetched": {
"count": 0
}
},
"gauges": {
...
"timers": {
"app-20170225151406-0000.driver.DAGScheduler.messageProcessingTime": {
"count": 0,
"duration_units": "milliseconds",
"m15_rate": 0.0,
"m1_rate": 0.0,
"m5_rate": 0.0,
"max": 0.0,
"mean": 0.0,
"mean_rate": 0.0,
"min": 0.0,
"p50": 0.0,
"p75": 0.0,
"p95": 0.0,
"p98": 0.0,
"p99": 0.0,
"p999": 0.0,
"rate_units": "calls/second",
"stddev": 0.0
}
},
"version": "3.0.0"
}
Note You have to use the trailing slash ( / ) to have the output.
$ http http://192.168.1.4:8080/metrics/master/json/path
HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, must-revalidate
Content-Length: 207
Content-Type: text/json;charset=UTF-8
Server: Jetty(8.y.z-SNAPSHOT)
X-Frame-Options: SAMEORIGIN
{
"counters": {},
"gauges": {
"master.aliveWorkers": {
"value": 0
},
"master.apps": {
"value": 0
},
"master.waitingApps": {
"value": 0
},
"master.workers": {
"value": 0
}
},
"histograms": {},
"meters": {},
"timers": {},
"version": "3.0.0"
}
MetricsSystem
by default).
metricsConfig - MetricsConfig. Initialized when MetricsSystem is created. Used when MetricsSystem registers sinks and sources.
Refer to Logging.
registerSource creates an identifier for the metrics source and registers it with
MetricRegistry.
When registerSource tries to register a name more than once, you should see the following
INFO message in the logs:
DAGScheduler
BlockManager
registerSources(): Unit
registerSources finds the configuration of all the metrics sources for the subsystem (in the metrics configuration).
For every metrics source, registerSources finds the class property, creates an instance, and in the end registers it.
When registerSources fails, you should see the following ERROR message in the logs
followed by the exception.
getServletHandlers: Array[ServletContextHandler]
If the MetricsSystem is running and the MetricsServlet is defined for the metrics system,
getServletHandlers simply requests the MetricsServlet for the JSON servlet handler.
Note
getServletHandlers is used when SparkContext is created and when Spark Standalone's Master and Worker are requested to start (as onStart ).
registerSinks(): Unit
registerSinks requests the MetricsConfig for the configuration of all metrics sinks (i.e. properties with the sink. prefix).
For every metrics sink configuration, registerSinks takes the class property and (if defined) creates an instance of the metrics sink using a constructor that takes the configuration, MetricRegistry and SecurityManager.
For a single servlet metrics sink, registerSinks converts the sink to a MetricsServlet and
sets the metricsServlet internal registry.
For all other metrics sinks, registerSinks adds the sink to the sinks internal registry.
In case of an Exception , registerSinks prints out the following ERROR message to the
logs:
stop Method
stop(): Unit
stop …FIXME
getSourcesByName Method
getSourcesByName …FIXME
removeSource Method
removeSource …FIXME
MetricsSystem takes the following when created:
Instance name
SparkConf
SecurityManager
createMetricsSystem(
  instance: String,
  conf: SparkConf,
  securityMgr: SecurityManager): MetricsSystem
report(): Unit
start(): Unit
start registers the "static" metrics sources for Spark SQL, i.e. CodegenMetrics and
HiveCatalogMetrics .
start then registers the configured metrics sources and sinks for the Spark instance.
SparkContext is created
MetricsConfig — Metrics System Configuration
MetricsConfig uses metrics.properties as the default metrics configuration file, which can be configured using the spark.metrics.conf configuration property. The file is first loaded from the path directly before using Spark's CLASSPATH.
MetricsConfig also accepts a metrics configuration defined using spark.metrics.conf.-prefixed configuration properties.
MetricsConfig makes sure that the default metrics properties are always defined.
*.sink.servlet.path /metrics/json
master.sink.servlet.path /metrics/master/json
applications.sink.servlet.path /metrics/applications/json
initialize(): Unit
initialize sets the default properties and loads the configuration properties from a configuration file (if available).
initialize takes all Spark properties that start with the spark.metrics.conf. prefix from the SparkConf (with the prefix removed).
In the end, initialize splits the configuration per Spark subsystem, with the default configuration (denoted as * ) assigned to all subsystems afterwards.
loadPropertiesFromFile tries to open the input path file (if defined) or the default metrics configuration file.
If either file is available, loadPropertiesFromFile loads the properties (to properties registry).
In case of exceptions, you should see the following ERROR message in the logs followed by
the exception.
subProperties takes prop properties and destructures keys given regex . subProperties
takes the matching prefix (of a key per regex ) and uses it as a new key with the value(s)
being the matching suffix(es).
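A minimal sketch (not Spark's actual implementation) of this kind of key destructuring, assuming a regex whose first group captures the prefix, could look as follows:

import java.util.Properties
import scala.collection.mutable
import scala.util.matching.Regex

// Sketch only: group properties by the prefix captured by the regex,
// with the remainder of every key (the suffix) used as the new key.
def splitByPrefix(props: Properties, regex: Regex): mutable.Map[String, Properties] = {
  val result = mutable.Map.empty[String, Properties]
  props.stringPropertyNames.forEach { key =>
    regex.findPrefixMatchOf(key).foreach { m =>
      val prefix = m.group(1)
      val suffix = key.substring(m.matched.length)
      result.getOrElseUpdate(prefix, new Properties()).setProperty(suffix, props.getProperty(key))
    }
  }
  result
}

val props = new Properties()
props.setProperty("*.sink.servlet.path", "/metrics/json")
props.setProperty("master.sink.servlet.path", "/metrics/master/json")

// "master.sink.servlet.path" ends up under prefix "master" with key "sink.servlet.path"
val perInstance = splitByPrefix(props, "^(\\*|[a-zA-Z]+)\\.".r)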
getInstance Method
getInstance …FIXME
Source — Contract of Metrics Sources
package org.apache.spark.metrics.source
trait Source {
def sourceName: String
def metricRegistry: MetricRegistry
}
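For illustration only, a custom metrics source could look like the sketch below (the Source trait is private[spark], so a real implementation has to live under an org.apache.spark package; the class and metric names are made up). Such a source is then registered with a MetricsSystem using registerSource (described above).

package org.apache.spark.metrics.source

import com.codahale.metrics.{Gauge, MetricRegistry}

// A hypothetical metrics source that publishes a single gauge (sketch only).
class MyAppSource extends Source {
  override val sourceName: String = "myApp"
  override val metricRegistry: MetricRegistry = new MetricRegistry

  metricRegistry.register(MetricRegistry.name("activeSessions"), new Gauge[Int] {
    override def getValue: Int = 42 // replace with a real measurement
  })
}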
Table 2. Sources
Source Description
ApplicationSource
BlockManagerSource
CacheMetrics
CodegenMetrics
DAGSchedulerSource
ExecutorAllocationManagerSource
ExecutorSource
ExternalShuffleServiceSource
HiveCatalogMetrics
JvmSource
LiveListenerBusMetrics
MasterSource
MesosClusterSchedulerSource
ShuffleMetricsSource
StreamingSource
WorkerSource
Sink — Contract of Metrics Sinks
package org.apache.spark.metrics.sink
trait Sink {
def start(): Unit
def stop(): Unit
def report(): Unit
}
Table 2. Sinks
Sink Description
ConsoleSink
CsvSink
GraphiteSink
JmxSink
MetricsServlet
Slf4jSink
StatsdSink
MetricsServlet JSON Metrics Sink
MetricsServlet is a "special" sink as it is only available to the metrics instances with a web UI (i.e. the driver of a Spark application, and Spark Standalone's Master and Worker).
You can access the metrics from MetricsServlet at /metrics/json URI by default. The entire
URL depends on a metrics instance, e.g. http://localhost:4040/metrics/json/ for a running
Spark application.
$ http http://localhost:4040/metrics/json/
HTTP/1.1 200 OK
Cache-Control: no-cache, no-store, must-revalidate
Content-Length: 5005
Content-Type: text/json;charset=utf-8
Date: Mon, 11 Jun 2018 06:29:03 GMT
Server: Jetty(9.3.z-SNAPSHOT)
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-XSS-Protection: 1; mode=block
{
"counters": {
"local-1528698499919.driver.HiveExternalCatalog.fileCacheHits": {
"count": 0
},
"local-1528698499919.driver.HiveExternalCatalog.filesDiscovered": {
"count": 0
},
"local-1528698499919.driver.HiveExternalCatalog.hiveClientCalls": {
"count": 0
},
"local-1528698499919.driver.HiveExternalCatalog.parallelListingJobCount": {
"count": 0
},
"local-1528698499919.driver.HiveExternalCatalog.partitionsFetched": {
"count": 0
},
"local-1528698499919.driver.LiveListenerBus.numEventsPosted": {
"count": 7
},
"local-1528698499919.driver.LiveListenerBus.queue.appStatus.numDroppedEvents":
{
"count": 0
},
"local-1528698499919.driver.LiveListenerBus.queue.executorManagement.numDroppe
dEvents": {
"count": 0
}
},
...
MetricsServlet can be configured using configuration properties with sink.servlet prefix (in
metrics configuration). That is not required since MetricsConfig makes sure that
MetricsServlet is always configured.
MetricsServlet uses jackson-databind, the general data-binding package for Jackson (as
ObjectMapper) with Dropwizard Metrics library (i.e. registering a Coda Hale MetricsModule ).
sample (default: false ) - whether to show the entire set of samples for histograms.
SecurityManager
getHandlers returns just a single ServletContextHandler (in a collection) that gives metrics in JSON format at the configured servlet path.
Metrics Configuration Properties
Status REST API — Monitoring Spark Applications Using REST API
SparkUI - Application UI for an active Spark application (i.e. a Spark application that is
still running)
HistoryServer - Application UI for active and completed Spark applications (i.e. Spark
applications that are still running or have already finished)
Status REST API uses ApiRootResource main resource class that registers /api/v1 URI
path and the subpaths.
Jersey RESTful Web Services framework with support for the Java API for RESTful
Web Services (JAX-RS API)
Eclipse Jetty as the lightweight HTTP server and the Java Servlet container
ApiRootResource — /api/v1 URI Handler
ApiRootResource uses @Path("/v1") annotation at the class level. It is a partial URI path
template relative to the base URI of the server on which the resource is deployed, the
context root of the application, and the URL pattern to which the JAX-RS runtime responds.
Tip
Learn more about the @Path annotation in The @Path Annotation and URI Path Templates.
ApiRootResource registers the /api/* context handler (with the REST resources and providers).
With the @Path("/v1") annotation and after registering the /api/* context handler, ApiRootResource serves HTTP requests for paths under the /api/v1 URI path for SparkUI and HistoryServer.
ApiRootResource gives the metrics of a Spark application in JSON format (using JAX-RS
API).
// start spark-shell
$ http http://localhost:4040/api/v1/applications
HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 257
Content-Type: application/json
Date: Tue, 05 Jun 2018 18:36:16 GMT
Server: Jetty(9.3.z-SNAPSHOT)
Vary: Accept-Encoding, User-Agent
[
{
"attempts": [
{
"appSparkVersion": "2.3.1-SNAPSHOT",
"completed": false,
"duration": 0,
"endTime": "1969-12-31T23:59:59.999GMT",
"endTimeEpoch": -1,
"lastUpdated": "2018-06-05T15:04:48.328GMT",
"lastUpdatedEpoch": 1528211088328,
"sparkUser": "jacek",
"startTime": "2018-06-05T15:04:48.328GMT",
"startTimeEpoch": 1528211088328
}
],
"id": "local-1528211089216",
"name": "Spark shell"
}
]
{
"spark": "2.3.1"
}
applications - delegates to the ApplicationListResource resource class
applications/{appId} - delegates to the OneApplicationResource resource class
ApplicationListResource — applications URI Handler
ApplicationListResource is an ApiRequestContext that ApiRootResource uses to handle the applications URI path.
// start spark-shell
// there should be a single Spark application -- the spark-shell itself
$ http http://localhost:4040/api/v1/applications
HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 255
Content-Type: application/json
Date: Wed, 06 Jun 2018 12:40:33 GMT
Server: Jetty(9.3.z-SNAPSHOT)
Vary: Accept-Encoding, User-Agent
[
{
"attempts": [
{
"appSparkVersion": "2.3.1-SNAPSHOT",
"completed": false,
"duration": 0,
"endTime": "1969-12-31T23:59:59.999GMT",
"endTimeEpoch": -1,
"lastUpdated": "2018-06-06T12:30:19.220GMT",
"lastUpdatedEpoch": 1528288219220,
"sparkUser": "jacek",
"startTime": "2018-06-06T12:30:19.220GMT",
"startTimeEpoch": 1528288219220
}
],
"id": "local-1528288219790",
"name": "Spark shell"
}
]
isAttemptInRange(
attempt: ApplicationAttemptInfo,
minStartDate: SimpleDateParam,
maxStartDate: SimpleDateParam,
minEndDate: SimpleDateParam,
maxEndDate: SimpleDateParam,
anyRunning: Boolean): Boolean
isAttemptInRange …FIXME
appList Method
appList(
@QueryParam("status") status: JList[ApplicationStatus],
@DefaultValue("2010-01-01") @QueryParam("minDate") minDate: SimpleDateParam,
@DefaultValue("3000-01-01") @QueryParam("maxDate") maxDate: SimpleDateParam,
@DefaultValue("2010-01-01") @QueryParam("minEndDate") minEndDate: SimpleDateParam,
@DefaultValue("3000-01-01") @QueryParam("maxEndDate") maxEndDate: SimpleDateParam,
@QueryParam("limit") limit: Integer)
: Iterator[ApplicationInfo]
appList …FIXME
OneApplicationResource — applications/appId URI Handler
OneApplicationResource is an AbstractApplicationResource (and so an ApiRequestContext indirectly) that handles the applications/{appId} URI path.
// start spark-shell
// there should be a single Spark application -- the spark-shell itself
$ http http://localhost:4040/api/v1/applications
HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 255
Content-Type: application/json
Date: Wed, 06 Jun 2018 12:40:33 GMT
Server: Jetty(9.3.z-SNAPSHOT)
Vary: Accept-Encoding, User-Agent
[
{
"attempts": [
{
"appSparkVersion": "2.3.1-SNAPSHOT",
"completed": false,
"duration": 0,
"endTime": "1969-12-31T23:59:59.999GMT",
"endTimeEpoch": -1,
"lastUpdated": "2018-06-06T12:30:19.220GMT",
"lastUpdatedEpoch": 1528288219220,
"sparkUser": "jacek",
"startTime": "2018-06-06T12:30:19.220GMT",
"startTimeEpoch": 1528288219220
}
],
"id": "local-1528288219790",
"name": "Spark shell"
}
]
$ http http://localhost:4040/api/v1/applications/local-1528288219790
HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 255
Content-Type: application/json
Date: Wed, 06 Jun 2018 12:41:43 GMT
Server: Jetty(9.3.z-SNAPSHOT)
Vary: Accept-Encoding, User-Agent
{
"attempts": [
{
"appSparkVersion": "2.3.1-SNAPSHOT",
"completed": false,
"duration": 0,
"endTime": "1969-12-31T23:59:59.999GMT",
"endTimeEpoch": -1,
"lastUpdated": "2018-06-06T12:30:19.220GMT",
"lastUpdatedEpoch": 1528288219220,
"sparkUser": "jacek",
"startTime": "2018-06-06T12:30:19.220GMT",
"startTimeEpoch": 1528288219220
}
],
"id": "local-1528288219790",
"name": "Spark shell"
}
getApp Method
getApp(): ApplicationInfo
getApp requests the UIRoot for the application info (given the appId).
StagesResource
StagesResource is…FIXME
(root path) - GET - stageList
{stageId: \d+}/{stageAttemptId: \d+} - GET - oneAttemptData
{stageId: \d+}/{stageAttemptId: \d+}/taskSummary - GET - taskSummary
{stageId: \d+}/{stageAttemptId: \d+}/taskList - GET - taskList
stageList Method
stageList …FIXME
stageData Method
stageData(
@PathParam("stageId") stageId: Int,
@QueryParam("details") @DefaultValue("true") details: Boolean): Seq[StageData]
stageData …FIXME
oneAttemptData Method
oneAttemptData(
@PathParam("stageId") stageId: Int,
@PathParam("stageAttemptId") stageAttemptId: Int,
@QueryParam("details") @DefaultValue("true") details: Boolean): StageData
oneAttemptData …FIXME
taskSummary Method
taskSummary(
  @PathParam("stageId") stageId: Int,
  @PathParam("stageAttemptId") stageAttemptId: Int,
  @DefaultValue("0.05,0.25,0.5,0.75,0.95") @QueryParam("quantiles") quantileString: String)
: TaskMetricDistributions
taskSummary …FIXME
taskList Method
taskList(
@PathParam("stageId") stageId: Int,
@PathParam("stageAttemptId") stageAttemptId: Int,
@DefaultValue("0") @QueryParam("offset") offset: Int,
@DefaultValue("20") @QueryParam("length") length: Int,
@DefaultValue("ID") @QueryParam("sortBy") sortBy: TaskSorting): Seq[TaskData]
taskList …FIXME
OneApplicationAttemptResource
OneApplicationAttemptResource is an AbstractApplicationResource (and so an ApiRequestContext indirectly) that handles a single application attempt (applicationAttempt).
// start spark-shell
// there should be a single Spark application -- the spark-shell itself
// CAUTION: FIXME Demo of OneApplicationAttemptResource in Action
getAttempt Method
getAttempt(): ApplicationAttemptInfo
getAttempt requests the UIRoot for the application info (given the appId) and finds the attempt (given the attemptId).
AbstractApplicationResource
AbstractApplicationResource is a BaseAppResource with a set of URI paths that are common to the application resources.
// start spark-shell
$ http http://localhost:4040/api/v1/applications
HTTP/1.1 200 OK
Content-Encoding: gzip
Content-Length: 257
Content-Type: application/json
Date: Tue, 05 Jun 2018 18:46:32 GMT
Server: Jetty(9.3.z-SNAPSHOT)
Vary: Accept-Encoding, User-Agent
[
{
"attempts": [
{
"appSparkVersion": "2.3.1-SNAPSHOT",
"completed": false,
"duration": 0,
"endTime": "1969-12-31T23:59:59.999GMT",
"endTimeEpoch": -1,
"lastUpdated": "2018-06-05T15:04:48.328GMT",
"lastUpdatedEpoch": 1528211088328,
"sparkUser": "jacek",
"startTime": "2018-06-05T15:04:48.328GMT",
"startTimeEpoch": 1528211088328
}
],
"id": "local-1528211089216",
"name": "Spark shell"
}
]
$ http http://localhost:4040/api/v1/applications/local-1528211089216/storage/rdd
HTTP/1.1 200 OK
Content-Length: 3
Content-Type: application/json
Date: Tue, 05 Jun 2018 18:48:00 GMT
Server: Jetty(9.3.z-SNAPSHOT)
Vary: Accept-Encoding, User-Agent
[]
$ http http://localhost:4040/api/v1/applications/local-1528211089216/storage/rdd
// output omitted for brevity
Table 1. AbstractApplicationResources
AbstractApplicationResource Description
OneApplicationAttemptResource
stages - stages
storage/rdd/{rddId: \d+} - GET - rddData
rddList Method
rddList(): Seq[RDDStorageInfo]
rddList …FIXME
environmentInfo Method
environmentInfo(): ApplicationEnvironmentInfo
environmentInfo …FIXME
rddData Method
rddData …FIXME
allExecutorList Method
allExecutorList(): Seq[ExecutorSummary]
allExecutorList …FIXME
executorList Method
executorList(): Seq[ExecutorSummary]
executorList …FIXME
oneJob Method
oneJob …FIXME
jobsList Method
jobsList …FIXME
BaseAppResource
BaseAppResource is the contract of ApiRequestContexts that can withUI and use the appId and attemptId path parameters in URI paths.
@PathParam("attemptId") attemptId - Used when…FIXME
Table 2. BaseAppResources
BaseAppResource Description
AbstractApplicationResource
BaseStreamingAppResource
StagesResource
withUI Method
withUI …FIXME
ApiRequestContext
ApiRequestContext is the contract of…FIXME
package org.apache.spark.status.api.v1
trait ApiRequestContext {
// only required methods that have no implementation
// the others follow
@Context
var servletContext: ServletContext = _
@Context
var httpRequest: HttpServletRequest = _
}
Table 2. ApiRequestContexts
ApiRequestContext Description
ApiRootResource
ApiStreamingApp
ApplicationListResource
BaseAppResource
SecurityFilter
uiRoot: UIRoot
uiRoot simply requests UIRootFromServletContext to get the current UIRoot (for the given
servletContext).
UIRoot — Contract for Root Containers of Application UI Information
package org.apache.spark.status.api.v1
trait UIRoot {
// only required methods that have no implementation
// the others follow
def withSparkUI[T](appId: String, attemptId: Option[String])(fn: SparkUI => T): T
def getApplicationInfoList: Iterator[ApplicationInfo]
def getApplicationInfo(appId: String): Option[ApplicationInfo]
def securityManager: SecurityManager
}
withSparkUI - used exclusively when BaseAppResource is requested to withUI
Table 2. UIRoots
HistoryServer - Application UI for active and completed Spark applications (i.e. Spark applications that are still running or have already finished)
writeEventLogs Method
writeEventLogs …FIXME
UIRootFromServletContext
UIRootFromServletContext manages the current UIRoot object in a Jetty ContextHandler .
UIRootFromServletContext uses its canonical name for the context attribute that is used to store and look up the current UIRoot.
setUiRoot Method
setUiRoot …FIXME
getUiRoot Method
getUiRoot …FIXME
Spark MLlib — Machine Learning in Spark
Caution
I’m new to Machine Learning as a discipline and Spark MLlib in particular, so mistakes in this document are considered a norm (not an exception).
You can find the following types of machine learning algorithms in MLlib:
Classification
Regression
Recommendation
Clustering
Statistics
Linear Algebra
Pipelines
Machine Learning uses large datasets to identify (infer) patterns and make decisions (aka
predictions). Automated decision making is what makes Machine Learning so appealing.
You can teach a system from a dataset and let the system act by itself to predict the future.
The amount of data (measured in TB or PB) is what makes Spark MLlib especially important
since a human could not possibly extract much value from the dataset in a short time.
Spark handles data distribution and makes the huge data available by means of RDDs,
DataFrames, and recently Datasets.
Use cases for Machine Learning (and hence Spark MLlib that comes with appropriate
algorithms):
Operational optimizations
Concepts
This section introduces the concepts of Machine Learning and how they are modeled in
Spark MLlib.
Observation
An observation is used to learn about or evaluate (i.e. draw conclusions about) the
observed item’s target value.
Feature
A feature (aka dimension or variable) is an attribute of an observation. It is an independent
variable.
Spark models features as columns in a DataFrame (one per feature or a set of features). There are two classes of features:
Categorical with discrete values, i.e. the set of possible values is limited, and can range
from one to many thousands. There is no ordering implied, and so the values are
incomparable.
Numerical with quantitative values, i.e. any numerical values that you can compare to
each other. You can further classify them into discrete and continuous features.
Label
A label is a variable that is assigned to observations and that a machine learning system learns to predict.
FP-growth Algorithm
Spark 1.5 significantly improved on frequent pattern mining capabilities with new algorithms for association rule generation and sequential pattern mining.
Frequent Itemset Mining using the Parallel FP-growth algorithm (since Spark 1.3)
finds popular routing paths that generate most traffic in a particular region
the algorithm looks for common subsets of items that appear across transactions,
e.g. sub-paths of the network that are frequently traversed.
A naive solution: generate all possible itemsets and count their occurrence
the algorithm finds all frequent itemsets without generating and testing all
candidates
A retailer could then use this information to put both toothbrush and floss on sale but raise the price of toothpaste to increase the overall profit.
FPGrowth model
extract frequent sequential patterns like routing updates, activation failures, and
broadcasting timeouts that could potentially lead to customer complaints and
proactively reach out to customers when it happens.
Power Iteration Clustering (PIC) in MLlib, a simple and scalable graph clustering
method
org.apache.spark.mllib.clustering.PowerIterationClustering
a graph algorithm
takes an undirected graph with similarities defined on edges and outputs clustering
assignment on nodes
The edge properties are cached and remain static during the power iterations.
New MLlib Algorithms in Spark 1.3: FP-Growth and Power Iteration Clustering
(video) GOTO 2015 • A Taste of Random Decision Forests on Apache Spark • Sean
Owen
ML Pipelines (spark.ml)
ML Pipeline API (aka Spark ML or spark.ml due to the package the API lives in) lets Spark
users quickly and easily assemble and configure practical distributed Machine Learning
pipelines (aka workflows) by standardizing the APIs for different Machine Learning concepts.
Note
Both scikit-learn and GraphLab have the concept of pipelines built into their systems.
Pipeline
PipelineStage
Transformers
Models
Estimators
Evaluator
You may also think of two additional steps before the final model becomes production ready
and hence of any use:
You use a collection of Transformer instances to prepare input DataFrame - the dataset
with proper input data (in columns) for a chosen ML algorithm.
With a Model you can calculate predictions (in prediction column) on features input
column through DataFrame transformation.
Example: In text classification, preprocessing steps like n-gram extraction, and TF-IDF
feature weighting are often necessary before training of a classification model like an SVM.
Upon deploying a model, your system must not only know the SVM weights to apply to input
features, but also transform raw data into the format the model is trained on.
Components of ML Pipeline:
Pipelines become objects that can be saved out and applied in real-time to new
data.
You could persist (i.e. save to a persistent storage) or unpersist (i.e. load from a
persistent storage) ML components as described in Persisting Machine Learning
Components.
Parameter tuning
Pipelines
A ML pipeline (or a ML workflow) is a sequence of Transformers and Estimators to fit a
PipelineModel to an input dataset.
import org.apache.spark.ml.Pipeline
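As a sketch (the stages, column names and training data below are assumptions for illustration, not from the original), a pipeline chains feature transformers with a final estimator, and fit produces a PipelineModel:

// in spark-shell
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.ml.classification.LogisticRegression

val tok = new Tokenizer().setInputCol("text").setOutputCol("words")
val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
val lr = new LogisticRegression().setMaxIter(10)

val pipeline = new Pipeline().setStages(Array(tok, hashingTF, lr))

val training = Seq((0L, "spark is great", 1.0), (1L, "something else", 0.0))
  .toDF("id", "text", "label")

val pipelineModel = pipeline.fit(training) // a PipelineModel, i.e. a Transformer
pipelineModel.transform(training).select("text", "prediction").show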
The Pipeline companion object can read or load pipelines (refer to the Persisting Machine Learning Components page) and so return Pipeline instances.
read: MLReader[Pipeline]
load(path: String): Pipeline
You can create a Pipeline with an optional uid identifier. It is of the format
pipeline_[randomUid] when unspecified.
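The outputs below can be reproduced along these lines (a sketch; in spark-shell the second val simply re-binds the name):

import org.apache.spark.ml.Pipeline

// generated uid, e.g. pipeline_94be47c3b709
val pipeline = new Pipeline()

// explicit uid
val pipeline = new Pipeline("my_pipeline")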
scala> println(pipeline.uid)
pipeline_94be47c3b709
scala> println(pipeline.uid)
my_pipeline
The fit method returns a PipelineModel that holds a collection of Transformer objects that are results of the Estimator.fit method for every Estimator in the Pipeline (with a possibly-modified dataset ) or simply input Transformer objects. The input dataset DataFrame is transformed by the stages along the way.
fit then searches for the index of the last Estimator in the Pipeline so it only has to calculate Transformers up to that index; the remaining stages are simply returned as Transformers. For each Estimator, the fit method is called with the input dataset, and the resulting DataFrame is passed on to the next stage; the transform method is called for every Transformer calculated but the last one.
The method returns a PipelineModel with uid and transformers. The parent Estimator is
the Pipeline itself.
(video) Building, Debugging, and Tuning Spark Machine Learning Pipelines - Joseph
Bradley (Databricks)
(video) Spark MLlib: Making Practical Machine Learning Easy and Scalable
(video) Apache Spark MLlib 2.0 Preview: Data Science and Production by Joseph K.
Bradley (Databricks)
Pipeline
PipelineStage
PipelineStage has the following direct implementations (of which a few are abstract classes, too):
Estimators
Models
Pipeline
Predictor
Transformer
Transformers
A transformer is a ML Pipeline component that transforms a DataFrame into another
DataFrame (both called datasets).
Transformers prepare a dataset for a machine learning algorithm to work with. They are
also very helpful to transform DataFrames in general (even outside the machine learning
space).
StopWordsRemover
Binarizer
SQLTransformer
UnaryTransformer
Tokenizer
RegexTokenizer
NGram
HashingTF
OneHotEncoder
Model
StopWordsRemover
StopWordsRemover is a machine learning feature transformer that takes a string array column
and outputs a string array column with all defined stop words removed. The transformer
comes with a standard set of English stop words as default (that are the same as scikit-learn
uses, i.e. from the Glasgow Information Retrieval Group).
import org.apache.spark.ml.feature.StopWordsRemover
val stopWords = new StopWordsRemover
scala> println(stopWords.explainParams)
caseSensitive: whether to do case-sensitive comparison during filtering (default: false
)
inputCol: input column name (undefined)
outputCol: output column name (default: stopWords_9c2c0fdd8a68__output)
stopWords: stop words (default: [Ljava.lang.String;@5dabe7c8)
Note
null values from the input array are preserved unless adding null to stopWords explicitly.
import org.apache.spark.ml.feature.RegexTokenizer
val regexTok = new RegexTokenizer("regexTok")
.setInputCol("text")
.setPattern("\\W+")
import org.apache.spark.ml.feature.StopWordsRemover
val stopWords = new StopWordsRemover("stopWords")
.setInputCol(regexTok.getOutputCol)
scala> stopWords.transform(regexTok.transform(df)).show(false)
+-------------------------------+---+------------------------------------+-----------------+
|text                           |id |regexTok__output                    |stopWords__output|
+-------------------------------+---+------------------------------------+-----------------+
|please find it done (and empty)|0  |[please, find, it, done, and, empty]|[]               |
|About to be rich!              |1  |[about, to, be, rich]               |[rich]           |
|empty                          |2  |[empty]                             |[]               |
+-------------------------------+---+------------------------------------+-----------------+
Binarizer
Binarizer is a Transformer that splits the values in the input column into two groups -
"ones" for values larger than the threshold and "zeros" for the others.
It works with DataFrames with the input column of DoubleType or VectorUDT. The type of
the result output column matches the type of the input column, i.e. DoubleType or
VectorUDT .
import org.apache.spark.ml.feature.Binarizer
val bin = new Binarizer()
.setInputCol("rating")
.setOutputCol("label")
.setThreshold(3.5)
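// "doubles" is not defined in the original; given the output below it is presumably
// a dataset along these lines (an assumption):
val doubles = Seq((0, 1.0), (1, 1.0), (2, 5.0)).toDF("id", "rating")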
scala> println(bin.explainParams)
inputCol: input column name (current: rating)
outputCol: output column name (default: binarizer_dd9710e2a831__output, current: label
)
threshold: threshold used to binarize continuous features (default: 0.0, current: 3.5)
scala> bin.transform(doubles).show
+---+------+-----+
| id|rating|label|
+---+------+-----+
| 0| 1.0| 0.0|
| 1| 1.0| 0.0|
| 2| 5.0| 1.0|
+---+------+-----+
import org.apache.spark.mllib.linalg.Vectors
val denseVec = Vectors.dense(Array(4.0, 0.4, 3.7, 1.5))
val vectors = Seq((0, denseVec)).toDF("id", "rating")
scala> bin.transform(vectors).show
+---+-----------------+-----------------+
| id| rating| label|
+---+-----------------+-----------------+
| 0|[4.0,0.4,3.7,1.5]|[1.0,0.0,1.0,0.0]|
+---+-----------------+-----------------+
SQLTransformer
SQLTransformer is a Transformer that does transformations by executing SELECT … FROM __THIS__ with __THIS__ being the underlying temporary table registered for the input dataset.
Internally, __THIS__ is replaced with a random name for a temporary table (using registerTempTable).
It requires that the SELECT query uses __THIS__ (that corresponds to the temporary table) and simply executes the mandatory statement using the sql method.
You have to specify the mandatory statement parameter using setStatement method.
import org.apache.spark.ml.feature.SQLTransformer
val sql = new SQLTransformer()
scala> println(sql.explainParams)
statement: SQL statement (current: SELECT sentence FROM __THIS__ WHERE label = 0)
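A hypothetical end-to-end use (the input dataset and column names are assumptions) could look as follows:

import org.apache.spark.ml.feature.SQLTransformer

val df = Seq((0, "hello world", 0), (1, "just noise", 1)).toDF("id", "sentence", "label")

val sql = new SQLTransformer()
  .setStatement("SELECT sentence FROM __THIS__ WHERE label = 0")

sql.transform(df).show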
VectorAssembler
VectorAssembler is a feature transformer that assembles (merges) multiple columns into a single vector column.
It supports columns of the types NumericType , BooleanType , and VectorUDT . Doubles are passed on untouched. Other numeric types and booleans are cast to doubles.
import org.apache.spark.ml.feature.VectorAssembler
val vecAssembler = new VectorAssembler()
scala> print(vecAssembler.explainParams)
inputCols: input column names (undefined)
outputCol: output column name (default: vecAssembler_5ac31099dbee__output)
final case class Record(id: Int, n1: Int, n2: Double, flag: Boolean)
val ds = Seq(Record(0, 4, 2.0, true)).toDS
scala> ds.printSchema
root
|-- id: integer (nullable = false)
|-- n1: integer (nullable = false)
|-- n2: double (nullable = false)
|-- flag: boolean (nullable = false)
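// The assembling step is not shown in the original; given the output below it was presumably:
val features = vecAssembler
  .setInputCols(Array("n1", "n2", "flag"))
  .setOutputCol("features")
  .transform(ds)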
scala> features.printSchema
root
|-- id: integer (nullable = false)
|-- n1: integer (nullable = false)
|-- n2: double (nullable = false)
|-- flag: boolean (nullable = false)
|-- features: vector (nullable = true)
scala> features.show
+---+---+---+----+-------------+
| id| n1| n2|flag| features|
+---+---+---+----+-------------+
| 0| 4|2.0|true|[4.0,2.0,1.0]|
+---+---+---+----+-------------+
UnaryTransformers
The UnaryTransformer abstract class is a specialized Transformer that applies
transformation to one input column and writes results to another (by appending a new
column).
Each UnaryTransformer defines the input and output columns using the following "chain"
methods (they return the transformer on which they were executed and so are chainable):
setInputCol(value: String)
setOutputCol(value: String)
When transform is called, it first calls transformSchema (with DEBUG logging enabled) and
then adds the column as a result of calling a protected abstract createTransformFunc .
Internally, transform method uses Spark SQL’s udf to define a function (based on
createTransformFunc function described above) that will create the new output column (with
appropriate outputDataType ). The UDF is later applied to the input column of the input
DataFrame and the result becomes the output column (using DataFrame.withColumn
method).
Tokenizer that converts a string column to lowercase and then splits it by white spaces.
NGram that converts the input array of strings into an array of n-grams.
HashingTF that maps a sequence of terms to their term frequencies (cf. SPARK-13998
HashingTF should extend UnaryTransformer)
OneHotEncoder that maps a numeric input column of label indices onto a column of
binary vectors.
RegexTokenizer
RegexTokenizer is a UnaryTransformer that tokenizes a String into a collection of String .
import org.apache.spark.ml.feature.RegexTokenizer
val regexTok = new RegexTokenizer()
scala> tokenized.show(false)
+-----+------------------+-----------------------------+
|label|sentence |regexTok_810b87af9510__output|
+-----+------------------+-----------------------------+
|0 |hello world |[hello, world] |
|1 |two spaces inside|[two, spaces, inside] |
+-----+------------------+-----------------------------+
It supports minTokenLength parameter that is the minimum token length that you can change
using setMinTokenLength method. It simply filters out smaller tokens and defaults to 1 .
scala> rt.setInputCol("line").setMinTokenLength(6).transform(df).show
+-----+--------------------+-----------------------------+
|label| line|regexTok_8c74c5e8b83a__output|
+-----+--------------------+-----------------------------+
| 1| hello world| []|
| 2|yet another sentence| [another, sentence]|
+-----+--------------------+-----------------------------+
It has gaps parameter that indicates whether regex splits on gaps ( true ) or matches
tokens ( false ). You can set it using setGaps . It defaults to true .
When set to true (i.e. splits on gaps) it uses Regex.split while Regex.findAllIn for false .
scala> rt.setInputCol("line").setGaps(false).transform(df).show
+-----+--------------------+-----------------------------+
|label| line|regexTok_8c74c5e8b83a__output|
+-----+--------------------+-----------------------------+
| 1| hello world| []|
| 2|yet another sentence| [another, sentence]|
+-----+--------------------+-----------------------------+
scala> rt.setInputCol("line").setGaps(false).setPattern("\\W").transform(df).show(false
)
+-----+--------------------+-----------------------------+
|label|line |regexTok_8c74c5e8b83a__output|
+-----+--------------------+-----------------------------+
|1 |hello world |[] |
|2 |yet another sentence|[another, sentence] |
+-----+--------------------+-----------------------------+
It has pattern parameter that is the regex for tokenizing. It uses Scala’s .r method to
convert the string to regex. Use setPattern to set it. It defaults to \\s+ .
It has toLowercase parameter that indicates whether to convert all characters to lowercase
before tokenizing. Use setToLowercase to change it. It defaults to true .
NGram
In this example you use org.apache.spark.ml.feature.NGram that converts the input
collection of strings into a collection of n-grams (of n words).
import org.apache.spark.ml.feature.NGram
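// The construction and transform are not shown in the original; given the output below,
// presumably something like this (the uid "bigrams" yields the bigrams__output column):
val bigrams = new NGram("bigrams").setN(2).setInputCol("tokens")
val df = Seq((0, Seq("hello", "world"))).toDF("id", "tokens")
bigrams.transform(df).show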
+---+--------------+---------------+
| id| tokens|bigrams__output|
+---+--------------+---------------+
| 0|[hello, world]| [hello world]|
+---+--------------+---------------+
HashingTF
Another example of a transformer is org.apache.spark.ml.feature.HashingTF that works on a
Column of ArrayType .
It transforms the rows for the input column into a sparse term frequency vector.
import org.apache.spark.ml.feature.HashingTF
val hashingTF = new HashingTF()
.setInputCol("words")
.setOutputCol("features")
.setNumFeatures(5000)
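// "regexedDF" is not shown in the original; given the output below it is presumably
// the result of a RegexTokenizer producing the "words" column (an assumption):
import org.apache.spark.ml.feature.RegexTokenizer
val df = Seq((0, "hello world"), (1, "two  spaces inside")).toDF("id", "text")
val regexedDF = new RegexTokenizer()
  .setInputCol("text")
  .setOutputCol("words")
  .transform(df)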
// Use HashingTF
val hashedDF = hashingTF.transform(regexedDF)
scala> hashedDF.show(false)
+---+------------------+---------------------+-----------------------------------+
|id |text |words |features |
+---+------------------+---------------------+-----------------------------------+
|0  |hello world       |[hello, world]       |(5000,[2322,3802],[1.0,1.0])       |
|1 |two spaces inside|[two, spaces, inside]|(5000,[276,940,2533],[1.0,1.0,1.0])|
+---+------------------+---------------------+-----------------------------------+
The name of the output column is optional, and if not specified, it becomes the identifier of a
HashingTF object with the __output suffix.
scala> hashingTF.uid
res7: String = hashingTF_fe3554836819
scala> hashingTF.transform(regexDF).show(false)
+---+------------------+---------------------+-------------------------------------------+
|id |text              |words                |hashingTF_fe3554836819__output             |
+---+------------------+---------------------+-------------------------------------------+
|0  |hello world       |[hello, world]       |(262144,[71890,72594],[1.0,1.0])           |
|1  |two spaces inside |[two, spaces, inside]|(262144,[53244,77869,115276],[1.0,1.0,1.0])|
+---+------------------+---------------------+-------------------------------------------+
OneHotEncoder
OneHotEncoder is a transformer that maps a numeric input column of label indices onto a column of binary vectors.
// dataset to transform
val df = Seq(
(0, "a"), (1, "b"),
(2, "c"), (3, "a"),
(4, "a"), (5, "c"))
.toDF("label", "category")
import org.apache.spark.ml.feature.StringIndexer
val indexer = new StringIndexer().setInputCol("category").setOutputCol("cat_index").fi
t(df)
val indexed = indexer.transform(df)
import org.apache.spark.sql.types.NumericType
scala> indexed.schema("cat_index").dataType.isInstanceOf[NumericType]
res0: Boolean = true
import org.apache.spark.ml.feature.OneHotEncoder
val oneHot = new OneHotEncoder()
.setInputCol("cat_index")
.setOutputCol("cat_vec")
scala> oneHotted.show(false)
+-----+--------+---------+-------------+
|label|category|cat_index|cat_vec |
+-----+--------+---------+-------------+
|0 |a |0.0 |(2,[0],[1.0])|
|1 |b |2.0 |(2,[],[]) |
|2 |c |1.0 |(2,[1],[1.0])|
|3 |a |0.0 |(2,[0],[1.0])|
|4 |a |0.0 |(2,[0],[1.0])|
|5 |c |1.0 |(2,[1],[1.0])|
+-----+--------+---------+-------------+
scala> oneHotted.printSchema
root
|-- label: integer (nullable = false)
|-- category: string (nullable = true)
|-- cat_index: double (nullable = true)
|-- cat_vec: vector (nullable = true)
scala> oneHotted.schema("cat_vec").dataType.isInstanceOf[VectorUDT]
res1: Boolean = true
Custom UnaryTransformer
The following class is a custom UnaryTransformer that transforms words using upper letters.
package pl.japila.spark
import org.apache.spark.ml._
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.types._
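// The class body was lost in the PDF conversion; the following is a reconstructed sketch
// (an assumption) that matches the output below (uid prefix "upper", String => String).
class UpperTransformer(override val uid: String)
    extends UnaryTransformer[String, String, UpperTransformer] {

  def this() = this(Identifiable.randomUID("upper"))

  override protected def validateInputType(inputType: DataType): Unit =
    require(inputType == StringType, s"Input type must be StringType but got $inputType")

  // the actual transformation: uppercase the input string
  protected def createTransformFunc: String => String = _.toUpperCase

  protected def outputDataType: DataType = StringType

  override def copy(extra: org.apache.spark.ml.param.ParamMap): UpperTransformer =
    defaultCopy(extra)
}

val upper = new UpperTransformer
val df = Seq((0, "hello"), (1, "world")).toDF("id", "text")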
scala> upper.setInputCol("text").transform(df).show
+---+-----+--------------------------+
| id| text|upper_0b559125fd61__output|
+---+-----+--------------------------+
| 0|hello| HELLO|
| 1|world| WORLD|
+---+-----+--------------------------+
Transformer
Transformer is the contract in Spark MLlib for transformers that transform one dataset into
another.
Caution FIXME
Transformer Contract
package org.apache.spark.ml
Tokenizer
Tokenizer is a unary transformer that converts a column of String values to lowercase and then splits it by white spaces.
import org.apache.spark.ml.feature.Tokenizer
val tok = new Tokenizer()
// dataset to transform
val df = Seq(
(1, "Hello world!"),
(2, "Here is yet another sentence.")).toDF("id", "sentence")
Estimators
Note
That was so machine learning to explain an estimator this way, wasn’t it? It is just that the more I spend time with the Pipeline API, the more often I use the terms and phrases from this space. Sorry.
Technically, an Estimator produces a Model (i.e. a Transformer) for a given DataFrame and
parameters (as ParamMap ). It fits a model to the input DataFrame and ParamMap to produce
a Transformer (a Model ) that can calculate predictions for any DataFrame -based input
datasets.
It is basically a function that maps a DataFrame onto a Model through fit method, i.e. it
takes a DataFrame and produces a Transformer as a Model .
fit(dataset: DataFrame): M
Estimator
Estimator is the contract in Spark MLlib for estimators that fit models to a dataset.
Estimator accepts parameters that you can set through dedicated setter methods upon
creating an Estimator . You could also fit a model with extra parameters.
import org.apache.spark.ml.classification.LogisticRegression
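As a sketch (the training DataFrame and parameter values are assumptions; org.apache.spark.ml.linalg vectors are used, as spark.ml expects in Spark 2.x), extra parameters can be passed at fit time as a ParamMap:

import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.linalg.Vectors

// assumed training data with "label" and "features" columns
val training = Seq(
  (1.0, Vectors.dense(0.0, 1.1)),
  (0.0, Vectors.dense(2.0, 1.0))).toDF("label", "features")

val lr = new LogisticRegression().setMaxIter(10)

// parameters given at fit time override the ones set on the estimator
val extra = ParamMap(lr.regParam -> 0.1, lr.elasticNetParam -> 0.5)
val model = lr.fit(training, extra)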
Estimator Contract
package org.apache.spark.ml
fit copies the extra paramMap and fits a model (of type M ).
Note
fit is used mainly for model tuning to find the best model (using CrossValidator and TrainValidationSplit).
StringIndexer
org.apache.spark.ml.feature.StringIndexer is an Estimator that produces a
StringIndexerModel .
import org.apache.spark.ml.feature.StringIndexer
val strIdx = new StringIndexer()
.setInputCol("label")
.setOutputCol("index")
scala> println(strIdx.explainParams)
handleInvalid: how to handle invalid entries. Options are skip (which will filter out
rows with bad values), or error (which will throw an error). More options may be added
later (default: error)
inputCol: input column name (current: label)
outputCol: output column name (default: strIdx_ded89298e014__output, current: index)
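// The dataset and the fit/transform steps are not shown in the original; presumably
// something along these lines (the exact index values below depend on label frequencies):
val df = ('a' to 'j').zipWithIndex.map { case (c, id) => (id, c.toString) }.toDF("id", "label")
val indexed = strIdx.fit(df).transform(df)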
scala> indexed.show
+---+-----+-----+
| id|label|index|
+---+-----+-----+
| 0| a| 3.0|
| 1| b| 5.0|
| 2| c| 7.0|
| 3| d| 9.0|
| 4| e| 0.0|
| 5| f| 2.0|
| 6| g| 6.0|
| 7| h| 8.0|
| 8| i| 4.0|
| 9| j| 1.0|
+---+-----+-----+
KMeans
KMeans class is an implementation of the K-means clustering algorithm in machine learning
import org.apache.spark.ml.clustering._
val kmeans = new KMeans()
scala> println(kmeans.explainParams)
featuresCol: features column name (default: features)
initMode: initialization algorithm (default: k-means||)
initSteps: number of steps for k-means|| (default: 5)
k: number of clusters to create (default: 2)
maxIter: maximum number of iterations (>= 0) (default: 20)
predictionCol: prediction column name (default: prediction)
seed: random seed (default: -1689246527)
tol: the convergence tolerance for iterative algorithms (default: 1.0E-4)
type IntegerType .
Internally, the fit method "unwraps" the feature vector in the featuresCol column of the input DataFrame and creates an RDD[Vector] . It then hands the call over to the MLlib variant of KMeans.
Each item (row) in a data set is described by a numeric vector of attributes called features .
A single feature (a dimension of the vector) represents a word (token) with a value that is a
metric that defines the importance of that word or term in the document.
Refer to Logging.
KMeans Example
You can represent a text corpus (document collection) using the vector space model. In this
representation, the vectors have dimension that is the number of different words in the
corpus. It is quite natural to have vectors with a lot of zero values as not all words will be in a
document. We will use an optimized memory representation to avoid zero values using
sparse vectors.
This example shows how to use k-means to classify emails as a spam or not.
// NOTE Don't copy and paste the final case class with the other lines
// It won't work with paste mode in spark-shell
final case class Email(id: Int, text: String)
.setInputCol("tokens")
.setOutputCol("features")
.setNumFeatures(20)
import org.apache.spark.ml.clustering.KMeans
val kmeans = new KMeans
scala> kmModel.clusterCenters.map(_.toSparse)
res36: Array[org.apache.spark.mllib.linalg.SparseVector] = Array((20,[13],[3.0]), (20,[
0,2,3,6,7,8,10,11,17,19],[1.5,0.5,1.0,0.5,0.5,0.5,1.5,1.0,1.0,1.0]))
scala> .show(false)
+---------+------------+---------------------+----------+
|text |tokens |features |prediction|
+---------+------------+---------------------+----------+
|hello mom|[hello, mom]|(20,[2,19],[1.0,1.0])|1 |
+---------+------------+---------------------+----------+
TrainValidationSplit
TrainValidationSplit is…FIXME
Predictor
Predictor is an Estimator for a PredictionModel with its own abstract train method.
train(dataset: DataFrame): M
The train method is supposed to ease dealing with schema validation and copying
parameters to a trained PredictionModel model. It also sets the parent of the model to itself.
It implements the abstract fit(dataset: DataFrame) of the Estimator abstract class that
validates and transforms the schema of a dataset (using a custom transformSchema of
PipelineStage), and then calls the abstract train method.
RandomForestRegressor
RandomForestRegressor is a Predictor for Random Forest machine learning algorithm that
trains a RandomForestRegressionModel .
import org.apache.spark.mllib.linalg.Vectors
val features = Vectors.sparse(10, Seq((2, 0.2), (4, 0.4)))
scala> data.show(false)
+-----+--------------------------+
|label|features |
+-----+--------------------------+
|0.0 |(10,[2,4,6],[0.2,0.4,0.6])|
|1.0 |(10,[2,4,6],[0.2,0.4,0.6])|
|2.0 |(10,[2,4,6],[0.2,0.4,0.6])|
|3.0 |(10,[2,4,6],[0.2,0.4,0.6])|
|4.0 |(10,[2,4,6],[0.2,0.4,0.6])|
+-----+--------------------------+
scala> model.trees.foreach(println)
DecisionTreeRegressionModel (uid=dtr_247e77e2f8e0) of depth 1 with 3 nodes
DecisionTreeRegressionModel (uid=dtr_61f8eacb2b61) of depth 2 with 7 nodes
DecisionTreeRegressionModel (uid=dtr_63fc5bde051c) of depth 2 with 5 nodes
DecisionTreeRegressionModel (uid=dtr_64d4e42de85f) of depth 2 with 5 nodes
DecisionTreeRegressionModel (uid=dtr_693626422894) of depth 3 with 9 nodes
DecisionTreeRegressionModel (uid=dtr_927f8a0bc35e) of depth 2 with 5 nodes
DecisionTreeRegressionModel (uid=dtr_82da39f6e4e1) of depth 3 with 7 nodes
DecisionTreeRegressionModel (uid=dtr_cb94c2e75bd1) of depth 0 with 1 nodes
DecisionTreeRegressionModel (uid=dtr_29e3362adfb2) of depth 1 with 3 nodes
DecisionTreeRegressionModel (uid=dtr_d6d896abcc75) of depth 3 with 7 nodes
DecisionTreeRegressionModel (uid=dtr_aacb22a9143d) of depth 2 with 5 nodes
DecisionTreeRegressionModel (uid=dtr_18d07dadb5b9) of depth 2 with 7 nodes
DecisionTreeRegressionModel (uid=dtr_f0615c28637c) of depth 2 with 5 nodes
DecisionTreeRegressionModel (uid=dtr_4619362d02fc) of depth 2 with 5 nodes
DecisionTreeRegressionModel (uid=dtr_d39502f828f4) of depth 2 with 5 nodes
DecisionTreeRegressionModel (uid=dtr_896f3a4272ad) of depth 3 with 9 nodes
DecisionTreeRegressionModel (uid=dtr_891323c29838) of depth 3 with 7 nodes
DecisionTreeRegressionModel (uid=dtr_d658fe871e99) of depth 2 with 5 nodes
DecisionTreeRegressionModel (uid=dtr_d91227b13d41) of depth 2 with 5 nodes
DecisionTreeRegressionModel (uid=dtr_4a7976921f4b) of depth 2 with 5 nodes
scala> model.treeWeights
res12: Array[Double] = Array(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0
, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0)
scala> model.featureImportances
res13: org.apache.spark.mllib.linalg.Vector = (1,[0],[1.0])
Regressor
Regressor is…FIXME
LinearRegression
LinearRegression is a Regressor that represents the linear regression algorithm in Machine
Learning.
import org.apache.spark.ml.regression.LinearRegression
val lr = new LinearRegression
scala> println(lr.explainParams)
elasticNetParam: the ElasticNet mixing parameter, in range [0, 1]. For alpha = 0, the
penalty is an L2 penalty. For alpha = 1, it is an L1 penalty (default: 0.0)
featuresCol: features column name (default: features)
fitIntercept: whether to fit an intercept term (default: true)
labelCol: label column name (default: label)
maxIter: maximum number of iterations (>= 0) (default: 100)
predictionCol: prediction column name (default: prediction)
regParam: regularization parameter (>= 0) (default: 0.0)
solver: the solver algorithm for optimization. If this is not set or empty, default va
lue is 'auto' (default: auto)
standardization: whether to standardize the training features before fitting the model
(default: true)
tol: the convergence tolerance for iterative algorithms (default: 1.0E-6)
weightCol: weight column name. If this is not set or empty, we treat all instance weig
hts as 1.0 (default: )
LinearRegression Example
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
val data = (0.0 to 9.0 by 1) // create a collection of Doubles
.map(n => (n, n)) // make it pairs
.map { case (label, features) =>
LabeledPoint(label, Vectors.dense(features)) } // create labeled points of dense v
ectors
.toDF // make it a DataFrame
scala> data.show
+-----+--------+
|label|features|
+-----+--------+
| 0.0| [0.0]|
| 1.0| [1.0]|
| 2.0| [2.0]|
| 3.0| [3.0]|
| 4.0| [4.0]|
| 5.0| [5.0]|
| 6.0| [6.0]|
| 7.0| [7.0]|
| 8.0| [8.0]|
| 9.0| [9.0]|
+-----+--------+
import org.apache.spark.ml.regression.LinearRegression
val lr = new LinearRegression
scala> model.intercept
res1: Double = 0.0
scala> model.coefficients
res2: org.apache.spark.mllib.linalg.Vector = [1.0]
// make predictions
scala> val predictions = model.transform(data)
predictions: org.apache.spark.sql.DataFrame = [label: double, features: vector ... 1 m
ore field]
scala> predictions.show
+-----+--------+----------+
|label|features|prediction|
+-----+--------+----------+
| 0.0| [0.0]| 0.0|
| 1.0| [1.0]| 1.0|
| 2.0| [2.0]| 2.0|
| 3.0| [3.0]| 3.0|
| 4.0| [4.0]| 4.0|
| 5.0| [5.0]| 5.0|
| 6.0| [6.0]| 6.0|
| 7.0| [7.0]| 7.0|
| 8.0| [8.0]| 8.0|
| 9.0| [9.0]| 9.0|
+-----+--------+----------+
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.mllib.linalg.DenseVector
// NOTE Follow along to learn spark.ml-way (not RDD-way)
predictions.rdd.map { r =>
  (r(0).asInstanceOf[Double], r(1).asInstanceOf[DenseVector](0).toDouble, r(2).asInstanceOf[Double]) }
  .toDF("label", "feature0", "prediction").show
+-----+--------+----------+
|label|feature0|prediction|
+-----+--------+----------+
| 0.0| 0.0| 0.0|
| 1.0| 1.0| 1.0|
| 2.0| 2.0| 2.0|
| 3.0| 3.0| 3.0|
| 4.0| 4.0| 4.0|
| 5.0| 5.0| 5.0|
| 6.0| 6.0| 6.0|
| 7.0| 7.0| 7.0|
| 8.0| 8.0| 8.0|
| 9.0| 9.0| 9.0|
+-----+--------+----------+
import org.apache.spark.sql.Row
import org.apache.spark.mllib.linalg.DenseVector
case class Prediction(label: Double, feature0: Double, prediction: Double)
object Prediction {
def apply(r: Row) = new Prediction(
label = r(0).asInstanceOf[Double],
feature0 = r(1).asInstanceOf[DenseVector](0).toDouble,
prediction = r(2).asInstanceOf[Double])
}
import org.apache.spark.sql.Row
import org.apache.spark.mllib.linalg.DenseVector
defined class Prediction
defined object Prediction
scala> predictions.rdd.map(Prediction.apply).toDF.show
+-----+--------+----------+
|label|feature0|prediction|
+-----+--------+----------+
| 0.0| 0.0| 0.0|
| 1.0| 1.0| 1.0|
train Method
train works on a dataset with the expected input columns and returns a LinearRegressionModel .
It first counts the number of elements in features column (usually features ). The column
has to be of mllib.linalg.Vector type (and can easily be prepared using HashingTF
transformer).
import org.apache.spark.ml.feature.RegexTokenizer
val regexTok = new RegexTokenizer()
val spamTokens = regexTok.setInputCol("email").transform(spam)
scala> spamTokens.show(false)
+---+--------------------------------+---------------------------------------+
|id |email |regexTok_646b6bcc4548__output |
+---+--------------------------------+---------------------------------------+
|0 |Hi Jacek. Wanna more SPAM? Best!|[hi, jacek., wanna, more, spam?, best!]|
|1 |This is SPAM. This is SPAM |[this, is, spam., this, is, spam] |
+---+--------------------------------+---------------------------------------+
import org.apache.spark.ml.feature.HashingTF
val hashTF = new HashingTF()
.setInputCol(regexTok.getOutputCol)
.setOutputCol("features")
.setNumFeatures(5000)
scala> spamLabeled.show
+---+--------------------+-----------------------------+--------------------+-----+
| id| email|regexTok_646b6bcc4548__output| features|label|
+---+--------------------+-----------------------------+--------------------+-----+
| 0|Hi Jacek. Wanna m...| [hi, jacek., wann...|(5000,[2525,2943,...| 1.0|
| 1|This is SPAM. Thi...| [this, is, spam.,...|(5000,[1713,3149,...| 1.0|
+---+--------------------+-----------------------------+--------------------+-----+
scala> training.show
+---+--------------------+-----------------------------+--------------------+-----+
| id| email|regexTok_646b6bcc4548__output| features|label|
+---+--------------------+-----------------------------+--------------------+-----+
| 2|Hi Jacek. I hope ...| [hi, jacek., i, h...|(5000,[72,105,942...| 0.0|
| 3|Welcome to Apache...| [welcome, to, apa...|(5000,[2894,3365,...| 0.0|
| 0|Hi Jacek. Wanna m...| [hi, jacek., wann...|(5000,[2525,2943,...| 1.0|
| 1|This is SPAM. Thi...| [this, is, spam.,...|(5000,[1713,3149,...| 1.0|
+---+--------------------+-----------------------------+--------------------+-----+
import org.apache.spark.ml.regression.LinearRegression
val lr = new LinearRegression
scala> lrModel.transform(emailHashed).select("prediction").show
+-----------------+
| prediction|
+-----------------+
|0.563603440350882|
+-----------------+
Classifier
Classifier is a Predictor that…FIXME
extractLabeledPoints Method
extractLabeledPoints …FIXME
getNumClasses Method
getNumClasses …FIXME
RandomForestClassifier
RandomForestClassifier is a probabilistic Classifier for…FIXME
DecisionTreeClassifier
DecisionTreeClassifier is a probabilistic Classifier for…FIXME
ML Pipeline Models
Model abstract class is a Transformer with the optional Estimator that has produced it (as its parent).
Note
An Estimator is optional and is available only after fit (of an Estimator) has been executed, the result of which a model is.
There are two direct implementations of the Model class that are not directly related to a
concrete ML algorithm:
PipelineModel
PredictionModel
PipelineModel
Once fit, you can use the result model as any other models to transform datasets (as
DataFrame ).
// Transformer #1
import org.apache.spark.ml.feature.Tokenizer
val tok = new Tokenizer().setInputCol("text")
// Transformer #2
import org.apache.spark.ml.feature.HashingTF
val hashingTF = new HashingTF().setInputCol(tok.getOutputCol).setOutputCol("features")
PredictionModel
PredictionModel is an abstract class to represent a model for prediction algorithms like
regression and classification (that have their own specialized models - details coming up
below).
import org.apache.spark.ml.PredictionModel
The contract of PredictionModel class requires that every custom implementation defines
predict method (with FeaturesType type being the type of features ).
RegressionModel
ClassificationModel
RandomForestRegressionModel
Internally, transform first ensures that the type of the features column matches the type
of the model and adds the prediction column of type Double to the schema of the result
DataFrame .
It then creates the result DataFrame and adds the prediction column with a predictUDF
function applied to the values of the features column.
Caution
FIXME A diagram to show the transformation from a dataframe (on the left) and another (on the right) with an arrow to represent the transformation method.
Refer to Logging.
ClassificationModel
ClassificationModel is a PredictionModel that transforms a DataFrame with mandatory features , label , and rawPrediction (of type Vector) columns to a DataFrame with the prediction column added.
ClassificationModel comes with its own transform (as Transformer) and predict (as PredictionModel).
The known ClassificationModels (concrete models) are:
DecisionTreeClassificationModel ( final )
LogisticRegressionModel
NaiveBayesModel
RandomForestClassificationModel ( final )
RegressionModel
RegressionModel is a PredictionModel that transforms a DataFrame with mandatory label , features , and prediction columns.
It comes with no own methods or values and so is more a marker abstract class (to combine
different features of regression models under one type).
LinearRegressionModel
LinearRegressionModel represents a model produced by a LinearRegression estimator. It uses the following parameters and columns:
label (required)
features (required)
prediction
regParam
elasticNetParam
maxIter (Int)
tol (Double)
fitIntercept (Boolean)
standardization (Boolean)
weightCol (String)
solver (String)
With DEBUG logging enabled (see above) you can see the following messages in the logs
when transform is called and transforms the schema.
Note: The coefficients Vector and intercept Double are an integral part of LinearRegressionModel as the required input parameters of the constructor.
LinearRegressionModel Example
import org.apache.spark.ml.regression.LinearRegression
val lr = new LinearRegression
// Importing LinearRegressionModel and being explicit about the type of model value
// is for learning purposes only
import org.apache.spark.ml.regression.LinearRegressionModel
val model: LinearRegressionModel = lr.fit(ds)
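The ds dataset is not shown in this excerpt. A hypothetical definition (to be evaluated before the fit call above) and a quick look at the fitted model could be:
import org.apache.spark.ml.linalg.Vectors
import spark.implicits._
// Hypothetical training data with the required label and features columns
val ds = Seq(
  (1.0, Vectors.dense(1.0, 2.0)),
  (2.0, Vectors.dense(2.0, 3.0)),
  (3.0, Vectors.dense(3.0, 4.0))).toDF("label", "features")
// The coefficients and intercept mentioned in the note above
println(model.coefficients)
println(model.intercept)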
RandomForestRegressionModel
RandomForestRegressionModel is a PredictionModel with features column of type Vector.
KMeansModel
KMeansModel is a Model of KMeans algorithm.
// See spark-mllib-estimators.adoc#KMeans
import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.sql.DataFrame
val kmeans: KMeans = ???
val trainingDF: DataFrame = ???
val kmModel = kmeans.fit(trainingDF)
scala> kmModel.transform(inputDF).show(false)
+-----+---------+----------+
|label|features |prediction|
+-----+---------+----------+
|0.0 |[0.2,0.4]|0 |
+-----+---------+----------+
Model
Model is the contract for a fitted model, i.e. a Transformer that was produced by an
Estimator.
Model Contract
package org.apache.spark.ml
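// The full code snippet is not preserved in this extract. A simplified sketch of
// the Model contract (member names follow the Spark 2.x sources):
abstract class Model[M <: Model[M]] extends Transformer {
  // The Estimator that produced this model (optional until setParent is called)
  @transient var parent: Estimator[M] = _
  def setParent(parent: Estimator[M]): M
  def hasParent: Boolean
  override def copy(extra: ParamMap): M
}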
Evaluator — ML Pipeline Component for Model Scoring
ML Pipeline evaluators are transformers that take DataFrames and compute metrics
indicating how good a model is.
Evaluator is used to evaluate models and is usually (if not always) used for best model selection.
Evaluator uses the isLargerBetter method to indicate whether the Double metric should be maximized (the default) or minimized.
Table 1. Evaluators
BinaryClassificationEvaluator: Evaluator of binary classification models
ClusteringEvaluator: Evaluator of clustering models
MulticlassClassificationEvaluator: Evaluator of multiclass classification models
RegressionEvaluator: Evaluator of regression models
Evaluator Contract
package org.apache.spark.ml.evaluation
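// The full contract snippet is not preserved in this extract. A simplified sketch
// (following the Spark 2.x sources):
abstract class Evaluator extends Params {
  // Computes the metric over a dataset with predictions
  def evaluate(dataset: Dataset[_]): Double
  // Whether a larger metric value means a better model (true by default)
  def isLargerBetter: Boolean = true
  override def copy(extra: ParamMap): Evaluator
}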
BinaryClassificationEvaluator — Evaluator of Binary Classification Models
BinaryClassificationEvaluator is an Evaluator of binary classification models.
BinaryClassificationEvaluator finds the best model by maximizing the model evaluation metric, i.e. the area under the specified curve (and so isLargerBetter is turned on for either metric).
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
val binEval = new BinaryClassificationEvaluator().
setMetricName("areaUnderROC").
setRawPredictionCol("rawPrediction").
setLabelCol("label")
scala> binEval.isLargerBetter
res0: Boolean = true
scala> println(binEval.explainParams)
labelCol: label column name (default: label)
metricName: metric name in evaluation (areaUnderROC|areaUnderPR) (default: areaUnderROC)
rawPredictionCol: raw prediction (a.k.a. confidence) column name (default: rawPrediction)
rawPredictionCol (default: rawPrediction ): Column name with raw predictions (a.k.a. confidence)
labelCol (default: label ): Name of the column with indexed labels (i.e. 0s or 1s)
evaluate …FIXME
ClusteringEvaluator — Evaluator of Clustering Models
ClusteringEvaluator is an Evaluator of clustering models (e.g. KMeans , BisectingKMeans or GaussianMixture ).
ClusteringEvaluator finds the best model by maximizing the model evaluation metric, i.e. the silhouette (and so isLargerBetter is turned on).
import org.apache.spark.ml.evaluation.ClusteringEvaluator
val cluEval = new ClusteringEvaluator().
setPredictionCol("prediction").
setFeaturesCol("features").
setMetricName("silhouette")
scala> cluEval.isLargerBetter
res0: Boolean = true
scala> println(cluEval.explainParams)
featuresCol: features column name (default: features, current: features)
metricName: metric name in evaluation (silhouette) (default: silhouette, current: silhouette)
predictionCol: prediction column name (default: prediction, current: prediction)
featuresCol (default: features ): Name of the column with features (of type VectorUDT )
predictionCol (default: prediction ): Name of the column with prediction (of type NumericType )
evaluate …FIXME
MulticlassClassificationEvaluator — Evaluator of Multiclass Classification Models
MulticlassClassificationEvaluator is an Evaluator that takes datasets with the following two columns: prediction and label .
RegressionEvaluator — Evaluator of Regression Models
RegressionEvaluator is an Evaluator of regression models (e.g. ALS,
GeneralizedLinearRegression).
import org.apache.spark.ml.evaluation.RegressionEvaluator
val regEval = new RegressionEvaluator().
setMetricName("r2").
setPredictionCol("prediction").
setLabelCol("label")
scala> regEval.isLargerBetter
res0: Boolean = true
scala> println(regEval.explainParams)
labelCol: label column name (default: label, current: label)
metricName: metric name in evaluation (mse|rmse|r2|mae) (default: rmse, current: r2)
predictionCol: prediction column name (default: prediction, current: prediction)
import org.apache.spark.ml.feature.HashingTF
val hashTF = new HashingTF()
.setInputCol(tok.getOutputCol) // it reads the output of tok
.setOutputCol("features")
import org.apache.spark.ml.Pipeline
val pipeline = new Pipeline().setStages(Array(tok, hashTF, lr))
// Let's do prediction
// Note that we're using the same dataset as for fitting the model
// Something you'd definitely not be doing in prod
val predictions = model.transform(dataset)
import org.apache.spark.ml.evaluation.RegressionEvaluator
val regEval = new RegressionEvaluator
scala> regEval.evaluate(predictions)
res0: Double = 0.0
evaluate …FIXME
CrossValidator — Model Tuning / Finding The Best Model
CrossValidator is an Estimator for model tuning, i.e. finding the best model for given parameters based on cross-validation metrics.
Note: CrossValidator takes any Estimator for model selection, including the Pipeline that is used to transform raw datasets and generate a Model.
Note: Use ParamGridBuilder for the parameter grid, i.e. a collection of ParamMaps for model tuning.
import org.apache.spark.ml.Pipeline
val pipeline: Pipeline = ...
import org.apache.spark.ml.param.ParamMap
import org.apache.spark.ml.tuning.ParamGridBuilder
val paramGrid: Array[ParamMap] = new ParamGridBuilder().
addGrid(...).
addGrid(...).
build
import org.apache.spark.ml.tuning.CrossValidator
val cv = new CrossValidator().
setEstimator(pipeline).
setEvaluator(...).
setEstimatorParamMaps(paramGrid).
setNumFolds(...).
setParallelism(...)
import org.apache.spark.ml.tuning.CrossValidatorModel
val bestModel: CrossValidatorModel = cv.fit(training)
CrossValidator is a MLWritable.
Refer to Logging.
Note fit is part of Estimator Contract to fit a model (i.e. produce a model).
fit creates an Instrumentation and requests it to print out the parameters (numFolds among them):
INFO ...FIXME
fit requests the Instrumentation to print out the tuning parameters to the logs:
INFO ...FIXME
fit kFolds the RDD of the dataset per numFolds and seed parameters.
fit computes metrics for every pair of training and validation RDDs.
fit requests the Estimator to fit the best model (for the dataset and the best set of
estimatorParamMap).
In the end, fit creates a CrossValidatorModel (for the ID, the best model and the average
metrics for every kFold) and copies parameters to it.
Tip You can monitor the storage for persisting the datasets in web UI’s Storage tab.
For every param map in the estimatorParamMaps parameter, fit fits a model using the Estimator.
Note: fit unpersists the training data (per pair of training and validation RDDs) when all models have been trained.
fit requests the models to transform their respective validation datasets (with the corresponding param maps) and the Evaluator to evaluate the predictions.
fit waits until all metrics are available and unpersists the validation dataset.
Unique ID
CrossValidatorModel
CrossValidatorModel is a Model that is created when CrossValidator is requested to find the best model.
Unique ID
Best Model
ParamGridBuilder
ParamGridBuilder is…FIXME
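A short usage sketch; the LogisticRegression parameters and values below are picked only for illustration:
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.tuning.ParamGridBuilder
val lr = new LogisticRegression
// build returns the cartesian product of the parameter values: 2 x 2 = 4 ParamMaps
val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.01, 0.1))
  .addGrid(lr.maxIter, Array(5, 10))
  .build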
CrossValidator with Pipeline Example
import org.apache.spark.ml.classification.RandomForestClassifier
val rfc = new RandomForestClassifier
import org.apache.spark.ml.Pipeline
val pipeline = new Pipeline()
.setStages(Array(tok, hashTF, rfc))
+--------------------------+-----+--------------------------+----------+
|text |label|features |prediction|
+--------------------------+-----+--------------------------+----------+
|[science] hello world |0.0 |(10,[0,8],[2.0,1.0]) |0.0 |
|long text |1.0 |(10,[4,9],[1.0,1.0]) |1.0 |
|[science] hello all people|0.0 |(10,[0,6,8],[1.0,1.0,2.0])|0.0 |
|[science] hello hello |0.0 |(10,[0,8],[1.0,2.0]) |0.0 |
+--------------------------+-----+--------------------------+----------+
+-------------+--------------------------------------+----------+
|text |rawPrediction |prediction|
+-------------+--------------------------------------+----------+
|Hello ScienCE|[12.666666666666668,7.333333333333333]|0.0 |
+-------------+--------------------------------------+----------+
import org.apache.spark.ml.tuning.ParamGridBuilder
val paramGrid = new ParamGridBuilder().build
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
val binEval = new BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.CrossValidator
val cv = new CrossValidator()
.setEstimator(pipeline) // <-- pipeline is the estimator
.setEvaluator(binEval) // has to match the estimator
.setEstimatorParamMaps(paramGrid)
Params and ParamMaps
import org.apache.spark.ml.recommendation.ALS
val als = new ALS().
setMaxIter(5).
setRegParam(0.01).
setUserCol("userId").
setItemCol("movieId").
setRatingCol("rating")
scala> :type als.params
Array[org.apache.spark.ml.param.Param[_]]
scala> println(als.explainParams)
alpha: alpha for implicit preference (default: 1.0)
checkpointInterval: set checkpoint interval (>= 1) or disable checkpoint (-1). E.g. 10 means that the cache will get checkpointed every 10 iterations (default: 10)
coldStartStrategy: strategy for dealing with unknown or new users/items at prediction time. This may be useful in cross-validation or production scenarios, for handling user/item ids the model has not seen in the training data. Supported values: nan,drop. (default: nan)
finalStorageLevel: StorageLevel for ALS model factors. (default: MEMORY_AND_DISK)
implicitPrefs: whether to use implicit preference (default: false)
intermediateStorageLevel: StorageLevel for intermediate datasets. Cannot be 'NONE'. (default: MEMORY_AND_DISK)
itemCol: column name for item ids. Ids must be within the integer value range. (default: item, current: movieId)
maxIter: maximum number of iterations (>= 0) (default: 10, current: 5)
nonnegative: whether to use nonnegative constraint for least squares (default: false)
numItemBlocks: number of item blocks (default: 10)
numUserBlocks: number of user blocks (default: 10)
predictionCol: prediction column name (default: prediction)
rank: rank of the factorization (default: 10)
ratingCol: column name for ratings (default: rating, current: rating)
regParam: regularization parameter (>= 0) (default: 0.1, current: 0.01)
seed: random seed (default: 1994790107)
userCol: column name for user ids. Ids must be within the integer value range. (default: user, current: userId)
import org.apache.spark.ml.tuning.CrossValidator
val cv = new CrossValidator
scala> println(cv.explainParams)
estimator: estimator for selection (undefined)
estimatorParamMaps: param maps for the estimator (undefined)
evaluator: evaluator used to select hyper-parameters that maximize the validated metric (undefined)
numFolds: number of folds for cross validation (>= 2) (default: 3)
seed: random seed (default: -1191137437)
Params comes with $ (dollar) method for Spark MLlib developers to access the user-defined (or default) value of a parameter.
Params Contract
package org.apache.spark.ml.param
trait Params {
def copy(extra: ParamMap): Params
}
explainParams(): String
explainParams returns the help text for all the parameters: the param name, the description and optionally the default and the user-defined values if available.
import org.apache.spark.ml.recommendation.ALS
val als = new ALS().
setMaxIter(5).
setRegParam(0.01).
setUserCol("userId").
setItemCol("movieId").
setRatingCol("rating")
scala> println(als.explainParams)
alpha: alpha for implicit preference (default: 1.0)
checkpointInterval: set checkpoint interval (>= 1) or disable checkpoint (-1). E.g. 10 means that the cache will get checkpointed every 10 iterations (default: 10)
coldStartStrategy: strategy for dealing with unknown or new users/items at prediction time. This may be useful in cross-validation or production scenarios, for handling user/item ids the model has not seen in the training data. Supported values: nan,drop. (default: nan)
finalStorageLevel: StorageLevel for ALS model factors. (default: MEMORY_AND_DISK)
implicitPrefs: whether to use implicit preference (default: false)
intermediateStorageLevel: StorageLevel for intermediate datasets. Cannot be 'NONE'. (default: MEMORY_AND_DISK)
itemCol: column name for item ids. Ids must be within the integer value range. (default: item, current: movieId)
maxIter: maximum number of iterations (>= 0) (default: 10, current: 5)
nonnegative: whether to use nonnegative constraint for least squares (default: false)
numItemBlocks: number of item blocks (default: 10)
numUserBlocks: number of user blocks (default: 10)
predictionCol: prediction column name (default: prediction)
rank: rank of the factorization (default: 10)
ratingCol: column name for ratings (default: rating, current: rating)
regParam: regularization parameter (>= 0) (default: 0.1, current: 0.01)
seed: random seed (default: 1994790107)
userCol: column name for user ids. Ids must be within the integer value range. (default: user, current: userId)
copyValues iterates over the params collection and sets the default value followed by what may have been set explicitly.
ValidatorParams
Table 1. ValidatorParams' Parameters
estimator (default: undefined): Estimator for best model selection
logTuningParams Method
logTuningParams …FIXME
loadImpl Method
loadImpl[M](
path: String,
sc: SparkContext,
expectedClassName: String): (Metadata, Estimator[M], Evaluator, Array[ParamMap])
loadImpl …FIXME
transformSchemaImpl Method
transformSchemaImpl …FIXME
HasParallelism
HasParallelism is a Scala trait for Spark MLlib components that allow for specifying the level of parallelism, i.e. the number of threads to use when fitting models in parallel.
getExecutionContext Method
getExecutionContext: ExecutionContext
getExecutionContext …FIXME
ML Persistence — Saving and Loading Models and Pipelines
MLWriter and MLReader allow you to save and load models regardless of the language — Scala, Java, Python or R — in which they were saved and later loaded.
MLWriter
MLWriter abstract class comes with save(path: String) method to save a ML component
to a given path .
It comes with another (chainable) method overwrite to overwrite the output path if it
already exists.
overwrite(): this.type
The component is saved into a JSON file (see MLWriter Example section below).
Tip: Enable INFO logging level for the MLWriter implementation logger to see what happens inside.
Add the following line to conf/log4j.properties :
log4j.logger.org.apache.spark.ml.Pipeline$.PipelineWriter=INFO
Refer to Logging.
Caution: FIXME The logging doesn’t work and overwriting does not print out INFO message to the logs :(
MLWriter Example
import org.apache.spark.ml._
val pipeline = new Pipeline().setStages(Array.empty[PipelineStage])
pipeline.write.overwrite.save("sample-pipeline")
The result of save for "unfitted" pipeline is a JSON file for metadata (as shown below).
$ cat sample-pipeline/metadata/part-00000 | jq
{
"class": "org.apache.spark.ml.Pipeline",
"timestamp": 1472747720477,
"sparkVersion": "2.1.0-SNAPSHOT",
"uid": "pipeline_181c90b15d65",
"paramMap": {
"stageUids": []
}
}
The result of save for pipeline model is a JSON file for metadata while Parquet for model
data, e.g. coefficients.
$ cat sample-model/metadata/part-00000 | jq
{
"class": "org.apache.spark.ml.PipelineModel",
"timestamp": 1472748168005,
"sparkVersion": "2.1.0-SNAPSHOT",
"uid": "pipeline_3ed598da1c4b",
"paramMap": {
"stageUids": [
"regexTok_bf73e7c36e22",
"hashingTF_ebece38da130",
"logreg_819864aa7120"
]
}
}
$ tree sample-model/stages/
sample-model/stages/
|-- 0_regexTok_bf73e7c36e22
| `-- metadata
| |-- _SUCCESS
| `-- part-00000
|-- 1_hashingTF_ebece38da130
| `-- metadata
| |-- _SUCCESS
| `-- part-00000
`-- 2_logreg_819864aa7120
|-- data
| |-- _SUCCESS
| `-- part-r-00000-56423674-0208-4768-9d83-2e356ac6a8d2.snappy.parquet
`-- metadata
|-- _SUCCESS
`-- part-00000
7 directories, 8 files
MLReader
MLReader abstract class comes with load(path: String) method to load an ML component from a given path .
import org.apache.spark.ml._
val pipeline = Pipeline.read.load("sample-pipeline")
// Load the fitted pipeline model saved earlier under sample-model
val pipelineModel = PipelineModel.read.load("sample-model")
scala> pipelineModel.stages
res1: Array[org.apache.spark.ml.Transformer] = Array(regexTok_bf73e7c36e22, hashingTF_
ebece38da130, logreg_819864aa7120)
MLWritable
MLWritable is…FIXME
MLReader
MLReader is the contract for…FIXME
MLReader Contract
package org.apache.spark.ml.util
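// The contract snippet is not preserved in this extract. A simplified sketch
// (following the Spark 2.x sources):
abstract class MLReader[T] {
  // Loads an ML component (e.g. a model or a pipeline) from the given path
  def load(path: String): T
}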
Example — Text Classification
Note: The example was inspired by the video Building, Debugging, and Tuning Spark Machine Learning Pipelines - Joseph Bradley (Databricks).
Note: The example uses a case class LabeledText to have the schema described nicely.
import spark.implicits._
scala> data.show
+-----+-------------+
|label| text|
+-----+-------------+
| 0| hello world|
| 1|witaj swiecie|
+-----+-------------+
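The creation of the data Dataset is not preserved in this excerpt; a hypothetical definition consistent with the output above (using the LabeledText case class mentioned in the note) could be:
// Hypothetical case class and dataset matching the schema shown above
case class LabeledText(label: Int, text: String)
val data = Seq(
  LabeledText(0, "hello world"),
  LabeledText(1, "witaj swiecie")).toDS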
It is then tokenized and transformed into another DataFrame with an additional column
called features that is a Vector of numerical values.
Note Paste the code below into Spark Shell using :paste mode.
import spark.implicits._
Now comes the tokenization part that maps the input text of each text document into tokens (a Seq[String] ) and then into a Vector of numerical values that can only then be understood by a machine learning algorithm (which operates on Vector instances).
scala> articles.show
+---+------------+--------------------+
| id| topic| text|
+---+------------+--------------------+
| 0| sci.math| Hello, Math!|
| 1|alt.religion| Hello, Religion!|
| 2| sci.physics| Hello, Physics!|
| 3| sci.math|Hello, Math Revised!|
| 4| sci.math| Better Math|
| 5|alt.religion| TGIF|
+---+------------+--------------------+
scala> trainDF.show
+---+------------+--------------------+-----+
| id| topic| text|label|
+---+------------+--------------------+-----+
| 1|alt.religion| Hello, Religion!| 0.0|
| 3| sci.math|Hello, Math Revised!| 1.0|
+---+------------+--------------------+-----+
scala> testDF.show
+---+------------+---------------+-----+
| id| topic| text|label|
+---+------------+---------------+-----+
| 0| sci.math| Hello, Math!| 1.0|
| 2| sci.physics|Hello, Physics!| 1.0|
| 4| sci.math| Better Math| 1.0|
| 5|alt.religion| TGIF| 0.0|
+---+------------+---------------+-----+
The train a model phase uses the logistic regression machine learning algorithm to build a
model and predict label for future input text documents (and hence classify them as
scientific or non-scientific).
import org.apache.spark.ml.feature.RegexTokenizer
val tokenizer = new RegexTokenizer()
.setInputCol("text")
.setOutputCol("words")
import org.apache.spark.ml.feature.HashingTF
val hashingTF = new HashingTF()
.setInputCol(tokenizer.getOutputCol) // it does not wire transformers -- it's just a column name
.setOutputCol("features")
.setNumFeatures(5000)
import org.apache.spark.ml.classification.LogisticRegression
val lr = new LogisticRegression().setMaxIter(20).setRegParam(0.01)
import org.apache.spark.ml.Pipeline
val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
It uses two columns, namely label and features vector to build a logistic regression
model to make predictions.
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
val evaluator = new BinaryClassificationEvaluator().setMetricName("areaUnderROC")
import org.apache.spark.ml.param.ParamMap
val evaluatorParams = ParamMap(evaluator.metricName -> "areaUnderROC")
import org.apache.spark.ml.tuning.ParamGridBuilder
val paramGrid = new ParamGridBuilder()
.addGrid(hashingTF.numFeatures, Array(100, 1000))
.addGrid(lr.regParam, Array(0.05, 0.2))
.addGrid(lr.maxIter, Array(5, 10, 15))
.build
paramGrid: Array[org.apache.spark.ml.param.ParamMap] =
Array({
logreg_cdb8970c1f11-maxIter: 5,
hashingTF_8d7033d05904-numFeatures: 100,
logreg_cdb8970c1f11-regParam: 0.05
}, {
logreg_cdb8970c1f11-maxIter: 5,
hashingTF_8d7033d05904-numFeatures: 1000,
logreg_cdb8970c1f11-regParam: 0.05
}, {
logreg_cdb8970c1f11-maxIter: 10,
hashingTF_8d7033d05904-numFeatures: 100,
logreg_cdb8970c1f11-regParam: 0.05
}, {
logreg_cdb8970c1f11-maxIter: 10,
hashingTF_8d7033d05904-numFeatures: 1000,
logreg_cdb8970c1f11-regParam: 0.05
}, {
logreg_cdb8970c1f11-maxIter: 15,
hashingTF_8d7033d05904-numFeatures: 100,
logreg_cdb8970c1f11-regParam: 0.05
}, {
logreg_cdb8970c1f11-maxIter: 15,
hashingTF_8d7033d05904-numFeatures: 1000,
logreg_cdb8970c1f11-...
import org.apache.spark.ml.tuning.CrossValidator
import org.apache.spark.ml.param._
val cv = new CrossValidator()
.setEstimator(pipeline)
.setEstimatorParamMaps(paramGrid)
.setEvaluator(evaluator)
.setNumFolds(10)
Let’s use the cross-validated model to calculate predictions and evaluate their precision.
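The fitting step that produces cvModel (used below to save the model) is not preserved in this excerpt; assuming trainDF and testDF are the splits shown earlier, it would be along these lines:
// Hypothetical fitting and evaluation steps producing the cvModel used below
val cvModel = cv.fit(trainDF)
val predictions = cvModel.transform(testDF)
println(evaluator.evaluate(predictions))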
Caution: FIXME Review https://github.com/apache/spark/blob/master/mllib/src/test/scala/org/apache/spark/ml/tuning
cvModel.write.overwrite.save("model")
Example — Linear Regression
The DataFrame used for Linear Regression has to have features column of
org.apache.spark.mllib.linalg.VectorUDT type.
Note You can change the name of the column using featuresCol parameter.
scala> println(lr.explainParams)
elasticNetParam: the ElasticNet mixing parameter, in range [0, 1]. For alpha = 0, the penalty is an L2 penalty. For alpha = 1, it is an L1 penalty (default: 0.0)
featuresCol: features column name (default: features)
fitIntercept: whether to fit an intercept term (default: true)
labelCol: label column name (default: label)
maxIter: maximum number of iterations (>= 0) (default: 100)
predictionCol: prediction column name (default: prediction)
regParam: regularization parameter (>= 0) (default: 0.0)
solver: the solver algorithm for optimization. If this is not set or empty, default value is 'auto' (default: auto)
standardization: whether to standardize the training features before fitting the model (default: true)
tol: the convergence tolerance for iterative algorithms (default: 1.0E-6)
weightCol: weight column name. If this is not set or empty, we treat all instance weights as 1.0 (default: )
import org.apache.spark.ml.Pipeline
val pipeline = new Pipeline("my_pipeline")
import org.apache.spark.ml.regression._
val lr = new LinearRegression
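The rest of the example is not preserved in this excerpt; a minimal continuation, with a toy dataset made up purely for illustration, could be:
import org.apache.spark.ml.linalg.Vectors
import spark.implicits._
// Toy dataset with the required features (Vector) and label columns
val training = Seq(
  (Vectors.dense(1.0, 0.0), 1.0),
  (Vectors.dense(2.0, 1.0), 2.5),
  (Vectors.dense(3.0, 2.0), 4.0)).toDF("features", "label")
val model = pipeline.setStages(Array(lr)).fit(training)
model.transform(training).select("features", "label", "prediction").show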
Logistic Regression
In statistics, logistic regression, or logit regression, or logit model is a regression
model where the dependent variable (DV) is categorical.
LogisticRegression
LogisticRegression is…FIXME
Latent Dirichlet Allocation (LDA)
Topic modeling is a type of model that can be very useful in identifying hidden thematic
structure in documents. Broadly speaking, it aims to find structure within an unstructured
collection of documents. Once the structure is "discovered", you may answer questions like:
Spark MLlib offers out-of-the-box support for Latent Dirichlet Allocation (LDA) which is the
first MLlib algorithm built upon GraphX.
Example
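The example code is not preserved in this excerpt. A small sketch using the spark.ml LDA API (the toy corpus of term-frequency vectors is made up for illustration):
import org.apache.spark.ml.clustering.LDA
import org.apache.spark.ml.linalg.Vectors
import spark.implicits._
// Tiny made-up corpus of term-frequency vectors
val corpus = Seq(
  (0L, Vectors.dense(1.0, 0.0, 3.0)),
  (1L, Vectors.dense(0.0, 2.0, 1.0))).toDF("id", "features")
val lda = new LDA().setK(2).setMaxIter(10)
val ldaModel = lda.fit(corpus)
ldaModel.describeTopics(3).show(false)
println(s"Log likelihood: ${ldaModel.logLikelihood(corpus)}")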
Vector
Vector sealed trait represents a numeric vector of values (of Double type) and their indices (of Int type). It belongs to the org.apache.spark.mllib.linalg package.
Note: It is not the Vector type in Scala or Java. Train your eyes to see two types of the same name. You’ve been warned.
There are exactly two available implementations of Vector sealed trait (that also belong to
org.apache.spark.mllib.linalg package):
DenseVector
SparseVector
import org.apache.spark.mllib.linalg.Vectors
// You can create dense vectors explicitly by giving values per index
val denseVec = Vectors.dense(Array(0.0, 0.4, 0.3, 1.5))
val almostAllZeros = Vectors.dense(Array(0.0, 0.4, 0.3, 1.5, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0))
// You can however create a sparse vector by the size and non-zero elements
val sparse = Vectors.sparse(10, Seq((1, 0.4), (2, 0.3), (3, 1.5)))
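// sv is not defined in this excerpt; a hypothetical definition consistent with
// the REPL output below would be:
val sv = Vectors.sparse(5, Array(0, 1, 2, 3, 4), Array(1.0, 1.0, 1.0, 1.0, 1.0))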
import org.apache.spark.mllib.linalg._
scala> sv.size
res0: Int = 5
scala> sv.toArray
res1: Array[Double] = Array(1.0, 1.0, 1.0, 1.0, 1.0)
scala> sv == sv.copy
res2: Boolean = true
scala> sv.toJson
res3: String = {"type":0,"size":5,"indices":[0,1,2,3,4],"values":[1.0,1.0,1.0,1.0,1.0]}
LabeledPoint
Caution FIXME
LabeledPoint is a convenient class for declaring a schema for DataFrames that are used as input data for machine learning algorithms (e.g. Linear Regression) in Spark MLlib.
Streaming MLlib
The following Machine Learning algorithms have their streaming variants in MLlib:
k-means
Linear Regression
Logistic Regression
Note The streaming algorithms belong to spark.mllib (the older RDD-based API).
Streaming k-means
org.apache.spark.mllib.clustering.StreamingKMeans
Sources
Streaming Machine Learning in Spark- Jeremy Freeman (HHMI Janelia Research
Center)
GeneralizedLinearRegression (GLM)
GeneralizedLinearRegression is a regression algorithm. It supports the following error
distribution families:
1. gaussian
2. binomial
3. poisson
4. gamma
It supports the following link functions:
1. identity
2. logit
3. log
4. inverse
5. probit
6. cloglog
7. sqrt
import org.apache.spark.ml.regression._
val glm = new GeneralizedLinearRegression()
import org.apache.spark.ml.linalg._
val features = Vectors.sparse(5, Seq((3,1.0)))
val trainDF = Seq((0, features, 1)).toDF("id", "features", "label")
val glmModel = glm.fit(trainDF)
glm.fit produces a GeneralizedLinearRegressionModel.
GeneralizedLinearRegressionModel
Regressor
Regressor is a custom Predictor.
Alternating Least Squares (ALS) Matrix Factorization
Tip: Read the original paper Scalable Collaborative Filtering with Jointly Derived Neighborhood Interpolation Weights by Robert M. Bell and Yehuda Koren.
Our method is very fast in practice, generating a prediction in about 0.2 milliseconds.
Importantly, it does not require training many parameters or a lengthy preprocessing,
making it very practical for large scale applications. Finally, we show how to apply these
methods to the perceivably much slower user-oriented approach. To this end, we
suggest a novel scheme for low dimensional embedding of the users. We evaluate
these methods on the Netflix dataset, where they deliver significantly better results than
the commercial Netflix Cinematch recommender system.
ALS Example
import spark.implicits._
import org.apache.spark.ml.recommendation.ALS
val als = new ALS().
setMaxIter(5).
setRegParam(0.01).
setUserCol("userId").
setItemCol("movieId").
setRatingCol("rating")
import org.apache.spark.ml.recommendation.ALS.Rating
// FIXME Use a much richer dataset, i.e. Spark's data/mllib/als/sample_movielens_ratings.txt
// FIXME Load it using spark.read
val ratings = Seq(
Rating(0, 2, 3),
Rating(0, 3, 1),
Rating(0, 5, 2),
Rating(1, 2, 2)).toDF("userId", "movieId", "rating")
val Array(training, testing) = ratings.randomSplit(Array(0.8, 0.2))
import org.apache.spark.ml.recommendation.ALSModel
val model = als.fit(training)
// drop NaNs
model.setColdStartStrategy("drop")
val predictions = model.transform(testing)
import org.apache.spark.ml.evaluation.RegressionEvaluator
val evaluator = new RegressionEvaluator().
setMetricName("rmse"). // root mean squared error
setLabelCol("rating").
setPredictionCol("prediction")
val rmse = evaluator.evaluate(predictions)
println(s"Root-mean-square error = $rmse")
System.exit(0)
ALS — Estimator for ALSModel
coldStartStrategy (default: nan ). Supported values:
nan - predicted value for unknown ids will be NaN
drop - rows in the input DataFrame containing unknown ids are dropped from the output DataFrame (with predictions)
predictionCol (default: prediction ): the main purpose of the estimator
ratingCol: of type FloatType
regParam: regularization parameter
computeFactors[ID](
srcFactorBlocks: RDD[(Int, FactorBlock)],
srcOutBlocks: RDD[(Int, OutBlock)],
dstInBlocks: RDD[(Int, InBlock[ID])],
rank: Int,
regParam: Double,
srcEncoder: LocalIndexEncoder,
implicitPrefs: Boolean = false,
alpha: Double = 1.0,
solver: LeastSquaresNESolver): RDD[(Int, FactorBlock)]
computeFactors …FIXME
Internally, fit validates the schema of the dataset (to make sure that the types of the
columns are correct and the prediction column is not available yet).
fit casts the rating column (as defined using ratingCol parameter) to FloatType .
fit selects user, item and rating columns (from the dataset ) and converts it to RDD of
Rating instances.
fit prints out the training parameters as INFO message to the logs:
INFO ...FIXME
fit trains a model, i.e. generates a pair of RDDs of user and item factors.
fit converts the RDDs with user and item factors to corresponding DataFrames with id and features columns.
partitionRatings[ID](
ratings: RDD[Rating[ID]],
srcPart: Partitioner,
dstPart: Partitioner): RDD[((Int, Int), RatingBlock[ID])]
partitionRatings …FIXME
makeBlocks[ID](
prefix: String,
ratingBlocks: RDD[((Int, Int), RatingBlock[ID])],
srcPart: Partitioner,
dstPart: Partitioner,
storageLevel: StorageLevel)(
implicit srcOrd: Ordering[ID]): (RDD[(Int, InBlock[ID])], RDD[(Int, OutBlock)])
makeBlocks …FIXME
train Method
train[ID](
ratings: RDD[Rating[ID]],
rank: Int = 10,
numUserBlocks: Int = 10,
numItemBlocks: Int = 10,
maxIter: Int = 10,
regParam: Double = 0.1,
implicitPrefs: Boolean = false,
alpha: Double = 1.0,
nonnegative: Boolean = false,
intermediateRDDStorageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK,
finalRDDStorageLevel: StorageLevel = StorageLevel.MEMORY_AND_DISK,
checkpointInterval: Int = 10,
seed: Long = 0L)(
implicit ord: Ordering[ID]): (RDD[(ID, Array[Float])], RDD[(ID, Array[Float])])
train partitions the ratings RDD (using two HashPartitioners with numUserBlocks and numItemBlocks partitions, respectively).
train creates a pair of user in- and out-block RDDs for blockRatings .
train creates a pair of item in- and out-block RDDs for the swappedBlockRatings RDD.
Caution FIXME train gets too "heavy", i.e. advanced. Gave up for now. Sorry.
requirement failed: ALS is not designed to run without persisting intermediate RDDs.
validateAndTransformSchema …FIXME
ALSModel — Model for Predictions
ALSModel is a MLWritable.
import org.apache.spark.sql._
class MyALS(spark: SparkSession) {
import spark.implicits._
val userFactors = Seq((0, Seq(0.3, 0.2))).toDF("id", "features")
val itemFactors = Seq((0, Seq(0.3, 0.2))).toDF("id", "features")
import org.apache.spark.ml.recommendation._
val alsModel = new ALSModel(uid = "uid", rank = 10, userFactors, itemFactors)
}
// END :pa -raw
import org.apache.spark.sql.types._
val mySchema = new StructType().
add($"user".float).
add($"item".float)
transform left-joins the dataset with the userFactors dataset (using the userCol column of the dataset and the id column of userFactors ).
Left join takes two datasets and gives all the rows from the left side (of the join)
combined with the corresponding row from the right side if available or null .
transform left-joins the dataset with the itemFactors dataset (using the itemCol column of the dataset and the id column of itemFactors ).
transform makes predictions using the features columns of userFactors and itemFactors (via the predict UDF).
transform takes (selects) all the columns from the dataset and predictionCol with the predictions.
Ultimately, transform drops rows containing null or NaN values for predictions if
coldStartStrategy is drop .
Note: The default value of coldStartStrategy is nan which does not drop missing values from the predictions column.
transformSchema Method
Unique ID
Rank
predict: UserDefinedFunction
predict is a user-defined function (UDF) that takes two collections of float numbers and computes their dot product (the predicted rating).
copy then copies extra parameters to the new ALSModel and sets the parent.
ALSModelReader
ALSModelReader is…FIXME
load Method
load …FIXME
Instrumentation
Instrumentation is…FIXME
logParams …FIXME
create …FIXME
MLUtils
MLUtils is…FIXME
kFold Method
kFold …FIXME
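kFold splits an RDD into numFolds pairs of (training, validation) RDDs, which is what CrossValidator builds on. A short hedged sketch (the numbers are made up):
import org.apache.spark.mllib.util.MLUtils
// Split a simple RDD into 3 (training, validation) pairs, seed 42
val rdd = sc.parallelize(1 to 100)
val folds = MLUtils.kFold(rdd, 3, 42)
folds.foreach { case (training, validation) =>
  println(s"training: ${training.count}, validation: ${validation.count}")
}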
Spark Shell — spark-shell shell script
Under the covers, Spark shell is a standalone Spark application written in Scala that offers an interactive environment with auto-completion (using the TAB key) where you can run ad-hoc queries and
get familiar with the features of Spark (that help you in developing your own standalone
Spark applications). It is a very convenient tool to explore the many things available in Spark
with immediate feedback. It is one of the many reasons why Spark is so helpful for tasks to
process datasets of any size.
There are variants of Spark shell for different languages: spark-shell for Scala, pyspark
for Python and sparkR for R.
Note This document (and the book in general) uses spark-shell for Scala only.
$ ./bin/spark-shell
scala>
scala> :imports
1) import spark.implicits._ (59 terms, 38 are implicit)
2) import spark.sql (1 terms)
Note: When you execute spark-shell you actually execute Spark submit as follows:
org.apache.spark.deploy.SparkSubmit --class org.apache.spark.repl.Main --name Spark shell spark-shell
$ ./bin/spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newL
evel).
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using
builtin-java classes where applicable
WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://10.47.71.138:4040
Spark context available as 'sc' (master = local[*], app id = local-1477858597347).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.1.0-SNAPSHOT
/_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_112)
Type in expressions to have them evaluated.
Type :help for more information.
scala>
Spark shell creates an instance of SparkSession under the name spark for you (so you
don’t have to know the details how to do it yourself on day 1).
scala> :type sc
org.apache.spark.SparkContext
To close Spark shell, you press Ctrl+D or type in :q (or any subset of :quit ).
scala> :q
Settings
Table 1. Spark Properties
Spark Property Default Value Description
Used in spark-shell to create REPL ClassLoader to load new classes defined in the Scala REPL as a user types code.
Spark Submit — spark-submit shell script
You can submit your Spark application to a Spark deployment environment for execution, kill
or request status of Spark applications.
You can find spark-submit script in bin directory of the Spark distribution.
$ ./bin/spark-submit
Usage: spark-submit [options] <app jar | python file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]
...
When executed, spark-submit script first checks whether SPARK_HOME environment variable
is set and sets it to the directory that contains bin/spark-submit shell script if not. It then
executes spark-class shell script to run SparkSubmit standalone application.
Caution: FIXME Add Cluster Manager and Deploy Mode to the table below (see options value)
Table 1. Command-Line Options, Spark Properties and Environment Variables (from SparkSubmitArguments' handle)
Command-Line Option: Spark Property / Environment Variable / Description
action: Defaults to SUBMIT
--archives
--conf
--deploy-mode: spark.submit.deployMode / DEPLOY_MODE / Deploy mode
--driver-class-path: spark.driver.extraClassPath / The driver's class path
--driver-java-options: spark.driver.extraJavaOptions / The driver's JVM options
--driver-library-path: spark.driver.extraLibraryPath / The driver's native library path
--driver-memory: spark.driver.memory / SPARK_DRIVER_MEMORY / The driver's memory
--driver-cores: spark.driver.cores
--exclude-packages: spark.jars.excludes
--executor-cores: spark.executor.cores / SPARK_EXECUTOR_CORES / The number of executor cores
--executor-memory: spark.executor.memory / SPARK_EXECUTOR_MEMORY / An executor's memory
--files: spark.files
ivyRepoPath: spark.jars.ivy
--jars: spark.jars
--keytab: spark.yarn.keytab
--kill: submissionToKill, action set to KILL
--class: mainClass
--name: spark.app.name / SPARK_YARN_APP_NAME (YARN only) / Uses mainClass or the name off primaryResource; the shells always set it
--num-executors: spark.executor.instances
--packages: spark.jars.packages
--principal: spark.yarn.principal
--properties-file
--proxy-user
--py-files
--queue
--repositories
--status: submissionToRequestStatus, action set to REQUEST_STATUS
--supervise
--total-executor-cores: spark.cores.max
--verbose
--version: SparkSubmit.printVersi…
--help: printUsageAndExit(0)
--usage-error: printUsageAndExit(1)
Tip: Set SPARK_PRINT_LAUNCH_COMMAND=1 to see the Spark command being launched, e.g.
$ SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/spark-shell
Spark Command: /Library/Ja...
Tip: Avoid using the scala.App trait for a Spark application’s main class in Scala as reported in SPARK-4170 Closure problems when running Scala app that "extends App".
Refer to Executing Main — runMain internal method in this document.
prepareSubmitEnvironment(args: SparkSubmitArguments): (Seq[String], Seq[String], Map[String, String], String)
prepareSubmitEnvironment returns a 4-element tuple of (childArgs, childClasspath, sysProps, childMainClass) .
Caution FIXME
Tip See the elements of the return tuple using --verbose command-line option.
--properties-file [FILE]
--properties-file command-line option sets the path to a file FILE from which Spark loads extra Spark properties.
--driver-cores NUM
--driver-cores command-line option sets the number of cores to NUM for the driver in the cluster deploy mode.
--jars JARS
--jars is a comma-separated list of local jars to include on the driver’s and executors'
classpaths.
Caution FIXME
--files FILES
Caution FIXME
--archives ARCHIVES
Caution FIXME
--queue QUEUE_NAME
With --queue you can choose the YARN resource queue to submit a Spark application to.
The default queue name is default .
Actions
runMain(
childArgs: Seq[String],
childClasspath: Seq[String],
sysProps: Map[String, String],
childMainClass: String,
verbose: Boolean): Unit
runMain is an internal method to build the execution environment and invoke the main method of the Spark application being submitted.
When verbose input flag is enabled (i.e. true ) runMain prints out all the input
parameters, i.e. childMainClass , childArgs , sysProps , and childClasspath (in that
order).
Main class:
[childMainClass]
Arguments:
[childArgs one per line]
System properties:
[sysProps one per line]
Classpath elements:
[childClasspath one per line]
Note Use spark-submit 's --verbose command-line option to enable verbose flag.
runMain builds the context classloader depending on the spark.driver.userClassPathFirst flag.
It adds the jars specified in childClasspath input parameter to the context classloader (that
is later responsible for loading the childMainClass main class).
It sets all the system properties specified in sysProps input parameter (using Java’s
System.setProperty method).
Note childMainClass is the main class spark-submit has been invoked with.
Tip: Avoid using the scala.App trait for a Spark application’s main class in Scala as reported in SPARK-4170 Closure problems when running Scala app that "extends App".
If you use scala.App for the main class, you should see the following warning message in
the logs:
Warning: Subclasses of scala.App may not work correctly. Use a main() method instead.
Finally, runMain executes the main method of the Spark application passing in the
childArgs arguments.
Any SparkUserAppException exceptions lead to System.exit while the others are simply re-
thrown.
addJarToClasspath is an internal method to add file or local jars (as localJar ) to the
loader classloader.
Internally, addJarToClasspath resolves the URI of localJar . If the URI is file or local
and the file denoted by localJar exists, localJar is added to loader . Otherwise, the
following warning is printed out to the logs:
For all other URIs, the following warning is printed out to the logs:
Caution: FIXME What is a URI fragment? How does this change re YARN distributed cache? See Utils#resolveURI .
Command-line Options
Execute spark-submit --help to know about the command-line options supported.
Options:
--master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
--deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or on one of the worker machines inside the cluster ("cluster") (Default: client).
--class CLASS_NAME          Your application's main class (for Java / Scala apps).
--name NAME                 A name of your application.
--jars JARS                 Comma-separated list of local jars to include on the driver and executor classpaths.
--packages                  Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version.
--exclude-packages          Comma-separated list of groupId:artifactId, to exclude while resolving the dependencies provided in --packages to avoid dependency conflicts.
--repositories              Comma-separated list of additional remote repositories to search for the maven coordinates given with --packages.
--py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps.
--files FILES               Comma-separated list of files to be placed in the working directory of each executor.
--driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
--driver-java-options       Extra Java options to pass to the driver.
--driver-library-path       Extra library path entries to pass to the driver.
--driver-class-path         Extra class path entries to pass to the driver. Note that jars added with --jars are automatically included in the classpath.
--executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).
YARN-only:
--driver-cores NUM          Number of cores used by the driver, only in cluster mode (Default: 1).
--queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
--num-executors NUM         Number of executors to launch (Default: 2).
--archives ARCHIVES         Comma separated list of archives to be extracted into the working directory of each executor.
--principal PRINCIPAL       Principal to be used to login to KDC, while running on secure HDFS.
--keytab KEYTAB             The full path to the file that contains the keytab for the principal specified above. This keytab will be copied to the node running the Application Master via the Secure Distributed Cache, for renewing the login tickets and the delegation tokens periodically.
--class
--conf or -c
--driver-java-options
--driver-library-path
--driver-memory
--executor-memory
--files
--jars
--master
--name
--packages
--exclude-packages
--proxy-user
--py-files
--repositories
--total-executor-cores
--help or -h
--usage-error
YARN-only options:
--archives
--executor-cores
--keytab
--num-executors
--principal
--driver-class-path command-line option sets the extra class path entries (e.g. jars and directories) for the driver. It is recorded as the spark.driver.extraClassPath Spark property (when SparkSubmitArguments.handle is called).
$ ./bin/spark-submit --version
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.1.0-SNAPSHOT
/_/
Branch master
Compiled by user jacek on 2016-09-30T07:08:39Z
Revision 1fad5596885aab8b32d2307c0edecbae50d5bd7a
Url https://github.com/apache/spark.git
Type --help for more information.
In verbose mode, the parsed arguments are printed out to the System error output.
FIXME
It also prints out propertiesFile and the properties from the file.
FIXME
Environment Variables
The following is the list of environment variables that are considered when command-line
options are not specified:
SPARK_EXECUTOR_CORES
DEPLOY_MODE
SPARK_YARN_APP_NAME
_SPARK_CMD_USAGE
./bin/spark-submit \
--packages my:awesome:package \
--repositories s3n://$aws_ak:$aws_sak@bucket/path/to/repo
When executed, spark-submit script simply passes the call to spark-class with
org.apache.spark.deploy.SparkSubmit class followed by command-line arguments.
It then relays the execution to action-specific internal methods (with the application
arguments):
Note: The action can only have one of the three available values: SUBMIT , KILL , or REQUEST_STATUS .
export JAVA_HOME=/your/directory/java
export HADOOP_HOME=/usr/lib/hadoop
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=1G
SparkSubmitArguments — spark-submit’s Command-Line Argument Parser
SparkSubmitArguments is a custom SparkSubmitArgumentsParser to handle the command-line
arguments of spark-submit script that the actions (i.e. submit, kill and status) use for their
execution (possibly with the explicit env environment).
loadEnvironmentArguments(): Unit
loadEnvironmentArguments calculates the Spark properties for the current execution of spark-
submit.
Note: Spark config properties start with the spark. prefix and can be set using the --conf [key=value] command-line option.
handle Method
handle parses the input opt argument and returns true or throws an IllegalArgumentException when it finds an unknown option.
handle sets the internal properties in the table Command-Line Options, Spark Properties and Environment Variables.
mergeDefaultSparkProperties(): Unit
mergeDefaultSparkProperties merges Spark properties from the default Spark properties file, i.e. spark-defaults.conf , with those specified through the --conf command-line option.
SparkSubmitOptionParser — spark-submit’s Command-Line Parser
SparkSubmitOptionParser is the parser of spark-submit's command-line options.
--driver-cores
--exclude-packages
--executor-cores
--executor-memory
--files
--jars
--keytab
--name
--num-executors
--packages
--principal
--proxy-user
--py-files
--queue
--repositories
--supervise
--total-executor-cores
--verbose or -v
SparkSubmitOptionParser Callbacks
SparkSubmitOptionParser is supposed to be overridden for the following capabilities (as callbacks).
Table 2. Callbacks
handle: Executed when an option with an argument is parsed.
handleUnknown: Executed when an unrecognized option is parsed.
handleExtraArgs: Executed for the command-line arguments that handle and handleUnknown callbacks have not processed.
Note: org.apache.spark.launcher.SparkSubmitArgumentsParser is a custom SparkSubmitOptionParser .
parse calls the handle callback whenever it finds a known command-line option or a switch (a command-line option with no value).
SparkSubmitCommandBuilder Command Builder
1. pyspark-shell-main
2. sparkr-shell-main
3. run-example
1. handle to handle the known options (see the table below). It sets up master and the other internal properties from the parsed arguments.
Note: For spark-shell it assumes that the application arguments are after spark-submit 's arguments.
SparkSubmitCommandBuilder.buildCommand / buildSparkSubmitCommand
buildSparkSubmitCommand builds the first part of the Java command passing in the extra
variables.
addPermGenSizeOpt case…elaborate
buildSparkSubmitArgs method
List<String> buildSparkSubmitArgs()
buildSparkSubmitArgs builds a list of command-line arguments that spark-submit recognizes (when it is executed later on and uses the very same SparkSubmitOptionParser parser to parse command-line arguments).
verbose VERBOSE
getEffectiveConfig internal method builds effectiveConfig that is conf with the Spark
properties file loaded (using loadPropertiesFile internal method) skipping keys that have
already been loaded (it happened when the command-line options were parsed in handle
method).
isClientMode checks master first (from the command-line options) and then the spark.master Spark property.
Caution: FIXME Review master and deployMode . How are they set?
isClientMode responds positive when no explicit master is set or the client deploy mode is set explicitly.
OptionParser
OptionParser is a custom SparkSubmitOptionParser that SparkSubmitCommandBuilder uses to parse command-line arguments (through its callbacks).
--deploy-mode: Sets deployMode (internal property)
--properties-file: Sets propertiesFile (internal property)
--driver-java-options: Sets spark.driver.extraJavaOptions (in conf )
--driver-library-path: Sets spark.driver.extraLibraryPath (in conf )
--driver-class-path: Sets spark.driver.extraClassPath (in conf )
--version: Disables isAppResourceReq and adds itself to sparkArgs
Otherwise, handleUnknown sets appResource and stops further parsing of the argument list.
spark-class shell script
Note Ultimately, any shell script in Spark, e.g. spark-submit, calls spark-class script.
You can find spark-class script in bin directory of the Spark distribution.
Depending on the Spark distribution (or rather lack thereof), i.e. whether RELEASE file exists or not, it sets SPARK_JARS_DIR environment variable to [SPARK_HOME]/jars or [SPARK_HOME]/assembly/target/scala-[SPARK_SCALA_VERSION]/jars , respectively (with the latter for a local build).
If SPARK_JARS_DIR does not exist, spark-class prints the following error message and exits
with the code 1 .
spark-class sets LAUNCH_CLASSPATH environment variable to include all the jars under
SPARK_JARS_DIR .
spark-class then executes org.apache.spark.launcher.Main to compute the Spark command to launch. The Main class programmatically computes the command that spark-class executes afterwards.
Main expects that the first parameter is the class name that is the "operation mode":
$ ./bin/spark-class org.apache.spark.launcher.Main
Exception in thread "main" java.lang.IllegalArgumentException: Not enough arguments: m
issing class name.
at org.apache.spark.launcher.CommandBuilderUtils.checkArgument(CommandBuilderU
tils.java:241)
at org.apache.spark.launcher.Main.main(Main.java:51)
Main then builds and prints out the command for spark-class to execute.
AbstractCommandBuilder
AbstractCommandBuilder is the base command builder for SparkSubmitCommandBuilder and SparkClassCommandBuilder specialized command builders.
buildJavaCommand
getConfDir
buildJavaCommand builds the Java command for a Spark application (which is a collection of
elements with the path to java executable, JVM options from java-opts file, and a class
path).
buildJavaCommand loads extra Java options from the java-opts file in configuration
directory if the file exists and adds them to the result Java command.
Eventually, buildJavaCommand builds the class path (with the extra class path if non-empty)
and adds it as -cp to the result Java command.
buildClassPath method
Note: Directories always end up with the OS-specific file separator at the end of their paths.
Properties loadPropertiesFile()
loadPropertiesFile loads the Spark settings from a properties file (when specified on the command line) or spark-defaults.conf in the configuration directory.
It loads the settings from the following files starting from the first and checking every location
until the first properties file is found:
1. The properties file specified using the --properties-file command-line option (or set through AbstractCommandBuilder.setPropertiesFile ).
2. [SPARK_CONF_DIR]/spark-defaults.conf
3. [SPARK_HOME]/conf/spark-defaults.conf
getConfDir returns the configuration directory of a Spark application.
getSparkHome returns the Spark home directory of a Spark application. When it cannot be found, the following error is reported:
Spark home not found; set it explicitly or use the SPARK_HOME environment variable.
SparkLauncher — Launching Spark Applications Programmatically
SparkLauncher is an interface to launch Spark applications programmatically, i.e. from code (not spark-submit directly). It uses a builder pattern to configure a Spark application and launch it as a child process using spark-submit.
SparkLauncher belongs to the org.apache.spark.launcher package in the spark-launcher build module.
SparkLauncher uses SparkSubmitCommandBuilder to build the Spark command of the application to launch.
addAppArgs(String… args): Adds command line arguments for a Spark application.
addFile(String file): Adds a file to be submitted with a Spark application.
addJar(String jar): Adds a jar file to be submitted with the application.
addPyFile(String file): Adds a python file / zip / egg to be submitted with a Spark application.
addSparkArg(String arg): Adds a no-value argument to the Spark invocation.
directory(File dir): Sets the working directory of spark-submit.
redirectError(File errFile): Redirects error output to the specified errFile file.
redirectOutput(File outFile): Redirects output to the specified outFile file.
setVerbose(boolean verbose): Enables verbose reporting for SparkSubmit.
After the invocation of a Spark application is set up, use launch() method to launch a sub-
process that will start the configured Spark application. It is however recommended to use
startApplication method instead.
import org.apache.spark.launcher.SparkLauncher
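// The rest of the example is not preserved in this excerpt. A minimal sketch,
// building on the import above (the paths, class name and master are made up
// purely for illustration):
val handle = new SparkLauncher()
  .setSparkHome("/path/to/spark")          // made-up location
  .setAppResource("/path/to/my-app.jar")   // made-up application jar
  .setMainClass("com.example.MyApp")       // made-up main class
  .setMaster("local[*]")
  .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
  .startApplication()                      // returns a SparkAppHandle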
Spark Architecture
Spark uses a master/worker architecture. There is a driver that talks to a single
coordinator called master that manages workers in which executors run.
Driver
A Spark driver (aka an application’s driver process) is a JVM process that hosts
SparkContext for a Spark application. It is the master node in a Spark application.
It is the cockpit of jobs and tasks execution (using DAGScheduler and Task Scheduler). It
hosts Web UI for the environment.
A driver is where the task scheduler lives and spawns tasks across workers.
Note: Spark shell is a Spark application and the driver. It creates a SparkContext that is available as sc .
Driver requires the additional services (beside the common ones like ShuffleManager,
MemoryManager, BlockTransferService, BroadcastManager, CacheManager):
Listener Bus
RPC Environment
HttpFileServer
Launches tasks
Driver’s Memory
It can be set first using spark-submit’s --driver-memory command-line option or
spark.driver.memory and falls back to SPARK_DRIVER_MEMORY if not set earlier.
Note It is printed out to the standard error output in spark-submit’s verbose mode.
Driver’s Cores
It can be set first using spark-submit’s --driver-cores command-line option for cluster
deploy mode.
Note: In client deploy mode the driver’s memory corresponds to the memory of the JVM process the Spark application runs on.
Note It is printed out to the standard error output in spark-submit’s verbose mode.
Settings
Table 1. Spark Properties
spark.driver.blockManager.port (default: spark.blockManager.port ): Port to use for the BlockManager on the driver. More precisely, spark.driver.blockManager.port is used when NettyBlockTransferService is created (while SparkEnv is created for the driver).
spark.driver.extraLibraryPath
spark.driver.appUIAddress
spark.driver.appUIAddress is used exclusively in Spark on YARN. It is set when YarnClientSchedulerBackend starts to run ExecutorLauncher (and register ApplicationMaster for the Spark application).
spark.driver.libraryPath
spark.driver.extraClassPath
spark.driver.extraClassPath system property sets the additional classpath entries (e.g. jars
and directories) that should be added to the driver’s classpath in cluster deploy mode.
For client deploy mode you can use a properties file or command line to set spark.driver.extraClassPath .
Note: Do not use SparkConf since it is too late for client deploy mode given the JVM has already been set up to start a Spark application.
Refer to buildSparkSubmitCommand Internal Method for the very low-level details of how it is handled internally.
Executor
Executor is a distributed agent that is responsible for executing tasks.
Executor typically runs for the entire lifetime of a Spark application which is called static
allocation of executors (but you could also opt in for dynamic allocation).
Executors report heartbeats and partial metrics for active tasks to the HeartbeatReceiver RPC endpoint on the driver.
When an executor starts it first registers with the driver and communicates directly with it to execute tasks.
Executors can run multiple tasks over their lifetime, both in parallel and sequentially. They track running tasks (by their task ids in the runningTasks internal registry). Consult the Launching Tasks section.
Executors use an Executor task launch worker thread pool for launching tasks.
Executors send metrics (and heartbeats) using the internal heartbeater - Heartbeat Sender
Thread.
It is recommended to have as many executors as data nodes and as many cores as you can
get from the cluster.
Executors are described by their id, hostname, environment (as SparkEnv ), and classpath (and, less importantly and more for internal optimization, whether they run in local or cluster mode).
maxDirectResultSize
maxResultSize
Refer to Logging.
updateDependencies …FIXME
createClassLoader Method
Caution FIXME
addReplClassLoaderIfNeeded Method
Caution FIXME
Executor ID
SparkEnv
Collection of user-defined JARs (to add to tasks' class path). Empty by default
Flag that says whether the executor runs in local or cluster mode (default: false , i.e. cluster mode is preferred)
Note isLocal is enabled exclusively for LocalEndpoint (for Spark in local mode).
When created, you should see the following INFO messages in the logs:
(only for non-local modes) Executor requests the BlockManager to initialize (with the Spark
application id of the SparkConf).
(only for non-local modes) Executor requests the MetricsSystem to register the
ExecutorSource and shuffleMetricsSource of the BlockManager.
Executor creates a task class loader (optionally with REPL support) that the current
Executor initializes the internal registries and counters in the meantime (not necessarily at
launchTask(
context: ExecutorBackend,
taskId: Long,
attemptNumber: Int,
taskName: String,
serializedTask: ByteBuffer): Unit
Note Executors track the TaskRunners that run tasks. A task might not be assigned to a TaskRunner yet when the executor sends a heartbeat.
A blocking Heartbeat message that holds the executor id, all accumulator updates (per task
id), and BlockManagerId is sent to HeartbeatReceiver RPC endpoint (with
spark.executor.heartbeatInterval timeout).
If the response requests to reregister BlockManager, you should see the following INFO
message in the logs:
If there are any issues with communicating with the driver, you should see the following
WARN message in the logs:
The internal heartbeatFailures is incremented and checked to be less than the acceptable
number of failures (i.e. spark.executor.heartbeat.maxFailures Spark property). If the number
is greater, the following ERROR is printed out to the logs:
ERROR Executor: Exit as unable to send heartbeats to driver more than [HEARTBEAT_MAX_FAILURES] times
reportHeartBeat(): Unit
reportHeartBeat collects TaskRunners for currently running tasks (aka active tasks) with
their tasks deserialized (i.e. either ready for execution or already started).
reportHeartBeat then records the latest values of internal and external accumulators for
every task.
In case of a non-fatal exception, you should see the following WARN message in the logs
(followed by the stack trace).
Coarse-Grained Executors
Coarse-grained executors are executors that use CoarseGrainedExecutorBackend for task
scheduling.
Resource Offers
Read resourceOffers in TaskSchedulerImpl and resourceOffer in TaskSetManager.
The worker threads are named Executor task launch worker-[ID] (with ID being the task id) for launching tasks.
threadPool is created when Executor is created and shut down when it stops.
You can change the assigned memory per executor per node in standalone cluster using
SPARK_EXECUTOR_MEMORY environment variable.
You can find the value displayed as Memory per Node in web UI for standalone Master (as
depicted in the figure below).
Metrics
Every executor registers its own ExecutorSource to report metrics.
stop(): Unit
Settings
Table 3. Spark Properties
spark.executor.cores
Number of cores for an executor.
spark.executor.id
spark.executor.logs.rolling.maxSize
spark.executor.logs.rolling.maxRetainedFiles
spark.executor.logs.rolling.strategy
spark.executor.logs.rolling.time.interval
spark.executor.memory (default: 1g )
Equivalent to SPARK_EXECUTOR_MEMORY environment variable.
spark.executor.port
spark.executor.uri
Equivalent to SPARK_EXECUTOR_URI
spark.task.maxDirectResultSize (default: 1048576B )
TaskRunner
TaskRunner is a thread of execution of a single task.
FIXME
taskId
Used when…FIXME
FIXME
threadName
Used when…FIXME
FIXME
taskName
Used when…FIXME
FIXME
finished
Used when…FIXME
FIXME
killed
Used when…FIXME
FIXME
threadId
Used when…FIXME
FIXME
startGCTime
Used when…FIXME
FIXME
task
Used when…FIXME
FIXME
replClassLoader
Used when…FIXME
Refer to Logging.
ExecutorBackend
TaskDescription
computeTotalGcTime Method
Caution FIXME
updateDependencies Method
Caution FIXME
setTaskFinishedAndClearInterruptStatus Method
Caution FIXME
Lifecycle
It is created with an ExecutorBackend (to send the task’s status updates to), task and
attempt ids, task name, and serialized version of the task (as ByteBuffer ).
run(): Unit
When executed, run initializes threadId as the current thread identifier (using Java’s
Thread)
run then sets the name of the current thread as threadName (using Java’s Thread).
Note run uses ExecutorBackend that was specified when TaskRunner was created.
run deserializes the task (using the context class loader) and sets its localProperties and
TaskMemoryManager . run sets the task internal reference to hold the deserialized task.
run records the current time as the task’s start time (as taskStart ).
run runs the task (with taskAttemptId as taskId, attemptNumber from TaskDescription ,
Note The task runs inside a "monitored" block (i.e. try-finally block) to detect any memory and lock leaks after the task’s run finishes regardless of the final outcome - the computed value or an exception thrown.
After the task’s run has finished (inside the "finally" block of the "monitored" block), run
requests BlockManager to release all locks of the task (for the task’s taskId). The locks are
later used for lock leak detection.
run then requests TaskMemoryManager to clean up allocated memory (that helps finding
memory leaks).
If run detects a memory leak of the managed memory (i.e. the memory freed is greater than 0 ) and spark.unsafe.exceptionOnMemoryLeak Spark property is enabled (it is not by default) and no exception was reported while the task ran, run reports a SparkException :
ERROR Executor: Managed memory leak detected; size = [freedMemory] bytes, TID = [taskId]
If run detects lock leaking (i.e. the number of released locks is greater than 0 ) and spark.storage.exceptionOnPinLeak Spark property is enabled (it is not by default) and no exception was reported while the task ran, run reports a SparkException :
INFO Executor: [releasedLocks] block locks were not released by TID = [taskId]:
[releasedLocks separated by comma]
Right after the "monitored" block, run records the current time as the task’s finish time (as taskFinish ).
If the task was killed (while it was running), run reports a TaskKilledException (and the
TaskRunner exits).
run creates a Serializer and serializes the task’s result. run measures the time to serialize the result. run then records the following task metrics:
executorDeserializeTime
executorDeserializeCpuTime
executorRunTime
executorCpuTime
jvmGCTime
resultSerializationTime
run collects the latest values of internal and external accumulators used in the task.
run creates a DirectTaskResult (with the serialized result and the latest values of
accumulators).
run serializes the DirectTaskResult and gets the byte buffer’s limit.
run selects the proper serialized version of the result before sending it to ExecutorBackend .
run branches off based on the serialized DirectTaskResult byte buffer’s limit.
When maxResultSize is greater than 0 and the serialized DirectTaskResult buffer limit
exceeds it, the following WARN message is displayed in the logs:
WARN Executor: Finished [taskName] (TID [taskId]). Result is larger than maxResultSize
([resultSize] > [maxResultSize]), dropping it.
$ ./bin/spark-shell -c spark.driver.maxResultSize=1m
scala> sc.version
res0: String = 2.0.0-SNAPSHOT
scala> sc.getConf.get("spark.driver.maxResultSize")
res1: String = 1m
In this case, run creates a IndirectTaskResult (with a TaskResultBlockId for the task’s taskId and resultSize ) and serializes it.
When the serialized result is smaller than maxResultSize but still exceeds maxDirectResultSize, run stores it in BlockManager and you should see the following INFO message in the logs:
INFO Executor: Finished [taskName] (TID [taskId]). [resultSize] bytes result sent via BlockManager)
In this case, run also creates a IndirectTaskResult (with a TaskResultBlockId for the task’s taskId and resultSize ) and serializes it.
Note The difference between the two above cases is that the result is dropped or stored in BlockManager with MEMORY_AND_DISK_SER storage level.
When the two cases above do not hold, you should see the following INFO message in the
logs:
INFO Executor: Finished [taskName] (TID [taskId]). [resultSize] bytes result sent to driver
run uses the serialized DirectTaskResult byte buffer as the final serializedResult .
run notifies ExecutorBackend that taskId is in TaskState.FINISHED state with the serialized
result and removes taskId from the owning executor’s runningTasks registry.
When run catches an exception while executing the task, run acts according to its type (as presented in the following "run’s Exception Cases" table and the following sections linked from the table).
FetchFailedException
When FetchFailedException is reported while running a task, run setTaskFinishedAndClearInterruptStatus and notifies ExecutorBackend that the task has failed (with taskId, TaskState.FAILED , and a serialized reason).
Note run uses a closure Serializer to serialize the failure reason. The Serializer was created before run ran the task.
TaskKilledException
When TaskKilledException is reported while running a task, you should see the following
INFO message in the logs:
run then setTaskFinishedAndClearInterruptStatus and notifies ExecutorBackend that the task has been killed (with taskId, TaskState.KILLED , and a serialized TaskKilled object).
CommitDeniedException
When CommitDeniedException is reported while running a task, run
setTaskFinishedAndClearInterruptStatus and notifies ExecutorBackend that the task has
failed (with taskId, TaskState.FAILED , and a serialized TaskKilled object).
Throwable
When run catches a Throwable , you should see the following ERROR message in the
logs (followed by the exception).
run then records the following task metrics (only when Task is available):
executorRunTime
jvmGCTime
run then collects the latest values of internal and external accumulators (with the taskFailed flag enabled) to be reported along with the error.
Note The difference between this Throwable case and the other FAILED cases (i.e. FetchFailedException and CommitDeniedException) is just the serialized ExceptionFailure vs a reason being sent to ExecutorBackend , respectively.
kill marks the TaskRunner as killed and kills the task (if available and not finished
already).
Note kill passes the input interruptThread on to the task itself while killing it.
When executed, you should see the following INFO message in the logs:
Note killed flag is checked periodically in run to stop executing the task. Once killed, the task will eventually stop.
Settings
Table 3. Spark Properties
spark.unsafe.exceptionOnMemoryLeak (default: false ) FIXME
ExecutorSource
ExecutorSource is a metrics source of an Executor. It uses an executor’s threadPool for
Note Every executor has its own separate ExecutorSource that is registered when CoarseGrainedExecutorBackend receives a RegisteredExecutor .
Master
A master is a running Spark instance that connects to a cluster manager for resources.
Workers
Workers (aka slaves) are running Spark instances where executors live to execute tasks.
They are the compute nodes in Spark.
Each worker hosts a local Block Manager that serves blocks to other workers in a Spark cluster. Workers communicate among themselves using their Block Manager instances.
Explain task execution in Spark and understand Spark’s underlying execution model.
When you create SparkContext, each worker starts an executor. This is a separate process
(JVM), and it loads your jar, too. The executors connect back to your driver program. Now
the driver can send them commands, like flatMap , map and reduceByKey . When the
driver quits, the executors shut down.
A new process is not started for each step. A new process is started on each worker when
the SparkContext is constructed.
The executor deserializes the command (this is possible because it has loaded your jar),
and executes it on a partition.
1. Create RDD graph, i.e. DAG (directed acyclic graph) of RDDs to represent entire
computation.
2. Create stage graph, i.e. a DAG of stages that is a logical execution plan based on the
RDD graph. Stages are created by breaking the RDD graph at shuffle boundaries.
Based on this graph, two stages are created. The stage creation rule is based on the idea of
pipelining as many narrow transformations as possible. RDD operations with "narrow"
dependencies, like map() and filter() , are pipelined together into one set of tasks in
each stage.
In the end, every stage will only have shuffle dependencies on other stages, and may
compute multiple operations inside it.
In the WordCount example, the narrow transformation finishes at per-word count. Therefore,
you get two stages:
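For reference, a classic WordCount in the Spark shell looks as follows (a sketch; the input file and the whitespace split are illustrative). The shuffle introduced by reduceByKey marks the stage boundary: the narrow textFile , flatMap and map steps form the first (shuffle map) stage, and reduceByKey together with the action forms the second (result) stage.

val wordCount = sc.textFile("README.md")
  .flatMap(_.split("\\s+"))
  .map(word => (word, 1))
  .reduceByKey(_ + _)

wordCount.count  // the action triggers a job with the two stages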
Once stages are defined, Spark will generate tasks from stages. The first stage will create
ShuffleMapTasks with the last stage creating ResultTasks because in the last stage, one
action operation is included to produce results.
The number of tasks to be generated depends on how your files are distributed. Suppose that you have three different files in three different nodes; the first stage will generate 3 tasks: one task per partition.
Therefore, you should not map your steps to tasks directly. A task belongs to a stage, and is
related to a partition.
The number of tasks being generated in each stage will be equal to the number of partitions.
Cleanup
Caution FIXME
Settings
spark.worker.cleanup.enabled (default: false ) Cleanup enabled.
Anatomy of Spark Application
A Spark application is uniquely identified by a pair of the application and application attempt
ids.
For it to work, you have to create a Spark configuration using SparkConf or use a custom
SparkContext constructor.
package pl.japila.spark

object SparkMeApp {
  def main(args: Array[String]) {
    // create a SparkConf and a SparkContext, build RDDs, run actions, then stop the context
  }
}
Tip Spark shell creates a Spark context and SQL context for you at startup.
You can then create RDDs, transform them to other RDDs and ultimately execute actions.
You can also cache interim RDDs to speed up data processing.
After all the data processing is completed, the Spark application finishes by stopping the
Spark context.
SparkConf — Programmable Configuration for Spark Applications
Caution TODO
Describe SparkConf object for the application configuration:
the default configs
system properties
…
setIfMissing Method
Caution FIXME
isExecutorStartupConf Method
Caution FIXME
set Method
Caution FIXME
Spark Properties
Every user program starts with creating an instance of SparkConf that holds the master URL to connect to ( spark.master ), the name for your Spark application (that is later displayed in web UI and becomes spark.app.name ) and other Spark properties required for the application to run properly.
Start Spark shell with --conf spark.logConf=true to log the effective Spark
configuration as INFO when SparkContext is started.
You can query for the values of Spark properties in Spark shell as follows:
scala> sc.getConf.getOption("spark.local.dir")
res0: Option[String] = None
scala> sc.getConf.getOption("spark.app.name")
res1: Option[String] = Some(Spark shell)
scala> sc.getConf.get("spark.master")
res2: String = local[*]
Read spark-defaults.conf.
--conf or -c - the command-line option used by spark-submit (and other shell scripts
SparkConf
Default Configuration
The default Spark configuration is created when you execute the following code:
import org.apache.spark.SparkConf
val conf = new SparkConf
You can use conf.toDebugString or conf.getAll to have the loaded spark.* system properties printed out.
scala> conf.getAll
res0: Array[(String, String)] = Array((spark.app.name,Spark shell), (spark.jars,""), (
spark.master,local[*]), (spark.submit.deployMode,client))
scala> conf.toDebugString
res1: String =
spark.app.name=Spark shell
spark.jars=
spark.master=local[*]
spark.submit.deployMode=client
scala> println(conf.toDebugString)
spark.app.name=Spark shell
spark.jars=
spark.master=local[*]
spark.submit.deployMode=client
getAppId: String
Settings
Spark Properties and spark-defaults.conf Properties File
Deploy Mode
Deploy mode specifies where the driver executes in the deployment environment.
client (default) - the driver runs on the machine that the Spark application was launched from.
cluster - the driver runs on a node in the cluster.
Note cluster deploy mode is only available for non-local cluster deployments.
You can control the deploy mode of a Spark application using spark-submit’s --deploy-mode
command-line option or spark.submit.deployMode Spark property.
Caution FIXME
spark.submit.deployMode
spark.submit.deployMode (default: client ) can be client or cluster .
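You can check the deploy mode of the current application in the Spark shell (which always runs in client deploy mode):

scala> sc.getConf.get("spark.submit.deployMode")
res0: String = client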
SparkContext
Note You could also assume that a SparkContext instance is a Spark application.
Spark context sets up internal services and establishes a connection to a Spark execution
environment.
Once a SparkContext is created you can use it to create RDDs, accumulators and
broadcast variables, access Spark services and run jobs (until SparkContext is stopped).
A Spark context is essentially a client of Spark’s execution environment and acts as the
master of your Spark application (don’t get confused with the other meaning of Master in
Spark, though).
SparkEnv
SparkConf
application name
deploy mode
default level of parallelism that specifies the number of partitions in RDDs when
they are created without specifying the number explicitly by a user.
Spark user
URL of web UI
Spark version
Storage status
Setting Configuration
master URL
RDDs
Accumulators
Broadcast variables
Cancelling a job
Cancelling a stage
Closure cleaning
Registering SparkListener
persistRDD
persistentRdds
getRDDStorageInfo
getPersistentRDDs
unpersistRDD
Refer to Logging.
addFile Method
unpersistRDD requests BlockManagerMaster to remove the blocks for the RDD (given
rddId ).
Caution FIXME
Caution FIXME
postApplicationEnd Method
Caution FIXME
clearActiveContext Method
Caution FIXME
cancelJob(jobId: Int)
reason ).
executors:
requestExecutors
killExecutors
requestTotalExecutors
(private!) getExecutorIds
CoarseGrainedSchedulerBackend.
Caution FIXME
requestTotalExecutors(
numExecutors: Int,
localityAwareTasks: Int,
hostToLocalTaskCount: Map[String, Int]): Boolean
When called for other scheduler backends you should see the following WARN message in
the logs:
contract. It simply passes the call on to the current coarse-grained scheduler backend, i.e.
calls getExecutorIds .
When called for other scheduler backends you should see the following WARN message in
the logs:
Note You may want to read Inside Creating SparkContext to learn what happens behind the scenes when SparkContext is created.
getOrCreate(): SparkContext
getOrCreate(conf: SparkConf): SparkContext
getOrCreate methods allow you to get the existing SparkContext or create a new one.
import org.apache.spark.SparkContext
val sc = SparkContext.getOrCreate()
The no-param getOrCreate method requires that the two mandatory Spark settings - master
and application name - are specified using spark-submit.
Constructors
SparkContext()
SparkContext(conf: SparkConf)
SparkContext(master: String, appName: String, conf: SparkConf)
SparkContext(
master: String,
appName: String,
sparkHome: String = null,
jars: Seq[String] = Nil,
environment: Map[String, String] = Map())
import org.apache.spark.SparkConf
val conf = new SparkConf()
.setMaster("local[*]")
.setAppName("SparkMe App")
import org.apache.spark.SparkContext
val sc = new SparkContext(conf)
When a Spark context starts up you should see the following INFO in the logs (amongst the
other messages that come from the Spark services):
Note Only one SparkContext may be running in a single JVM (check out SPARK-2243 Support multiple SparkContexts in the same JVM). Sharing access to a SparkContext in the JVM is the solution to share data within Spark (without relying on other means of data sharing using external data stores).
Caution FIXME
getConf: SparkConf
Note Changing the SparkConf object does not change the current configuration (as the method returns a copy).
master: String
master method returns the current value of spark.master which is the deployment
environment in use.
appName: String
applicationAttemptId: Option[String]
application.
getExecutorStorageStatus: Array[StorageStatus]
BlockManagers).
deployMode: String
set.
getSchedulingMode: SchedulingMode.SchedulingMode
Note getPoolForName is part of the Developer’s API and may change in the future.
Internally, it requests the TaskScheduler for the root pool and looks up the Schedulable by
the pool name.
getAllPools: Seq[Schedulable]
Note getAllPools is used to calculate pool names for Stages tab in web UI with FAIR scheduling mode used.
defaultParallelism: Int
taskScheduler: TaskScheduler
taskScheduler_=(ts: TaskScheduler): Unit
version: String
makeRDD Method
Caution FIXME
submitJob[T, U, R](
rdd: RDD[T],
processPartition: Iterator[T] => U,
partitions: Seq[Int],
resultHandler: (Int, U) => Unit,
resultFunc: => R): SimpleFutureAction[R]
It is used in:
AsyncRDDActions methods
Spark Configuration
Caution FIXME
When an RDD is created, it belongs to and is completely owned by the Spark context it
originated from. RDDs can’t by design be shared between SparkContexts.
Caution FIXME
unpersist removes an RDD from the master’s Block Manager (calls removeRdd(rddId: Int,
setCheckpointDir(directory: String)
Caution FIXME
register registers the acc accumulator. You can optionally give an accumulator a name .
Tip You can create built-in accumulators for longs, doubles, and collection types using specialized methods.
longAccumulator: LongAccumulator
longAccumulator(name: String): LongAccumulator
doubleAccumulator: DoubleAccumulator
doubleAccumulator(name: String): DoubleAccumulator
collectionAccumulator[T]: CollectionAccumulator[T]
collectionAccumulator[T](name: String): CollectionAccumulator[T]
java.util.List[T] .
scala> val counter = sc.longAccumulator("counter")

scala> counter.value
res0: Long = 0

scala> sc.parallelize(1 to 9).foreach(n => counter.add(n))

scala> counter.value
res3: Long = 45
The name input parameter allows you to give a name to an accumulator and have it
displayed in Spark UI (under Stages tab for a given stage).
broadcast method creates a broadcast variable. It is a shared memory with value (as
Spark transfers the value to Spark executors once, and tasks can share it without incurring
repetitive network transmissions when the broadcast variable is used multiple times.
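For example (the lookup table below is purely illustrative):

val lookup = sc.broadcast(Map(1 -> "one", 2 -> "two", 3 -> "three"))

// tasks read the broadcast value locally instead of shipping the map with every task
sc.parallelize(1 to 3).map(n => lookup.value.getOrElse(n, "?")).collect
// Array(one, two, three)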
Once created, the broadcast variable (and other blocks) are displayed per executor and the
driver in web UI (under Executors tab).
scala> sc.addJar("build.sbt")
15/11/11 21:54:54 INFO SparkContext: Added JAR build.sbt at http://192.168.1.4:49427/j
ars/build.sbt with timestamp 1447275294457
shuffle ids using nextShuffleId internal counter for registering shuffle dependencies to
Shuffle Service.
runJob[T, U](
rdd: RDD[T],
func: (TaskContext, Iterator[T]) => U,
partitions: Seq[Int],
resultHandler: (Int, U) => Unit): Unit
runJob[T, U](
rdd: RDD[T],
func: (TaskContext, Iterator[T]) => U,
partitions: Seq[Int]): Array[U]
runJob[T, U](
rdd: RDD[T],
func: Iterator[T] => U,
partitions: Seq[Int]): Array[U]
runJob[T, U](rdd: RDD[T], func: (TaskContext, Iterator[T]) => U): Array[U]
runJob[T, U](rdd: RDD[T], func: Iterator[T] => U): Array[U]
runJob[T, U](
rdd: RDD[T],
processPartition: (TaskContext, Iterator[T]) => U,
resultHandler: (Int, U) => Unit)
runJob[T, U: ClassTag](
rdd: RDD[T],
processPartition: Iterator[T] => U,
resultHandler: (Int, U) => Unit)
runJob executes a function on one or many partitions of a RDD (in a SparkContext space)
Internally, runJob first makes sure that the SparkContext is not stopped. If it is, you should
see the following IllegalStateException exception in the logs:
runJob then calculates the call site and cleans a func closure.
With spark.logLineage enabled (which is not by default), you should see the following INFO
message with toDebugString (executed on rdd ):
Tip runJob just prepares input parameters for DAGScheduler to run a job.
After DAGScheduler is done and the job has finished, runJob stops ConsoleProgressBar
and performs RDD checkpointing of rdd .
Tip For some actions, e.g. first() and lookup() , there is no need to compute all the partitions of the RDD in a job. And Spark knows it.
import org.apache.spark.TaskContext
scala> sc.runJob(lines, (t: TaskContext, i: Iterator[String]) => 1) (1)
res0: Array[Int] = Array(1, 1) (2)
1. Run a job using runJob on lines RDD with a function that returns 1 for every partition
(of lines RDD).
2. What can you say about the number of partitions of the lines RDD? Is your result
res0 different than mine? Why?
partition).
stop(): Unit
Internally, stop enables stopped internal flag. If already stopped, you should see the
following INFO message in the logs:
3. Stops web UI
5. Stops ContextCleaner
Ultimately, you should see the following INFO message in the logs:
Note You can also register custom listeners using spark.extraListeners setting.
Events
setLogLevel(logLevel: String)
setLogLevel allows you to set the root logging level in a Spark application, e.g. Spark shell.
Every time an action is called, Spark cleans up the closure, i.e. the body of the action, before
it is serialized and sent over the wire to executors.
Not only does ClosureCleaner.clean method clean the closure, but also does it transitively,
i.e. referenced closures are cleaned transitively.
Refer to Logging.
With DEBUG logging level you should see the following messages in the logs:
Serialization is verified using a new instance of Serializer (as closure Serializer). Refer to
Serialization.
Hadoop Configuration
While a SparkContext is being created, so is a Hadoop configuration (as an instance of
org.apache.hadoop.conf.Configuration that is available as _hadoopConfiguration ).
of AWS_ACCESS_KEY_ID
Every spark.hadoop. setting becomes a setting of the configuration with the prefix
spark.hadoop. removed for the key.
startTime: Long
scala> sc.startTime
res0: Long = 1464425605653
sparkUser: String
submitMapStage[K, V, C](
dependency: ShuffleDependency[K, V, C]): SimpleFutureAction[MapOutputStatistics]
returns a SimpleFutureAction .
Internally, submitMapStage calculates the call site first and submits it with localProperties .
Caution FIXME
cancelJobGroup(groupId: String)
Caution FIXME
setJobGroup(
groupId: String,
description: String,
interruptOnCancel: Boolean = false): Unit
spark.jobGroup.id as groupId
spark.job.description as description
spark.job.interruptOnCancel as interruptOnCancel
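For example (the group id and description are arbitrary):

sc.setJobGroup("nightly-report", "Jobs of the nightly reporting run", interruptOnCancel = true)
// ...actions submitted from this thread now belong to the group...
sc.cancelJobGroup("nightly-report")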
cleaner Method
cleaner: Option[ContextCleaner]
getPreferredLocs simply requests DAGScheduler for the preferred locations for partition .
getRDDStorageInfo takes all the RDDs (from persistentRdds registry) that match filter
getRDDStorageInfo then updates the RDDInfos with the current status of all BlockManagers
In the end, getRDDStorageInfo gives only the RDDs that are cached (i.e. the sum of memory and disk sizes as well as the number of partitions cached are greater than 0 ).
Note getRDDStorageInfo is used when RDD is requested for RDD lineage graph.
Settings
spark.driver.allowMultipleContexts
Quoting the scaladoc of org.apache.spark.SparkContext:
Only one SparkContext may be active per JVM. You must stop() the active
SparkContext before creating a new one.
If enabled (i.e. true ), Spark prints the following WARN message to the logs:
Only one SparkContext may be running in this JVM (see SPARK-2243). To ignore this erro
r, set spark.driver.allowMultipleContexts = true. The currently running SparkContext w
as created at:
[ctx.creationSite.longForm]
When creating an instance of SparkContext , Spark marks the current thread as having it
being created (very early in the instantiation process).
Caution It’s not guaranteed that Spark will work properly with two or more SparkContexts. Consider the feature a work in progress.
statusStore: AppStatusStore
uiWebUrl: Option[String]
Environment Variables
Table 3. Environment Variables
Environment Variable Default Value Description
HeartbeatReceiver RPC Endpoint
HeartbeatReceiver.
HeartbeatReceiver receives Heartbeat messages from executors that Spark uses as the
mechanism to receive accumulator updates (with task metrics and a Spark application’s
accumulators) and pass them along to TaskScheduler .
ExpireDeadHosts FIXME
executorLastSeen
Executor ids and the timestamps of when the last heartbeat was received.
scheduler TaskScheduler
Refer to Logging.
SparkContext
Clock
ExecutorRegistered
ExecutorRegistered(executorId: String)
When received, HeartbeatReceiver registers the executorId executor and the current time
(in executorLastSeen internal registry).
Note HeartbeatReceiver uses the internal Clock to know the current time.
ExecutorRemoved
ExecutorRemoved(executorId: String)
ExpireDeadHosts
ExpireDeadHosts
When ExpireDeadHosts arrives the following TRACE is printed out to the logs:
Each executor (in executorLastSeen registry) is checked whether the time it was last seen is
not longer than spark.network.timeout.
For any such executor, the following WARN message is printed out to the logs:
killExecutorThread).
Heartbeat
Heartbeat(executorId: String,
accumUpdates: Array[(Long, Seq[AccumulatorV2[_, _]])],
blockManagerId: BlockManagerId)
When the executor is found, HeartbeatReceiver updates the time the heartbeat was
received (in executorLastSeen).
Note HeartbeatReceiver uses the internal Clock to know the current time.
heartbeat was received from the executor (using TaskScheduler internal reference).
HeartbeatReceiver posts a HeartbeatResponse back to the executor (with the response from
TaskScheduler whether the executor has been registered already or not so it may eventually
need to re-register).
If however the executor was not found (in executorLastSeen registry), i.e. the executor was
not registered before, you should see the following DEBUG message in the logs and the
response is to notify the executor to re-register.
In a very rare case, when TaskScheduler is not yet assigned to HeartbeatReceiver , you
should see the following WARN message in the logs and the response is to notify the
executor to re-register.
TaskSchedulerIsSet
TaskSchedulerIsSet
onExecutorAdded Method
registers an executor).
onExecutorRemoved Method
executor).
When called, HeartbeatReceiver cancels the checking task (that sends a blocking
ExpireDeadHosts every spark.network.timeoutInterval on eventLoopThread - Heartbeat
Receiver Event Loop Thread - see Starting (onStart method)) and shuts down
eventLoopThread and killExecutorThread executors.
expireDeadHosts(): Unit
Caution FIXME
Settings
Table 3. Spark Properties
Spark Property Default Value
spark.storage.blockManagerTimeoutIntervalMs 60s
spark.storage.blockManagerSlaveTimeoutMs 120s
spark.network.timeout spark.storage.blockManagerSlaveTimeoutMs
spark.network.timeoutInterval spark.storage.blockManagerTimeoutIntervalMs
Inside Creating SparkContext
Note The example uses Spark in local mode, but the initialization with the other cluster modes would follow similar steps.
Note
SparkContext.markPartiallyConstructed(this, allowMultipleContexts)
// the SparkContext code goes here
SparkContext.setActiveContext(this, allowMultipleContexts)
The very first information printed out is the version of Spark as an INFO message:
Tip You can use version method to learn about the current Spark version or org.apache.spark.SPARK_VERSION value.
Detected yarn cluster mode, but isn't running on a cluster. Deployment to YARN is not
supported directly by SparkContext. Please use spark-submit.
Caution FIXME How to "trigger" the exception? What are the steps?
The driver’s host and port are set if missing. spark.driver.host becomes the value of
Utils.localHostName (or an exception is thrown) while spark.driver.port is set to 0 .
It sets the jars and files based on spark.jars and spark.files , respectively. These are
files that are required for proper task execution on executors.
If event logging is enabled, i.e. spark.eventLog.enabled flag is true , the internal field
_eventLogDir is set to the value of spark.eventLog.dir setting or the default value
/tmp/spark-events .
Also, if spark.eventLog.compress is enabled (it is not by default), the short name of the
CompressionCodec is assigned to _eventLogCodec . The config key is
spark.io.compression.codec (default: lz4 ).
Creating LiveListenerBus
SparkContext creates a LiveListenerBus.
SparkContext creates an AppStatusStore (for a live Spark application) and requests LiveListenerBus to add the AppStatusListener to the status queue.
Creating SparkEnv
SparkContext creates a SparkEnv and requests SparkEnv to use the instance as the
default SparkEnv.
MetadataCleaner is created.
Creating SparkStatusTracker
SparkContext creates a SparkStatusTracker (with itself and the AppStatusStore).
Creating ConsoleProgressBar
Creating SparkUI
SparkContext creates a SparkUI when spark.ui.enabled configuration property is enabled
AppStatusStore
Name of the Spark application that is exactly the value of spark.app.name configuration
property
If there are jars given through the SparkContext constructor, they are added using addJar .
At this point in time, the amount of memory to allocate to each executor (as
_executorMemory ) is calculated. It is the value of spark.executor.memory setting, or
CoarseMesosSchedulerBackend.
Caution FIXME
What’s _executorMemory ?
What’s the unit of the value of _executorMemory exactly?
What are "SPARK_TESTING", "spark.testing"? How do they contribute to executorEnvs ?
What’s executorEnvs ?
Starting TaskScheduler
SparkContext starts TaskScheduler .
Initializing BlockManager
Starting MetricsSystem
SparkContext requests the MetricsSystem to start.
Caution FIXME It’d be quite useful to have all the properties with their default values in sc.getConf.toDebugString , so when a configuration is not included but does change Spark runtime configuration, it should be added to _conf .
LiveListenerBus with information about Task Scheduler’s scheduling mode, added jar and
file paths, and other environmental details. They are displayed in web UI’s Environment tab.
1. DAGScheduler
2. BlockManager
createTaskScheduler(
sc: SparkContext,
master: String,
deployMode: String): (SchedulerBackend, TaskScheduler)
Caution FIXME
If there are two or more external cluster managers that could handle url , a
SparkException is thrown:
setupAndStartListenerBus
setupAndStartListenerBus(): Unit
It expects that the class name represents a SparkListenerInterface listener with one of the following constructors (in this order):
a single-argument constructor that accepts a SparkConf
a zero-argument constructor
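A sketch of such a listener with a zero-argument constructor (the class name is made up); it could then be registered with --conf spark.extraListeners=JobEndLogger (using the fully-qualified name if the class lives in a package):

import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd}

// Illustrative listener; register it via the spark.extraListeners setting.
class JobEndLogger extends SparkListener {
  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit =
    println(s"Job ${jobEnd.jobId} finished with ${jobEnd.jobResult}")
}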
When no single- SparkConf or zero-argument constructor could be found for a class name in
spark.extraListeners setting, a SparkException is thrown with the message:
createSparkEnv(
conf: SparkConf,
isLocal: Boolean,
listenerBus: LiveListenerBus): SparkEnv
createSparkEnv simply delegates the call to SparkEnv to create a SparkEnv for the driver.
It calculates the number of cores to 1 for local master URL, the number of processors
available for JVM for * or the exact number in the master URL, or 0 for the cluster
master URLs.
Utils.getCurrentUserName Method
getCurrentUserName(): String
getCurrentUserName computes the user name who has started the SparkContext instance.
Internally, it reads SPARK_USER environment variable and, if not set, reverts to Hadoop
Security API’s UserGroupInformation.getCurrentUser().getShortUserName() .
Note It is another place where Spark relies on Hadoop API for its operation.
Utils.localHostName Method
localHostName computes the local host name.
stopped Flag
ConsoleProgressBar
ConsoleProgressBar shows the progress of active stages to standard error, i.e. stderr . It
uses SparkStatusTracker to poll the status of stages periodically and print out active stages
with more than one task. It keeps overwriting itself to show, in a single line, the progress of up to the first 3 concurrent stages at a time.
The progress includes the stage id, the number of completed, active, and total tasks.
Tip ConsoleProgressBar may be useful when you ssh to workers and want to see the progress of active stages.
import org.apache.log4j._
Logger.getLogger("org.apache.spark.SparkContext").setLevel(Level.WARN)
The progress bar prints out the status after a stage has run for at least 500 milliseconds, every spark.ui.consoleProgress.update.interval milliseconds.
import org.apache.log4j._
scala> Logger.getLogger("org.apache.spark.SparkContext").setLevel(Level.WARN) (3)
4. Run a job with 4 tasks with 500ms initial sleep and 200ms sleep chunks to see the
progress bar.
You may want to use the following example to see the progress bar in full glory - all 3
concurrent stages in console (borrowed from a comment to [SPARK-4017] show progress
bar in console #3029):
> ./bin/spark-shell
scala> val a = sc.makeRDD(1 to 1000, 10000).map(x => (x, x)).reduceByKey(_ + _)
scala> val b = sc.makeRDD(1 to 1000, 10000).map(x => (x, x)).reduceByKey(_ + _)
scala> a.union(b).count()
ConsoleProgressBar starts the internal timer refresh progress that does refresh and shows
progress.
finishAll Method
Caution FIXME
stop Method
stop(): Unit
refresh(): Unit
refresh …FIXME
SparkStatusTracker
SparkStatusTracker is…FIXME
SparkContext
AppStatusStore
Local Properties — Creating Logical Job Groups
You can set a local property that will affect Spark jobs submitted from a thread, such as the
Spark fair scheduler pool. You can use your own custom properties. The properties are
propagated through to worker tasks and can be accessed there via
TaskContext.getLocalProperty.
Note Local properties are used to group jobs into pools in FAIR job scheduler by spark.scheduler.pool per-thread property and in SQLExecution.withNewExecutionId Helper Methods.
A common use case for the local property concept is to set a local property in a thread, say
spark.scheduler.pool, after which all jobs submitted within the thread will be grouped, say
into a pool by FAIR job scheduler.
sc.setLocalProperty("spark.scheduler.pool", "myPool")
// these two jobs (one per action) will run in the myPool pool
rdd.count
rdd.collect
sc.setLocalProperty("spark.scheduler.pool", null)
localProperties: InheritableThreadLocal[Properties]
Tip When value is null the key property is removed from localProperties.
getLocalProperty gets a local property by key in this thread. It returns null if key is
missing.
getLocalProperties: Properties
setLocalProperties Method
RDD — Resilient Distributed Dataset
A RDD is a resilient and distributed collection of records spread over one or many partitions.
Using RDDs, Spark hides data partitioning and distribution, which in turn allowed the designers to build a parallel computational framework with a higher-level programming interface (API) for four mainstream programming languages.
Resilient, i.e. fault-tolerant with the help of RDD lineage graph and so able to
recompute missing or damaged partitions due to node failures.
Dataset is a collection of partitioned data with primitive values or values of values, e.g.
tuples or other objects (that represent records of the data you work with).
Figure 1. RDDs
From the original paper about RDD - Resilient Distributed Datasets: A Fault-Tolerant
Abstraction for In-Memory Cluster Computing:
Resilient Distributed Datasets (RDDs) are a distributed memory abstraction that lets
programmers perform in-memory computations on large clusters in a fault-tolerant
manner.
Beside the above traits (that are directly embedded in the name of the data abstraction -
RDD) it has the following additional traits:
In-Memory, i.e. data inside RDD is stored in memory as much (size) and long (time) as
possible.
Immutable or Read-Only, i.e. it does not change once created and can only be
transformed using transformations to new RDDs.
Lazy evaluated, i.e. the data inside RDD is not available or transformed until an action
is executed that triggers the execution.
Cacheable, i.e. you can hold all the data in a persistent "storage" like memory (default
and the most preferred) or disk (the least preferred due to access speed).
Partitioned — records are partitioned (split into logical partitions) and distributed across
nodes in a cluster.
Computing partitions in a RDD is a distributed process by design and to achieve even data
distribution as well as leverage data locality (in distributed systems like HDFS or
Cassandra in which data is partitioned by default), they are partitioned to a fixed number of
partitions - logical chunks (parts) of data. The logical division is for processing only; internally the data is not divided whatsoever. Each partition comprises records.
Figure 2. RDDs
Partitions are the units of parallelism. You can control the number of partitions of a RDD
using repartition or coalesce transformations. Spark tries to be as close to data as possible
without wasting time to send data across network by means of RDD shuffling, and creates
as many partitions as required to follow the storage layout and thus optimize data access. It
leads to a one-to-one mapping between (physical) data in distributed data storage, e.g.
HDFS or Cassandra, and partitions.
The motivation to create RDD was (after the authors) two types of applications that current computing frameworks handle inefficiently:
iterative algorithms (e.g. machine learning and graph computations)
interactive data mining tools (ad-hoc queries on the same dataset)
Technically, RDDs follow the contract defined by the five main intrinsic properties:
A list of parent RDDs, i.e. the dependencies on other RDDs
An array of partitions that a dataset is divided to
A compute function to do a computation on partitions
An optional Partitioner that defines how keys are hashed, and the pairs partitioned (for
key-value RDDs)
Optional preferred locations (aka locality info), i.e. hosts for a partition where the
records live or are the closest to read from.
This RDD abstraction supports an expressive set of operations without having to modify
scheduler for each one.
An RDD is a named (by name ) and uniquely identified (by id ) entity in a SparkContext
(available as context property).
RDDs live in one and only one SparkContext that creates a logical boundary.
An RDD can optionally have a friendly name accessible using name that can be changed using an assignment ( name = ):
scala> ns.id
res0: Int = 2

scala> ns.name
res1: String = null

scala> ns.name = "Friendly name"

scala> ns.name
res2: String = Friendly name

scala> ns.toDebugString
res3: String = (8) Friendly name ParallelCollectionRDD[2] at parallelize at <console>:24 []
RDDs are a container of instructions on how to materialize big (arrays of) distributed data,
and how to split it into partitions so Spark (using executors) can hold some of them.
In general data distribution can help executing processing in parallel so a task processes a
chunk of data that it could eventually keep in memory.
Spark does jobs in parallel, and RDDs are split into partitions to be processed and written in
parallel. Inside a partition, data is processed sequentially.
Saving partitions results in part-files instead of one single file (unless there is a single
partition).
Caution FIXME
isCheckpointedAndMaterialized Method
Caution FIXME
getNarrowAncestors Method
Caution FIXME
toLocalIterator Method
Caution FIXME
cache Method
Caution FIXME
persist Methods
persist(): this.type
persist(newLevel: StorageLevel): this.type
Caution FIXME
Note persist is used when RDD is requested to persist itself and marks itself for local checkpointing.
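A minimal sketch of persisting an RDD with an explicit storage level (names and values are illustrative):

import org.apache.spark.storage.StorageLevel

val nums = sc.parallelize(1 to 1000)
nums.persist(StorageLevel.MEMORY_AND_DISK)  // mark the RDD to be cached in memory, spilling to disk
nums.count                                  // the first action materializes and caches the partitions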
unpersist Method
Caution FIXME
localCheckpoint Method
localCheckpoint(): this.type
RDD Contract
getPartitions
Used exclusively when RDD is requested for its partitions
(called only once as the value is cached).
getDependencies
Used when RDD is requested for its dependencies
(called only once as the value is cached).
Types of RDDs
ParallelCollectionRDD
CoGroupedRDD
HadoopRDD is an RDD that provides core functionality for reading data stored in HDFS
using the older MapReduce API. The most notable use case is the return RDD of
SparkContext.textFile .
SequenceFile .
Appropriate operations of a given RDD type are automatically available on a RDD of the
right type, e.g. RDD[(Int, Int)] , through implicit conversion in Scala.
Transformations
A transformation is a lazy operation on a RDD that returns another RDD, like map ,
flatMap , filter , reduceByKey , join , cogroup , etc.
Actions
An action is an operation that triggers execution of RDD transformations and returns a value
(to a Spark driver - the user program).
Creating RDDs
SparkContext.parallelize
One way to create a RDD is with SparkContext.parallelize method. It accepts a collection
of elements as shown below ( sc is a SparkContext instance):
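A minimal sketch (the collection here is arbitrary):

val rdd = sc.parallelize(1 to 1000)
rdd.count  // 1000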
Given the reason to use Spark to process more data than your own laptop could handle,
SparkContext.parallelize is mainly used to learn Spark in the Spark shell.
SparkContext.makeRDD
Caution FIXME What’s the use case for makeRDD ?
SparkContext.textFile
One of the easiest ways to create an RDD is to use SparkContext.textFile to read files.
You can use the local README.md file (and then flatMap over the lines inside to have an
RDD of words):
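A sketch (splitting on whitespace is just one reasonable choice):

val words = sc.textFile("README.md")
  .flatMap(_.split("\\s+"))
  .cache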
Note You cache it so the computation is not performed every time you work with words .
Transformations
RDD transformations by definition transform an RDD into another RDD and hence are the
way to create new ones.
RDDs in Web UI
It is quite informative to look at RDDs in the Web UI that is at http://localhost:4040 for Spark
shell.
Execute the following Spark application (type all the lines in spark-shell ):
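A minimal snippet consistent with the notes below (the ints name matches the repartition call further down; the values and the friendly name are illustrative):

val ints = sc.parallelize(1 to 100)  // (1) creates the RDD
ints.setName("Hundred ints")         // (2) gives it a friendly name shown in the web UI
ints.cache                           // (3) caches the RDD so it appears in the Storage tab
ints.count                           // (4) an action that triggers a job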
3. Caches the RDD for performance reasons that also makes it visible in Storage tab in
the web UI
With the above executed, you should see the following in the Web UI:
ints.repartition(2).count
partitions: Array[Partition]
partitions requests CheckpointRDD for partitions (if the RDD is checkpointed) or finds
them itself and cache (in partitions_ internal registry that is used next time).
Note Partitions have the property that their internal index should be equal to their position in the owning RDD.
Note The other usages are to define the locations by custom RDDs, e.g. BlockRDD, CoalescedRDD , HadoopRDD, NewHadoopRDD, ParallelCollectionRDD, ReliableCheckpointRDD , ShuffledRDD and Spark SQL’s KafkaSourceRDD , ShuffledRowRDD , FileScanRDD , StateStoreRDD .
getNumPartitions: Int
scala> sc.textFile("README.md").getNumPartitions
res0: Int = 2
computes it yourself.
iterator gets or computes the split partition when cached or computes it (possibly by
Note iterator is a final method that, despite being public, is considered private and only available for implementing custom RDDs.
dependencies: Seq[Dependency[_]]
Note dependencies is a final method that no class in Spark can ever override.
Internally, dependencies checks out whether the RDD is checkpointed and acts accordingly.
RDD
RDD is a description of a distributed computation over dataset of records of type T .
RDD is identified by a unique identifier (aka RDD ID) that is unique among all RDDs in a
SparkContext.
id: Int
storageLevel: StorageLevel
getOrCompute creates a RDDBlockId for the RDD id and the partition index.
getOrCompute requests the BlockManager to getOrElseUpdate for the block ID (with the
getOrCompute branches off per the response from the BlockManager and whether the
internal readCachedBlock flag is now on or still off. In either case, getOrCompute creates an
InterruptibleIterator.
The abstract compute method computes the input split partition in the TaskContext to
produce a collection of values (of type T ).
compute is implemented by any type of RDD in Spark and is called every time the records
are requested unless RDD is cached or checkpointed (and the records can be read from an
external storage, but this time closer to the compute node).
When an RDD is cached, for specified storage levels (i.e. all but NONE ) CacheManager is
requested to get or compute partitions.
RDD Lineage — Logical Execution Plan
Note The execution DAG or physical execution plan is the DAG of stages.
Note The following diagram uses cartesian or zip for learning purposes only. You may use other operators to build a RDD graph.
A RDD lineage graph is hence a graph of what transformations need to be executed after an
action has been called.
You can learn about a RDD lineage graph using RDD.toDebugString method.
toDebugString: String
You can learn about a RDD lineage graph using toDebugString method.
scala> wordCount.toDebugString
res13: String =
(2) ShuffledRDD[21] at reduceByKey at <console>:24 []
+-(2) MapPartitionsRDD[20] at map at <console>:24 []
| MapPartitionsRDD[19] at flatMap at <console>:24 []
| README.md MapPartitionsRDD[18] at textFile at <console>:24 []
| README.md HadoopRDD[17] at textFile at <console>:24 []
The numbers in round brackets show the level of parallelism at each stage, e.g. (2) in the
above output.
scala> wordCount.getNumPartitions
res14: Int = 2
Settings
TaskLocation
TaskLocation is a location where a task should run.
ExecutorCacheTaskLocation).
With ExecutorCacheTaskLocation the Spark scheduler prefers to launch the task on the given
executor, but the next level of preference is any executor on the same host if this is not
possible.
ExecutorCacheTaskLocation
A location that includes both a host and an executor id
on that host.
ParallelCollectionRDD
ParallelCollectionRDD is an RDD of a collection of elements with numSlices partitions and
optional locationPrefs .
methods.
It uses ParallelCollectionPartition .
MapPartitionsRDD
MapPartitionsRDD is an RDD that applies the provided function f to every partition of the
parent RDD.
map
flatMap
filter
glom
mapPartitions
mapPartitionsWithIndex
PairRDDFunctions.mapValues
PairRDDFunctions.flatMapValues
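A quick way to see a MapPartitionsRDD in a lineage (the RDD numbers and the level of parallelism are session-specific):

val doubled = sc.parallelize(1 to 4).map(_ * 2)
println(doubled.toDebugString)
// prints something like:
// (8) MapPartitionsRDD[1] at map at <console>:24 []
//  |  ParallelCollectionRDD[0] at parallelize at <console>:24 []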
OrderedRDDFunctions
repartitionAndSortWithinPartitions Operator
Caution FIXME
sortByKey Operator
Caution FIXME
CoGroupedRDD
A RDD that cogroups its pair RDD parents. For each key k in parent RDDs, the resulting
RDD contains a tuple with the list of values for that key.
getDependencies Method
Caution FIXME
compute …FIXME
SubtractedRDD
Caution FIXME
compute …FIXME
getDependencies Method
Caution FIXME
HadoopRDD
HadoopRDD is an RDD that provides core functionality for reading data stored in HDFS, a
local file system (available on all nodes), or any Hadoop-supported file system URI using the
older MapReduce API (org.apache.hadoop.mapred).
hadoopFile
sequenceFile
When an HadoopRDD is computed, i.e. an action is called, you should see the INFO
message Input split: in the logs.
scala> sc.textFile("README.md").count
...
15/10/10 18:03:21 INFO HadoopRDD: Input split: file:/Users/jacek/dev/oss/spark/README.
md:0+1784
15/10/10 18:03:21 INFO HadoopRDD: Input split: file:/Users/jacek/dev/oss/spark/README.
md:1784+1784
...
mapred.task.is.map as true
mapred.task.partition - split id
mapred.job.id
FIXME
getPreferredLocations Method
Caution FIXME
getPartitions Method
The number of partition for HadoopRDD, i.e. the return value of getPartitions , is
calculated using InputFormat.getSplits(jobConf, minPartitions) where minPartitions is
only a hint of how many partitions one may want at minimum. As a hint it does not mean the
number of partitions will be exactly the number given.
FileInputFormat is the base class for all file-based InputFormats. This provides a
generic implementation of getSplits(JobConf, int). Subclasses of FileInputFormat can
also override the isSplitable(FileSystem, Path) method to ensure input-files are not
split-up and are processed as a whole by Mappers.
NewHadoopRDD
NewHadoopRDD is an RDD of K keys and V values.
SparkContext.newAPIHadoopFile
SparkContext.newAPIHadoopRDD
(indirectly) SparkContext.binaryFiles
(indirectly) SparkContext.wholeTextFiles
getPreferredLocations Method
Caution FIXME
SparkContext
HDFS' InputFormat[K, V]
K class name
V class name
ShuffledRDD
ShuffledRDD is an RDD of key-value pairs that represents the shuffle step in a RDD lineage. It is created by RDD operators that require a shuffle.
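For example (the number of resulting partitions below depends on this shell session's default parallelism):

val rdd = sc.parallelize(0 to 9)
  .map(n => (n % 2, n))
  .reduceByKey(_ + _)  // the shuffle step - the result is a ShuffledRDD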
scala> rdd.getNumPartitions
res0: Int = 8
mapSideCombine: Boolean
mapSideCombine internal flag is used to select the Serializer (for shuffling) when
If enabled (i.e. true ), mapSideCombine directs to find the Serializer for the types K and
C . Otherwise, getDependencies finds the Serializer for the types K and V .
Internally, compute makes sure that the input split is a ShuffleDependency. It then
requests ShuffleManager for a ShuffleReader to read key-value pairs (as Iterator[(K,
C)] ) for the split .
ShuffledRDDPartition
ShuffledRDDPartition gets an index when it is created (that in turn is the index of
Operators
See https://issues.apache.org/jira/browse/SPARK-5063
Transformations
Transformations are lazy operations on a RDD that create one or many new RDDs, e.g.
map , filter , reduceByKey , join , cogroup , randomSplit .
In other words, transformations are functions that take a RDD as the input and produce one
or many RDDs as the output. They do not change the input RDD (since RDDs are
immutable and hence cannot be modified), but always produce one or more new RDDs by
applying the computations they represent.
By applying transformations you incrementally build a RDD lineage with all the parent RDDs
of the final RDD(s).
Transformations are lazy, i.e. are not executed immediately. Only after calling an action are
transformations executed.
After executing a transformation, the result RDD(s) will always be different from their parents
and can be smaller (e.g. filter , count , distinct , sample ), bigger (e.g. flatMap ,
union , cartesian ) or the same size (e.g. map ).
Caution There are transformations that may trigger jobs, e.g. sortBy , zipWithIndex, etc.
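A small demonstration of the laziness (names and the file are illustrative): no job runs until the action at the end.

val lines = sc.textFile("README.md")  // no job yet - only a lineage is recorded
val lengths = lines.map(_.length)     // still no job
lengths.reduce(_ + _)                 // reduce is an action, so a job runs now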
narrow transformations
wide transformations
Narrow Transformations
Narrow transformations are the result of map , filter and similar operations where the data comes from a single partition only, i.e. it is self-sustained.
An output RDD has partitions with records that originate from a single partition in the parent RDD. Only a limited subset of partitions is used to calculate the result.
Wide Transformations
Wide transformations are the result of groupByKey and reduceByKey . The data required to
compute the records in a single partition may reside in many partitions of the parent RDD.
All of the tuples with the same key must end up in the same partition, processed by the
same task. To satisfy these operations, Spark must execute RDD shuffle, which transfers
data across cluster and results in a new stage with a new set of partitions.
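A minimal sketch contrasting a narrow mapValues with a wide reduceByKey (the ShuffledRDD in the debug string marks the shuffle boundary):

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
val mapped = pairs.mapValues(_ + 1)       // narrow: no shuffle
val reduced = pairs.reduceByKey(_ + _)    // wide: records with the same key are shuffled together
println(reduced.toDebugString)            // shows a ShuffledRDD and its parent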
map
Caution FIXME
flatMap
Caution FIXME
filter
Caution FIXME
randomSplit
Caution FIXME
mapPartitions
Caution FIXME
Using an external key-value store (like HBase, Redis, Cassandra) and performing
lookups/updates inside of your mappers (creating a connection within a mapPartitions code
block to avoid the connection setup/teardown overhead) might be a better solution.
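A minimal sketch of the per-partition connection idiom. ExternalStore and its lookup method are hypothetical placeholders for your key-value store client:

// Hypothetical client for an external key-value store (HBase, Redis, Cassandra, ...)
class ExternalStore {
  def lookup(key: String): Option[String] = None  // stub
}

val keys = sc.parallelize(Seq("k1", "k2", "k3"))
val enriched = keys.mapPartitions { it =>
  val store = new ExternalStore()    // one connection per partition, not one per record
  it.map(k => (k, store.lookup(k)))  // reuse the connection for every record in the partition
}
enriched.collect()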
zipWithIndex
Caution If the number of partitions of the source RDD is greater than 1, zipWithIndex will
submit an additional job to calculate start indices.

scala> onePartition.partitions.length
res0: Int = 1

// no job submitted
onePartition.zipWithIndex

scala> eightPartitions.partitions.length
res1: Int = 8

// submits a job
eightPartitions.zipWithIndex
PairRDDFunctions
Tip Read up the scaladoc of PairRDDFunctions.
PairRDDFunctions are available in RDDs of key-value pairs via Scala’s implicit conversion.
Tip Partitioning is an advanced feature that is directly linked to (or inferred by) use of
PairRDDFunctions . Read up about it in Partitions and Partitioning.
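A minimal sketch showing that the pair-specific methods become available once an RDD holds tuples (via the rddToPairRDDFunctions implicit conversion):

val words = sc.parallelize(Seq("ant", "bee", "ant"))
// RDD[(String, Int)] picks up PairRDDFunctions through the implicit conversion
val counts = words.map(w => (w, 1)).reduceByKey(_ + _)
counts.collect()   // e.g. Array((ant,2), (bee,1))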
countApproxDistinctByKey Transformation
Caution FIXME
foldByKey Transformation
Caution FIXME
aggregateByKey Transformation
Caution FIXME
combineByKey Transformation
Caution FIXME
partitionBy Operator
Caution FIXME
You may want to look at the number of partitions from another angle.
It may often not be important to have a given number of partitions upfront (at RDD creation
time upon loading data from data sources), so only "regrouping" the data by key after it is an
RDD might be…the key (pun not intended).
You can use groupByKey or another PairRDDFunctions method to have a key in one
processing flow.
You could use partitionBy , which is available for RDDs of tuples, i.e. PairRDD :
rdd.keyBy(_.kind)
.partitionBy(new HashPartitioner(PARTITIONS))
.foreachPartition(...)
Think of situations where kind has low cardinality or a highly skewed distribution; using
that partitioning technique might not be an optimal solution. In such cases you could do the following instead:
rdd.keyBy(_.kind).reduceByKey(....)
mapValues, flatMapValues
Caution FIXME
combineByKeyWithClassTag Transformations
combineByKeyWithClassTag[C](
createCombiner: V => C,
mergeValue: (C, V) => C,
mergeCombiners: (C, C) => C)(implicit ct: ClassTag[C]): RDD[(K, C)] (1)
combineByKeyWithClassTag[C](
createCombiner: V => C,
mergeValue: (C, V) => C,
mergeCombiners: (C, C) => C,
numPartitions: Int)(implicit ct: ClassTag[C]): RDD[(K, C)] (2)
combineByKeyWithClassTag[C](
createCombiner: V => C,
mergeValue: (C, V) => C,
mergeCombiners: (C, C) => C,
partitioner: Partitioner,
mapSideCombine: Boolean = true,
serializer: Serializer = null)(implicit ct: ClassTag[C]): RDD[(K, C)]
1. FIXME
2. FIXME too
The first two variants use the mapSideCombine flag and Serializer with their default values.
They create a ShuffledRDD with the value of mapSideCombine when the input
partitioner is different from the current one in an RDD.
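A minimal sketch of variant (1) computing per-key averages, with a (sum, count) pair as the combiner type C (the input data is made up for illustration):

val scores = sc.parallelize(Seq(("a", 1.0), ("a", 3.0), ("b", 4.0)))

val sumCount = scores.combineByKeyWithClassTag(
  (v: Double) => (v, 1),                                               // createCombiner
  (acc: (Double, Int), v: Double) => (acc._1 + v, acc._2 + 1),         // mergeValue
  (a: (Double, Int), b: (Double, Int)) => (a._1 + b._1, a._2 + b._2))  // mergeCombiners

val avgByKey = sumCount.mapValues { case (sum, n) => sum / n }
avgByKey.collect()   // e.g. Array((a,2.0), (b,4.0))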
Actions
Actions are RDD operations that produce non-RDD values. They materialize a value in a
Spark program. In other words, a RDD operation that returns a value of any type but
RDD[T] is an action.
They trigger execution of RDD transformations to return values. Simply put, an action
evaluates the RDD lineage graph.
You can think of actions as a valve: until an action is fired, the data to be processed is not
even in the pipes, i.e. the transformations. Only actions can materialize the entire processing
pipeline with real data.
Actions are one of two ways to send data from executors to the driver (the other being
accumulators).
Actions in org.apache.spark.rdd.RDD:
aggregate
collect
count
countApprox*
countByValue*
first
fold
foreach
foreachPartition
max
min
reduce
take
takeOrdered
takeSample
toLocalIterator
top
treeAggregate
treeReduce
Tip You should cache the RDDs you work with when you want to execute two or more
actions on them for better performance. Refer to RDD Caching and Persistence.
AsyncRDDActions
AsyncRDDActions class offers asynchronous actions that you can use on RDDs (thanks to the
implicit conversion rddToAsyncRDDActions ):
countAsync
collectAsync
takeAsync
foreachAsync
foreachPartitionAsync
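A minimal sketch of using one of the asynchronous actions above: countAsync returns immediately with a FutureAction that you can register a callback on.

import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}

val ints = sc.parallelize(1 to 100)
val futureCount = ints.countAsync()   // FutureAction[Long], returns immediately
futureCount.onComplete {
  case Success(n)  => println(s"Counted $n elements")
  case Failure(ex) => println(s"Job failed: $ex")
}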
FutureActions
Caution FIXME
Caching and Persistence
RDDs can be cached using cache operation. They can also be persisted using persist
operation.
The difference between cache and persist operations is purely syntactic. cache is a
synonym of persist or persist(MEMORY_ONLY) , i.e. cache is merely persist with the
default storage level MEMORY_ONLY .
Note Due to the very small and purely syntactic difference between caching and
persistence of RDDs, the two terms are often used interchangeably and I will
follow the "pattern" here.
RDDs can also be unpersisted to remove RDD from a permanent storage like memory
and/or disk.
persist(): this.type
persist(newLevel: StorageLevel): this.type
You can only change the storage level once or persist reports an
UnsupportedOperationException :
Cannot change storage level of an RDD after it was already assigned a level
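A minimal spark-shell sketch of the restriction (the RDD and storage levels are arbitrary):

import org.apache.spark.storage.StorageLevel

val lines = sc.parallelize(Seq("a", "b"))
lines.persist(StorageLevel.MEMORY_ONLY)        // first assignment is fine
lines.persist(StorageLevel.MEMORY_ONLY)        // the same level again is accepted
lines.persist(StorageLevel.MEMORY_AND_DISK)    // throws UnsupportedOperationException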
Note You can only "pretend" to change the storage level of an RDD with an already-assigned
storage level when the new storage level is the same as the one currently assigned.
If the RDD is marked as persistent the first time, the RDD is registered to ContextCleaner (if
available) and SparkContext .
The internal storageLevel attribute is set to the input newLevel storage level.
When called, unpersist prints the following INFO message to the logs:
It then calls SparkContext.unpersistRDD(id, blocking) and sets NONE storage level as the
current storage level.
StorageLevel
StorageLevel describes how an RDD is persisted (and addresses the following concerns):
There are the following StorageLevel (number _2 in the name denotes 2 replicas):
NONE (default)
DISK_ONLY
DISK_ONLY_2
MEMORY_ONLY
MEMORY_ONLY_2
MEMORY_ONLY_SER
MEMORY_ONLY_SER_2
MEMORY_AND_DISK
MEMORY_AND_DISK_2
MEMORY_AND_DISK_SER
MEMORY_AND_DISK_SER_2
OFF_HEAP
You can check out the storage level using getStorageLevel() operation.
scala> lines.getStorageLevel
res0: org.apache.spark.storage.StorageLevel = StorageLevel(disk=false, memory=false, offheap=false, deserialized=false, replication=1)
StorageLevel can indicate to use memory for data storage using useMemory flag.
useMemory: Boolean
StorageLevel can indicate to use disk for data storage using useDisk flag.
useDisk: Boolean
StorageLevel can indicate to store data in deserialized format using deserialized flag.
deserialized: Boolean
StorageLevel can indicate to replicate the data to other block managers using replication
property.
replication: Int
Partitions and Partitioning
FIXME
Caution
1. How does the number of partitions map to the number of tasks? How to verify it?
2. How does the mapping between partitions and tasks correspond to data locality if any?
Spark manages data using partitions, which helps parallelize distributed data processing with
minimal network traffic for sending data between executors.
By default, Spark tries to read data into an RDD from the nodes that are close to it. Since
Spark usually accesses distributed partitioned data, to optimize transformation operations it
creates partitions to hold the data chunks.
There is a one-to-one correspondence between how data is laid out in data storage like
HDFS or Cassandra and how Spark partitions it (the data is partitioned for the same reasons).
Features:
size
number
partitioning scheme
node distribution
repartitioning
Read the following documentations to learn what experts say on the topic:
By default, a partition is created for each HDFS partition, which by default is 64MB (from
Spark’s Programming Guide).
RDDs get partitioned automatically without programmer intervention. However, there are
times when you’d like to adjust the size and number of partitions or the partitioning scheme
according to the needs of your application.
You use def getPartitions: Array[Partition] method on a RDD to know the set of
partitions in this RDD.
When a stage executes, you can see the number of partitions for a given stage in the
Spark UI.
When you execute the Spark job, i.e. sc.parallelize(1 to 100).count , you should see the
following in the Spark shell application UI: the number of tasks matches the number of
available CPU cores, e.g. 8 on an 8-core machine:
$ sysctl -n hw.ncpu
8
You can request for the minimum number of partitions, using the second input parameter to
many transformations.
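The definition of ints did not survive extraction; a minimal sketch, assuming a parallelized range with 4 partitions requested:

val ints = sc.parallelize(1 to 100, 4)   // the second parameter is the number of partitions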
scala> ints.partitions.size
res2: Int = 4
Increasing the number of partitions will make each partition have less data (or none at all!)
Spark can only run 1 concurrent task for every partition of an RDD, up to the number of
cores in your cluster. So if you have a cluster with 50 cores, you want your RDDs to at least
have 50 partitions (and probably 2-3x times that).
As far as choosing a "good" number of partitions, you generally want at least as many as the
number of executors for parallelism. You can get this computed value by calling
sc.defaultParallelism .
Also, the number of partitions determines how many files get generated by actions that save
RDDs to files.
The maximum size of a partition is ultimately limited by the available memory of an executor.
In the first RDD transformation, e.g. reading from a file using sc.textFile(path, partition) ,
the partition parameter will be applied to all further transformations and actions on this
RDD.
Partitions get redistributed among nodes whenever shuffle occurs. Repartitioning may
cause shuffle to occur in some situations, but it is not guaranteed to occur in all cases.
And it usually happens during the action stage.
When reading a file into an RDD, the number of partitions is by default the same as the
number of blocks as you see in HDFS, but if the lines in your file are too long (longer than
the block size), there will be fewer partitions.
The preferred way to set up the number of partitions for an RDD is to directly pass it as the
second input parameter in the call, e.g. rdd = sc.textFile("hdfs://…/file.txt", 400) , where
400 is the number of partitions. In this case, the partitioning into 400 splits is done by
Hadoop’s TextInputFormat , not Spark, and it works much faster. It also means that the code
spawns 400 concurrent tasks to try to load file.txt directly into 400 partitions.
When using textFile with compressed files ( file.txt.gz not file.txt or similar), Spark
disables splitting that makes for an RDD with only 1 partition (as reads against gzipped files
cannot be parallelized). In this case, to change the number of partitions you should do
repartitioning.
With the following computation you can see that repartition(5) causes 5 tasks to be
started using NODE_LOCAL data locality.
scala> lines.repartition(5).count
...
15/10/07 08:10:00 INFO DAGScheduler: Submitting 5 missing tasks from ResultStage 7 (Ma
pPartitionsRDD[19] at repartition at <console>:27)
15/10/07 08:10:00 INFO TaskSchedulerImpl: Adding task set 7.0 with 5 tasks
15/10/07 08:10:00 INFO TaskSetManager: Starting task 0.0 in stage 7.0 (TID 17, localho
st, partition 0,NODE_LOCAL, 2089 bytes)
15/10/07 08:10:00 INFO TaskSetManager: Starting task 1.0 in stage 7.0 (TID 18, localho
st, partition 1,NODE_LOCAL, 2089 bytes)
15/10/07 08:10:00 INFO TaskSetManager: Starting task 2.0 in stage 7.0 (TID 19, localho
st, partition 2,NODE_LOCAL, 2089 bytes)
15/10/07 08:10:00 INFO TaskSetManager: Starting task 3.0 in stage 7.0 (TID 20, localho
st, partition 3,NODE_LOCAL, 2089 bytes)
15/10/07 08:10:00 INFO TaskSetManager: Starting task 4.0 in stage 7.0 (TID 21, localho
st, partition 4,NODE_LOCAL, 2089 bytes)
...
You can see a change after executing repartition(1) causes 2 tasks to be started using
PROCESS_LOCAL data locality.
scala> lines.repartition(1).count
...
15/10/07 08:14:09 INFO DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 8
(MapPartitionsRDD[20] at repartition at <console>:27)
15/10/07 08:14:09 INFO TaskSchedulerImpl: Adding task set 8.0 with 2 tasks
15/10/07 08:14:09 INFO TaskSetManager: Starting task 0.0 in stage 8.0 (TID 22, localho
st, partition 0,PROCESS_LOCAL, 2058 bytes)
15/10/07 08:14:09 INFO TaskSetManager: Starting task 1.0 in stage 8.0 (TID 23, localho
st, partition 1,PROCESS_LOCAL, 2058 bytes)
...
Please note that Spark disables splitting for compressed files and creates RDDs with only 1
partition. In such cases, it’s helpful to use sc.textFile('demo.gz') and do repartitioning
using rdd.repartition(100) as follows:
rdd = sc.textFile('demo.gz')
rdd = rdd.repartition(100)
With the above lines, you end up with rdd having exactly 100 partitions of roughly equal size.
Tip If the partitioning scheme doesn’t work for you, you can write your own custom partitioner.
coalesce Transformation
The coalesce transformation is used to change the number of partitions. It can trigger RDD
shuffling depending on the shuffle flag (disabled by default, i.e. false ).
In the following sample, you parallelize a local 10-number sequence and coalesce it first
without and then with shuffling (note the shuffle parameter being false and true ,
respectively).
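The coalesce calls themselves did not survive extraction; a minimal sketch of what they might have looked like (res1 and res3 stand for the spark-shell results referenced below, and the target number of partitions is assumed):

val rdd = sc.parallelize(0 to 9)                             // 8 partitions on an 8-core local spark-shell
val res1 = rdd.coalesce(numPartitions = 8)                   // shuffle = false (default)
val res3 = rdd.coalesce(numPartitions = 8, shuffle = true)   // forces a shuffle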
scala> rdd.partitions.size
res0: Int = 8
scala> res1.toDebugString
res2: String =
(8) CoalescedRDD[1] at coalesce at <console>:27 []
| ParallelCollectionRDD[0] at parallelize at <console>:24 []
scala> res3.toDebugString
res4: String =
(8) MapPartitionsRDD[5] at coalesce at <console>:27 []
| CoalescedRDD[4] at coalesce at <console>:27 []
| ShuffledRDD[3] at coalesce at <console>:27 []
+-(8) MapPartitionsRDD[2] at coalesce at <console>:27 []
| ParallelCollectionRDD[0] at parallelize at <console>:24 []
1. shuffle is false by default and it’s explicitly used here for demo purposes. Note that the
number of partitions remains the same as the number of partitions in the source
RDD rdd .
Settings
Table 1. Spark Properties

spark.default.parallelism (default: varies per deployment environment)
Sets up the number of partitions to use for HashPartitioner. It corresponds to the default
parallelism of a scheduler backend.
More specifically, spark.default.parallelism corresponds to:
the number of threads for LocalSchedulerBackend,
the number of CPU cores in Spark on Mesos (and defaults to 8 ),
the maximum of totalCoreCount and 2 in CoarseGrainedSchedulerBackend.
Partition
Partition is a contract of a partition index of a RDD.
index: Int
Partitioner
Caution FIXME
Partitioner captures data distribution at the output. A scheduler can optimize future
operations based on it.
The contract of Partitioner ensures that records for a given key have to reside on a single
partition.
numPartitions Method
Caution FIXME
getPartition Method
Caution FIXME
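A minimal sketch of a custom Partitioner that honours the contract (numPartitions and getPartition); FirstLetterPartitioner and the sample keys are made up for illustration:

import org.apache.spark.Partitioner

// Routes keys starting with "a" to partition 0 and everything else to partition 1
class FirstLetterPartitioner extends Partitioner {
  override def numPartitions: Int = 2
  override def getPartition(key: Any): Int =
    if (key.toString.startsWith("a")) 0 else 1
}

val pairs = sc.parallelize(Seq(("apple", 1), ("banana", 2), ("avocado", 3)))
pairs.partitionBy(new FirstLetterPartitioner).glom().collect()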
HashPartitioner
HashPartitioner is a Partitioner that uses a configurable number of partitions and assigns
keys to partitions using their hash codes.
It is possible to re-shuffle data despite all the records for the key k being already on a
single Spark executor (i.e. BlockManager to be precise). When HashPartitioner 's result for
k1 is 3, all the records for the key k1 will go to partition 3.
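A minimal sketch of how HashPartitioner assigns a key to a partition (roughly a non-negative modulo of the key's hashCode; the key and the number of partitions are arbitrary):

import org.apache.spark.HashPartitioner

val partitioner = new HashPartitioner(partitions = 8)
val k1 = "some key"
println(partitioner.getPartition(k1))   // a value between 0 and 7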
Shuffling
RDD shuffling
Tip Read the official documentation about the topic Shuffle operations. It is still better
than this page.
Shuffling is a process of redistributing data across partitions (aka repartitioning) that may or
may not cause moving data across JVM processes or even over the wire (between
executors on separate machines).
Tip Avoid shuffling at all cost. Think about ways to leverage existing partitions.
Leverage partial aggregation to reduce data transfer.
By default, shuffling doesn’t change the number of partitions, only their content, i.e. which
records end up in which partition.
Example - join
PairRDD offers join transformation that (quoting the official documentation):
When called on datasets of type (K, V) and (K, W), returns a dataset of (K, (V, W)) pairs
with all pairs of elements for each key.
Let’s have a look at an example and see how it works under the covers:
scala> joined.toDebugString
res7: String =
(8) MapPartitionsRDD[10] at join at <console>:32 []
| MapPartitionsRDD[9] at join at <console>:32 []
| CoGroupedRDD[8] at join at <console>:32 []
+-(8) ParallelCollectionRDD[3] at parallelize at <console>:26 []
+-(8) ParallelCollectionRDD[4] at parallelize at <console>:26 []
It doesn’t look good when there is an "angle" between "nodes" in an operation graph (the +-
branch in toDebugString ). It appears right before the join operation, so a shuffle is expected.
join operation is one of the cogroup operations that uses defaultPartitioner , i.e. walks
through the RDD lineage graph (sorted by the number of partitions decreasing) and picks
the partitioner with positive number of output partitions. Otherwise, it checks
spark.default.parallelism property and if defined picks HashPartitioner with the default
parallelism of the SchedulerBackend.
Checkpointing
Checkpointing is a process of truncating RDD lineage graph and saving it to a reliable
distributed (HDFS) or local file system.
reliable - in Spark (core), RDD checkpointing that saves the actual intermediate RDD
data to a reliable distributed file system, e.g. HDFS.
local - in Spark Streaming or GraphX - RDD checkpointing that truncates RDD lineage
graph.
It’s up to a Spark application developer to decide when and how to checkpoint using
RDD.checkpoint() method.
Before checkpointing is used, a Spark developer has to set the checkpoint directory using
SparkContext.setCheckpointDir(directory: String) method.
Reliable Checkpointing
You call SparkContext.setCheckpointDir(directory: String) to set the checkpoint directory
- the directory where RDDs are checkpointed. The directory must be a HDFS path if
running on a cluster. The reason is that the driver may attempt to reconstruct the
checkpointed RDD from its own local file system, which is incorrect because the checkpoint
files are actually on the executor machines.
You mark an RDD for checkpointing by calling RDD.checkpoint() . The RDD will be saved to
a file inside the checkpoint directory and all references to its parent RDDs will be removed.
This function has to be called before any job has been executed on this RDD.
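A minimal spark-shell sketch (the checkpoint directory is hypothetical; use an HDFS path when running on a cluster):

sc.setCheckpointDir("/tmp/spark-checkpoints")   // must be set before checkpointing

val nums = sc.parallelize(1 to 100).map(_ * 2)
nums.checkpoint()               // marks the RDD; nothing is written yet
nums.count()                    // the first job materializes and saves the checkpoint
println(nums.isCheckpointed)    // true after the job completes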
When an action is called on a checkpointed RDD, the following INFO message is printed out
in the logs:
ReliableRDDCheckpointData
ReliableCheckpointRDD
After RDD.checkpoint the RDD has ReliableCheckpointRDD as the new parent with the same
number of partitions as the RDD.
localCheckpoint(): this.type
localCheckpoint marks a RDD for local checkpointing using Spark’s caching layer.
localCheckpoint is for users who wish to truncate RDD lineage graph while skipping the
expensive step of replicating the materialized data in a reliable distributed file system. This is
useful for RDDs with long lineages that need to be truncated periodically, e.g. GraphX.
LocalRDDCheckpointData
FIXME
LocalCheckpointRDD
FIXME
doCheckpoint Method
Caution FIXME
CheckpointRDD
Caution FIXME
RDD Dependencies
Dependency class is the base (abstract) class to model a dependency relationship between
two RDDs.
Dependency has a single method rdd to access the RDD that is behind a dependency.
Whenever you apply a transformation (e.g. map , flatMap ) to a RDD you build the so-
called RDD lineage graph. Dependency -ies represent the edges in a lineage graph.
ShuffleDependency
OneToOneDependency
PruneDependency
RangeDependency
// A demo RDD
scala> val myRdd = sc.parallelize(0 to 9).groupBy(_ % 2)
myRdd: org.apache.spark.rdd.RDD[(Int, Iterable[Int])] = ShuffledRDD[8] at groupBy at <conso
scala> myRdd.foreach(println)
(0,CompactBuffer(0, 2, 4, 6, 8))
(1,CompactBuffer(1, 3, 5, 7, 9))
scala> myRdd.dependencies
res5: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.ShuffleDependency@27ace61
You use toDebugString method to print out the RDD lineage in a user-friendly way.
scala> myRdd.toDebugString
res6: String =
(8) ShuffledRDD[8] at groupBy at <console>:24 []
+-(8) MapPartitionsRDD[7] at groupBy at <console>:24 []
| ParallelCollectionRDD[6] at parallelize at <console>:24 []
NarrowDependency — Narrow Dependencies
NarrowDependency is a base (abstract) Dependency with narrow (limited) number of
partitions of the parent RDD that are required to compute a partition of the child RDD.
PruneDependency
RangeDependency
NarrowDependency Contract
NarrowDependency contract assumes that extensions implement getParents method.
getParents returns the partitions of the parent RDD that the input partitionId depends
on.
OneToOneDependency
OneToOneDependency is a narrow dependency that represents a one-to-one dependency
between partitions of the parent and child RDDs.
scala> r3.dependencies
res32: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.OneToOneDependency@
7353a0fb)
scala> r3.toDebugString
res33: String =
(8) MapPartitionsRDD[19] at map at <console>:20 []
| ParallelCollectionRDD[13] at parallelize at <console>:18 []
PruneDependency
PruneDependency is a narrow dependency that represents a dependency between the
PartitionPruningRDD and its parent RDD.
RangeDependency
RangeDependency is a narrow dependency that represents a one-to-one dependency
between ranges of partitions in the parent and child RDDs. It is used by UnionRDD.
scala> unioned.dependencies
res19: Seq[org.apache.spark.Dependency[_]] = ArrayBuffer(org.apache.spark.RangeDepende
ncy@28408ad7, org.apache.spark.RangeDependency@6e1d2e9f)
scala> unioned.toDebugString
res18: String =
(16) UnionRDD[16] at union at <console>:22 []
| ParallelCollectionRDD[13] at parallelize at <console>:18 []
| ParallelCollectionRDD[14] at parallelize at <console>:18 []
ShuffleDependency — Shuffle Dependencies
ShuffleDependency is a RDD Dependency on the output of a ShuffleMapStage for a key-value
pair RDD. It is used by ShuffledRDD as well as CoGroupedRDD and SubtractedRDD, but for the
latter two only when the partitioners (of the RDD's and after transformations) are different.
A ShuffleDependency is created for a key-value pair RDD, i.e. RDD[Product2[K, V]] with K
and V being the types of keys and values, respectively.
scala> rdd.dependencies
res0: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.ShuffleDependency@45
4f6cc5)
keyOrdering Property
Caution FIXME
serializer Property
Caution FIXME
ShuffleDependency takes the following when created:
1. A key-value pair RDD,
2. Partitioner,
3. Serializer,
4. Optional key ordering,
5. Optional Aggregator,
6. mapSideCombine flag.
rdd Property
rdd returns a key-value pair RDD this ShuffleDependency was created for.
partitioner Property
partitioner property is a Partitioner that is used to partition the shuffle output.
5. FIXME
shuffleHandle Property
shuffleHandle: ShuffleHandle
combine).
aggregator Property
Usage
The places where ShuffleDependency is used:
The RDD operations that may or may not use the above RDDs and hence shuffling:
coalesce
repartition
cogroup
intersection
subtractByKey
subtract
sortByKey
sortBy
repartitionAndSortWithinPartitions
combineByKeyWithClassTag
combineByKey
aggregateByKey
foldByKey
reduceByKey
countApproxDistinctByKey
groupByKey
partitionBy
Note There may be other dependent methods that use the above.
Map/Reduce-side Aggregator
Aggregator is a set of functions used to aggregate distributed data sets:
createCombiner: V => C
mergeValue: (C, V) => C
mergeCombiners: (C, C) => C
Caution FIXME
combineValuesByKey Method
Caution FIXME
combineCombinersByKey Method
Caution FIXME
AppStatusStore
AppStatusStore is…FIXME
AppStatusStore is created when SparkContext is created (that triggers creating a live store,
i.e. an AppStatusStore for a live Spark application).
When created for a live Spark application, AppStatusStore creates an AppStatusListener that
is later used to get the active stages.
streamBlocksList Method
streamBlocksList(): Seq[StreamBlockData]
streamBlocksList …FIXME
activeStages Method
activeStages(): Seq[v1.StageData]
activeStages …FIXME
KVStore
close(): Unit
rddList Method
In the end, rddList takes RDDStorageInfos with at least one partition cached (when the
cachedOnly flag is on) or all RDDStorageInfos (when the cachedOnly flag is off).
Note cachedOnly flag is on by default and therefore rddList gives cached RDDs only.
AppStatusPlugin
AppStatusPlugin is the contract for…FIXME
package org.apache.spark.status

trait AppStatusPlugin {
  def setupListeners(
    conf: SparkConf,
    store: KVStore,
    addListenerFn: SparkListener => Unit,
    live: Boolean): Unit

  def setupUI(ui: SparkUI): Unit
}
loadPlugins Method
loadPlugins(): Iterable[AppStatusPlugin]
loadPlugins …FIXME
AppStatusListener
AppStatusListener is a SparkListener that AppStatusStore uses to…FIXME
SparkListenerApplicationEnd onApplicationEnd
SparkListenerBlockManagerAdded onBlockManagerAdded
SparkListenerBlockManagerRemoved onBlockManagerRemoved
SparkListenerBlockUpdated onBlockUpdated
SparkListenerEnvironmentUpdate onEnvironmentUpdate
SparkListenerEvent onOtherEvent
SparkListenerExecutorAdded onExecutorAdded
SparkListenerExecutorBlacklisted onExecutorBlacklisted
SparkListenerExecutorMetricsUpdate onExecutorMetricsUpdate
SparkListenerExecutorRemoved onExecutorRemoved
SparkListenerExecutorUnblacklisted onExecutorUnblacklisted
SparkListenerJobStart onJobStart
SparkListenerJobEnd onJobEnd
SparkListenerNodeBlacklisted onNodeBlacklisted
SparkListenerNodeUnblacklisted onNodeUnblacklisted
SparkListenerStageCompleted onStageCompleted
SparkListenerStageSubmitted onStageSubmitted
SparkListenerTaskEnd onTaskEnd
SparkListenerTaskGettingResult onTaskGettingResult
SparkListenerTaskStart onTaskStart
SparkListenerUnpersistRDD onUnpersistRDD
appSummary AppSummary
liveUpdatePeriodNs
coresPerTask
Default: 1
onStageSubmitted Method
onStageSubmitted …FIXME
update simply requests the LiveEntity to write (with the ElementTrackingStore as the KVStore and the current time).
flush(): Unit
flush …FIXME
maybeUpdate …FIXME
liveUpdate …FIXME
updateStreamBlock …FIXME
AppStatusListener takes the following when created:
ElementTrackingStore
SparkConf
live flag
updateRDDBlock …FIXME
KVStore
KVStore is the contract of…FIXME
package org.apache.spark.util.kvstore;
Table 2. KVStores
KVStore Description
ElementTrackingStore
InMemoryStore
LevelDB
KVStoreView
KVStoreView …FIXME
index Method
index …FIXME
ElementTrackingStore
ElementTrackingStore is a KVStore that…FIXME
Functions that…FIXME
flushTriggers
Used when…FIXME
write Method
write …FIXME
Trigger
Trigger is…FIXME
KVStore
SparkConf
InMemoryStore
InMemoryStore is…FIXME
LevelDB
LevelDB is…FIXME
InterruptibleIterator — Iterator With Support For Task Cancellation
Iterators are data structures that allow iterating over a sequence of elements. They
have a hasNext method for checking if there is a next element available, and a next
method which returns the next element and discards it from the iterator.
InterruptibleIterator takes the following when created:
TaskContext
Scala Iterator[T]
hasNext Method
hasNext: Boolean
hasNext is part of Iterator Contract to test whether this iterator can provide
Note
another element.
hasNext requests the TaskContext to kill the task if interrupted (that simply throws a TaskKilledException when the task's interrupted flag is enabled).
next Method
next(): T
Note next is part of Iterator Contract to produce the next element of this iterator.
Broadcast Variables
From the official documentation about Broadcast Variables:
Broadcast variables allow the programmer to keep a read-only variable cached on each
machine rather than shipping a copy of it with tasks.
Explicitly creating broadcast variables is only useful when tasks across multiple stages
need the same data or when caching the data in deserialized form is important.
The Broadcast feature in Spark uses SparkContext to create broadcast values and
BroadcastManager and ContextCleaner to manage their lifecycle.
With DEBUG logging level enabled, you should see the following messages in the logs:
After creating an instance of a broadcast variable, you can then reference the value using
value method.
scala> b.value
res0: Int = 1
Note value method is the only way to access the value of a broadcast variable.
With DEBUG logging level enabled, you should see the following messages in the logs:
When you are done with a broadcast variable, you should destroy it to release memory.
scala> b.destroy
With DEBUG logging level enabled, you should see the following messages in the logs:
scala> b.unpersist
value: T
value returns the value of a broadcast variable. You can only access the value until it is
destroyed after which you will see the following SparkException exception in the logs:
Internally, value makes sure that the broadcast variable is valid, i.e. destroy was not
called, and, if so, calls the abstract getValue method.
unpersist(): Unit
unpersist(blocking: Boolean): Unit
destroy(): Unit
Note Once a broadcast variable has been destroyed, it cannot be used again.
If you try to destroy a broadcast variable more than once, you will see the following
SparkException exception in the logs:
scala> b.destroy
org.apache.spark.SparkException: Attempted to use Broadcast(0) after it was destroyed
(destroy at <console>:27)
at org.apache.spark.broadcast.Broadcast.assertValid(Broadcast.scala:144)
at org.apache.spark.broadcast.Broadcast.destroy(Broadcast.scala:107)
at org.apache.spark.broadcast.Broadcast.destroy(Broadcast.scala:98)
... 48 elided
Internally, destroy marks a broadcast variable destroyed, i.e. the internal _isValid flag is
disabled.
In the end, doDestroy method is executed (that broadcast implementations are supposed to
provide).
Introductory Example
Let’s start with an introductory example to check out how to use broadcast variables and
build your initial understanding.
You’re going to use a static mapping of interesting projects with their websites, i.e.
Map[String, String] that the tasks, i.e. closures (anonymous functions) in transformations,
use.
scala> val pws = Map("Apache Spark" -> "http://spark.apache.org/", "Scala" -> "http://
www.scala-lang.org/")
pws: scala.collection.immutable.Map[String,String] = Map(Apache Spark -> http://spark.
apache.org/, Scala -> http://www.scala-lang.org/)
It works, but is very inefficient as the pws map is sent over the wire to executors while it
could have been there already. If there were more tasks that need the pws map, you could
improve their performance by minimizing the number of bytes that are going to be sent over
the network for task execution.
Semantically, the two computations - with and without the broadcast value - are exactly the
same, but the broadcast-based one wins performance-wise when there are more executors
spawned to execute many tasks that use pws map.
Introduction
Broadcast is part of Spark that is responsible for broadcasting information across nodes in
a cluster.
You use broadcast variable to implement map-side join, i.e. a join using a map . For this,
lookup tables are distributed across nodes in a cluster using broadcast and then looked up
inside map (to do the join implicitly).
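A minimal sketch of the map-side join idiom (the lookup table and the field names are made up for illustration):

// Small lookup table broadcast to every executor once
val countryNames = sc.broadcast(Map("PL" -> "Poland", "DE" -> "Germany"))

val events = sc.parallelize(Seq(("PL", 100), ("DE", 42), ("PL", 7)))
// The "join" happens inside map by looking the key up in the broadcast Map
val joined = events.map { case (code, value) =>
  (countryNames.value.getOrElse(code, "unknown"), value)
}
joined.collect()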
When you broadcast a value, it is copied to executors only once (while it is copied multiple
times for tasks otherwise). It means that broadcast can help to get your Spark application
faster if you have a large value to use in tasks or there are more tasks than executors.
It appears that a Spark idiom emerges that uses broadcast with collectAsMap to create a
Map for broadcast. When an RDD is mapped down to a smaller dataset (column-wise, not
record-wise), collected as a Map with collectAsMap , and then broadcast , using the very big
RDD to look its elements up in the broadcast Map is computationally faster.
Use large broadcasted HashMaps over RDDs whenever possible and leave RDDs with a
key to lookup necessary data as demonstrated above.
Broadcast Contract
The Broadcast contract is made up of the following methods that custom Broadcast
implementations are supposed to provide:
1. getValue
2. doUnpersist
3. doDestroy
Accumulators
Accumulators are variables that are "added" to through an associative and commutative
"add" operation. They act as a container for accumulating partial values across multiple
tasks (running on executors). They are designed to be used safely and efficiently in parallel
and distributed Spark computations and are meant for distributed counters and sums (e.g.
task metrics).
You can create built-in accumulators for longs, doubles, or collections or register custom
accumulators using the SparkContext.register methods. You can create accumulators with
or without a name, but only named accumulators are displayed in web UI (under Stages tab
for a given stage).
executor1: accumulator.add(incByExecutor1)
executor2: accumulator.add(incByExecutor2)
driver: println(accumulator.value)
Accumulators are not thread-safe. They do not really have to be, since the
DAGScheduler.updateAccumulators method that the driver uses to update the values of
accumulators after a task completes (successfully or with a failure) is only executed on a
single thread that runs the scheduling loop. Besides that, they are write-only data structures for
workers that have their own local accumulator reference, whereas accessing the value of an
accumulator is only allowed by the driver.
Accumulators are serializable so they can safely be referenced in the code executed in
executors and then safely sent over the wire for execution.
Flag whether…FIXME
atDriverSide
Used when…FIXME
merge Method
Caution FIXME
AccumulatorV2
register(
sc: SparkContext,
name: Option[String] = None,
countFailedValues: Boolean = false): Unit
register creates an AccumulatorMetadata metadata object for the accumulator (with a new unique accumulator ID).
In the end, register registers the accumulator for cleanup (only when ContextCleaner is
defined in the SparkContext ).
AccumulatorMetadata
AccumulatorMetadata is a container object with the metadata of an accumulator:
Accumulator ID
(optional) name
Named Accumulators
An accumulator can have an optional name that you can specify when creating an
accumulator.
AccumulableInfo
AccumulableInfo contains information about a task’s local updates to an Accumulable.
id of the accumulator
value
whether or not to countFailedValues to the final value of the accumulator for failed
tasks
optional metadata
AccumulableInfo is used to transfer accumulator updates from executors to the driver every executor heartbeat or when a task finishes.
Examples
var counter = 0
ints.foreach { n =>
println(s"int: $n")
counter = counter + 1
}
println(s"The number of elements is $counter")
AccumulatorContext
AccumulatorContext is a private[spark] internal object used to track accumulators by
Spark itself using an internal originals lookup table. Spark uses the AccumulatorContext
object to register and unregister accumulators.
The originals lookup table maps accumulator identifier to the accumulator itself.
Every accumulator has its own unique accumulator id that is assigned using the internal
nextId counter.
register Method
Caution FIXME
newId Method
Caution FIXME
AccumulatorContext.SQL_ACCUM_IDENTIFIER
AccumulatorContext.SQL_ACCUM_IDENTIFIER is an internal identifier for Spark SQL’s internal
accumulators. The value is sql and Spark uses it to distinguish Spark SQL metrics from
others.
SerializerManager
Caution FIXME
SerializerManager automatically selects the "best" serializer for shuffle blocks that could
either be KryoSerializer when a RDD’s types are known to be compatible with Kryo or the
default Serializer .
The common idiom in Spark’s code is to access the current SerializerManager using
SparkEnv.
SparkEnv.get.serializerManager
Caution FIXME
wrapStream Method
Caution FIXME
dataDeserializeStream Method
Caution FIXME
Caution FIXME
SerializerManager will automatically pick a Kryo serializer for ShuffledRDDs whose key,
value, and/or combiner types are primitives, arrays of primitives, or strings.
getSerializer selects the "best" Serializer given the input types for keys and values (in a
RDD).
getSerializer returns KryoSerializer when the types of keys and values are compatible with Kryo.
Settings
Table 1. Spark Properties

spark.shuffle.compress (default: true )
The flag to control whether to compress shuffle output when stored.

spark.block.failures.beforeLocationRefresh (default: 5 )

spark.io.encryption.enabled (default: false )
The flag to enable IO encryption.
MemoryManager — Memory Management System
MemoryManager is the base of memory managers that manage shared memory for task
execution (execution memory) and block storage (storage memory).
package org.apache.spark.memory
acquireExecutionMemory
Used exclusively when TaskMemoryManager is requested to acquireExecutionMemory

acquireStorageMemory
Used when:
UnifiedMemoryManager is requested to acquireUnrollMemory
MemoryStore is requested to putBytes, putIteratorAsValues and putIteratorAsBytes

acquireUnrollMemory
Used exclusively when MemoryStore is requested to reserveUnrollMemoryForThisTask

maxOffHeapStorageMemory
Used when:
UnifiedMemoryManager is requested to acquireStorageMemory
BlockManager is created

maxOnHeapStorageMemory
Used when:
BlockManager is created
UnifiedMemoryManager is requested to acquireStorageMemory
MemoryStore is requested for the total amount of memory available for storage (in bytes)
Execution memory is used for computation in shuffles, joins, sorts and aggregations.
Storage memory is used for caching and propagating internal data across the nodes in a
cluster.
Table 2. MemoryManagers
MemoryManager Description
StaticMemoryManager (legacy)
UnifiedMemoryManager (the default)

Internal registries:
offHeapStorageMemoryPool — FIXME. Used when…FIXME
pageSizeBytes — FIXME. Used when…FIXME
tungstenMemoryAllocator — FIXME. Used when…FIXME
Note tungstenMemoryAllocator is a Scala final val and cannot be changed by custom
MemoryManagers.

MemoryManager takes the following when created:
SparkConf
onHeapStorageMemory
onHeapExecutionMemory
releaseExecutionMemory Method
releaseExecutionMemory(
numBytes: Long,
taskAttemptId: Long,
memoryMode: MemoryMode): Unit
releaseExecutionMemory …FIXME
releaseAllExecutionMemoryForTask Method
releaseAllExecutionMemoryForTask …FIXME
tungstenMemoryMode Flag
tungstenMemoryMode: MemoryMode
tungstenMemoryMode returns OFF_HEAP only when the following are all met:
spark.memory.offHeap.enabled configuration property is enabled
spark.memory.offHeap.size configuration property is greater than 0
JVM supports unaligned memory access (aka unaligned Unsafe, i.e. sun.misc.Unsafe
package is available and the underlying system has unaligned-access capability)
Note tungstenMemoryMode is used when TaskMemoryManager is created and when
MemoryManager is created (and initializes the pageSizeBytes and
tungstenMemoryAllocator internal properties).
freePage Method
freePage …FIXME
UnifiedMemoryManager — Spark’s Memory Manager
UnifiedMemoryManager is the default MemoryManager with onHeapStorageMemory being ???
getMaxMemory calculates the maximum memory to use for execution and storage.
import org.apache.spark.network.util.JavaUtils
scala> JavaUtils.byteStringAsMb(maxMemory + "b")
res1: Long = 912
getMaxMemory reads the maximum amount of memory that the Java virtual machine will
attempt to use and decrements it by reserved system memory (for non-storage and non-
execution purposes).
1. System memory is not smaller than about 1.5 times the reserved system memory.
2. spark.executor.memory is not smaller than about 1.5 times the reserved system memory.
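A back-of-the-envelope sketch of the calculation behind the 912 MB above, assuming Runtime.getRuntime.maxMemory reports roughly 1820 MB (a 2 GB heap minus JVM overhead), 300 MB of reserved system memory and the default spark.memory.fraction of 0.6:

val systemMemory   = 1820L * 1024 * 1024             // Runtime.getRuntime.maxMemory (assumed)
val reservedMemory = 300L * 1024 * 1024               // reserved for non-storage, non-execution purposes
val usableMemory   = systemMemory - reservedMemory    // 1520 MB
val maxMemory      = (usableMemory * 0.6).toLong      // 912 MB shared by execution and storage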
class UnifiedMemoryManager(
conf: SparkConf,
val maxHeapMemory: Long,
onHeapStorageRegionSize: Long,
numCores: Int)
onHeapStorageRegionSize
numCores
Internally, apply calculates the maximum memory to use (given conf ). It then creates a
UnifiedMemoryManager with the following values:
1. maxHeapMemory being the maximum memory just calculated.
2. onHeapStorageRegionSize being spark.memory.storageFraction of the maximum memory.
3. numCores as configured.
acquireStorageMemory Method
acquireStorageMemory(
blockId: BlockId,
numBytes: Long,
memoryMode: MemoryMode): Boolean
It makes sure that the requested number of bytes numBytes (for a block to store) fits the
available memory. If it is not the case, you should see the following INFO message in the
logs and the method returns false .
INFO Will not store [blockId] as the required space ([numBytes] bytes) exceeds our mem
ory limit ([maxMemory] bytes)
If the requested number of bytes numBytes is greater than memoryFree in the storage pool,
acquireStorageMemory will attempt to use the free memory from the execution pool.
Note The storage pool can use the free memory from the execution pool.
It will take as much memory as required to fit numBytes from memoryFree in the execution
pool (up to the whole free memory in the pool).
Ultimately, acquireStorageMemory requests the storage pool for numBytes for blockId .
acquireUnrollMemory Method
acquireExecutionMemory Method
acquireExecutionMemory(
numBytes: Long,
taskAttemptId: Long,
memoryMode: MemoryMode): Long
acquireExecutionMemory does…FIXME
onHeapStorageRegionSize
storageRegionSize offHeapStorageMemory
<1>
Caution FIXME
maxOnHeapStorageMemory Method
maxOnHeapStorageMemory: Long
Settings
Table 2. Spark Properties

spark.memory.fraction (default: 0.6 )
Fraction of JVM heap space used for execution and storage.

spark.memory.storageFraction (default: 0.5 )

spark.testing.memory (default: Java’s Runtime.getRuntime.maxMemory )
System memory.
StaticMemoryManager — Legacy Memory Manager
StaticMemoryManager is…FIXME
acquireUnrollMemory Method
acquireUnrollMemory(
blockId: BlockId,
numBytes: Long,
memoryMode: MemoryMode): Boolean
acquireUnrollMemory …FIXME
acquireStorageMemory Method
acquireStorageMemory(
blockId: BlockId,
numBytes: Long,
memoryMode: MemoryMode): Boolean
acquireStorageMemory …FIXME
MemoryManager Configuration Properties
Note It may be that the page should be somewhere else. Any suggestions?
import org.apache.spark.internal.config._
import org.apache.spark.SparkConf
// all properties are private[spark], sorry
// the following won't work unless the code is under org.apache.spark package
val sparkConf = new SparkConf().set(MEMORY_OFFHEAP_ENABLED.key, "false")
import org.apache.spark.SparkConf
val sparkConf = new SparkConf()
scala> println(sparkConf.getOption("spark.memory.offHeap.enabled"))
None
package org.apache.spark.japila

object Config {
import org.apache.spark.SparkConf
import org.apache.spark.internal.config._
def with_MEMORY_OFFHEAP_ENABLED(sparkConf: SparkConf = new SparkConf()): SparkConf =
{
sparkConf.set(MEMORY_OFFHEAP_ENABLED, true)
}
}
// END
import org.apache.spark.japila.Config
val sparkConf_OffHeap = Config.with_MEMORY_OFFHEAP_ENABLED(sparkConf)
scala> println(sparkConf_OffHeap.get("spark.memory.offHeap.enabled"))
true
spark.memory.offHeap.size
SparkEnv — Spark Runtime Environment
Spark Runtime Environment is represented by a SparkEnv object that holds all the required
runtime services for a running Spark application with separate environments for the driver
and executors.
The idiomatic way in Spark to access the current SparkEnv when on the driver or executors
is to use get method.
import org.apache.spark._
scala> SparkEnv.get
res0: org.apache.spark.SparkEnv = org.apache.spark.SparkEnv@49322d04
rpcEnv RpcEnv
serializer Serializer
closureSerializer Serializer
serializerManager SerializerManager
mapOutputTracker MapOutputTracker
shuffleManager ShuffleManager
broadcastManager BroadcastManager
blockManager BlockManager
securityManager SecurityManager
metricsSystem MetricsSystem
memoryManager MemoryManager
outputCommitCoordinator OutputCommitCoordinator
driverTmpDir
Refer to Logging.
create(
conf: SparkConf,
executorId: String,
hostname: String,
port: Int,
isDriver: Boolean,
isLocal: Boolean,
numUsableCores: Int,
listenerBus: LiveListenerBus = null,
mockOutputCommitCoordinator: Option[OutputCommitCoordinator] = None): SparkEnv
executor).
numUsableCores
Used to create MemoryManager,
NettyBlockTransferService and BlockManager.
executors, respectively.
Caution FIXME
It creates a CacheManager.
It initializes userFiles temporary directory used for downloading dependencies for a driver
while this is the executor’s current working directory for an executor.
An OutputCommitCoordinator is created.
create is used when SparkEnv is requested for the SparkEnv for the driver
Note
and executors.
If called from the driver, you should see the following INFO message in the logs:
createDriverEnv(
conf: SparkConf,
isLocal: Boolean,
listenerBus: LiveListenerBus,
numCores: Int,
mockOutputCommitCoordinator: Option[OutputCommitCoordinator] = None): SparkEnv
LiveListenerBus, the number of cores to use for execution in local mode or 0 otherwise,
and a OutputCommitCoordinator (default: none).
It then passes the call straight on to the create helper method (with driver executor id,
isDriver enabled, and the input parameters).
createExecutorEnv(
conf: SparkConf,
executorId: String,
hostname: String,
port: Int,
numCores: Int,
ioEncryptionKey: Option[Array[Byte]],
isLocal: Boolean): SparkEnv
createExecutorEnv simply creates the base SparkEnv (passing in all the input parameters)
get: SparkEnv
import org.apache.spark._
scala> SparkEnv.get
res0: org.apache.spark.SparkEnv = org.apache.spark.SparkEnv@49322d04
stop(): Unit
stop checks isStopped internal flag and does nothing when enabled.
Otherwise, stop turns isStopped flag on, stops all pythonWorkers and requests the
following services to stop:
1. MapOutputTracker
2. ShuffleManager
3. BroadcastManager
4. BlockManager
5. BlockManagerMaster
6. MetricsSystem
7. OutputCommitCoordinator
Only on the driver, stop deletes the temporary directory. You can see the following WARN
message in the logs if the deletion fails.
Note stop is used when SparkContext stops (on the driver) and Executor stops.
set Method
set saves the input SparkEnv to env internal registry (as the default SparkEnv ).
Settings
spark.serializer (default: org.apache.spark.serializer.JavaSerializer )
Serializer.
TIP: Enable DEBUG logging level for org.apache.spark.SparkEnv logger to see the current value:
DEBUG SparkEnv: Using serializer: [serializer]
DAGScheduler — Stage-Oriented Scheduler
The introduction that follows was highly influenced by the scaladoc of
org.apache.spark.scheduler.DAGScheduler. As DAGScheduler is a private
class it does not appear in the official API documentation. You are strongly
Note encouraged to read the sources and only then read this and the related pages
afterwards.
"Reading the sources", I say?! Yes, I am kidding!
Introduction
DAGScheduler is the scheduling layer of Apache Spark that implements stage-oriented
scheduling. It transforms a logical execution plan (i.e. RDD lineage of dependencies built
using RDD transformations) to a physical execution plan (using stages).
After an action has been called, SparkContext hands over a logical plan to DAGScheduler
that it in turn translates to a set of stages that are submitted as TaskSets for execution (see
Execution Model).
DAGScheduler computes a directed acyclic graph (DAG) of stages for each job, keeps track
of which RDDs and stage outputs are materialized, and finds a minimal schedule to run jobs.
It then submits stages to TaskScheduler.
In addition to coming up with the execution DAG, DAGScheduler also determines the
preferred locations to run each task on, based on the current cache status, and passes the
information to TaskScheduler.
DAGScheduler tracks which RDDs are cached (or persisted) to avoid "recomputing" them,
i.e. redoing the map side of a shuffle. DAGScheduler remembers what ShuffleMapStages
have already produced output files (that are stored in BlockManagers).
DAGScheduler is only interested in cache location coordinates, i.e. host and executor id, per
partition of a RDD.
Furthermore, it handles failures due to shuffle output files being lost, in which case old
stages may need to be resubmitted. Failures within a stage that are not caused by shuffle
file loss are handled by the TaskScheduler itself, which will retry each task a small number of
times before cancelling the whole stage.
DAGScheduler runs an internal event bus to which it posts DAGSchedulerEvent events that it then
reads and executes sequentially. See the section Internal Event Loop - dag-scheduler-event-
loop.
failedEpoch
The lookup table of lost executors and the epoch of the
event.
failedStages
Stages that failed due to fetch failures (when a task fails
with FetchFailed exception).
shuffleIdToMapStage
The lookup table of ShuffleMapStages per
ShuffleDependency.
Refer to Logging.
DAGScheduler reports metrics about its execution (refer to the section Metrics).
createResultStage(
rdd: RDD[_],
func: (TaskContext, Iterator[_]) => _,
partitions: Array[Int],
jobId: Int,
callSite: CallSite): ResultStage
Caution FIXME
updateJobIdStageIdMaps Method
Caution FIXME
SparkContext
TaskScheduler
LiveListenerBus
MapOutputTrackerMaster
BlockManagerMaster
SparkEnv
DAGScheduler sets itself in the given TaskScheduler and in the end starts DAGScheduler
Event Bus.
Note DAGScheduler can reference all the services through a single SparkContext with
or without specifying an explicit TaskScheduler.
listenerBus: LiveListenerBus
DAGScheduler is created.
executorHeartbeatReceived Method
executorHeartbeatReceived(
execId: String,
accumUpdates: Array[(Long, Int, Int, Seq[AccumulableInfo])],
blockManagerId: BlockManagerId): Boolean
executorHeartbeatReceived posts a SparkListenerExecutorMetricsUpdate (to listenerBus)
and informs BlockManagerMaster that blockManagerId block manager is alive (by posting
BlockManagerHeartbeat).
cleanupStateForJobAndIndependentStages cleans up the state for job and any stages that
are not part of any other job (looking the stages up in the internal jobIdToStageIds registry).
If no stages are found, the following ERROR is printed out to the logs:
For each stage, cleanupStateForJobAndIndependentStages reads the jobs the stage belongs
to.
If the job does not belong to the jobs of the stage, the following ERROR is printed out to
the logs:
ERROR Job [jobId] not registered for stage [stageId] even though that stage was regist
ered for the job
If the job was the only job for the stage, the stage (and the stage id) gets cleaned up from
the registries, i.e. runningStages, shuffleIdToMapStage, waitingStages, failedStages and
stageIdToStage.
While removing from runningStages, you should see the following DEBUG message in the
logs:
While removing from waitingStages, you should see the following DEBUG message in the
logs:
While removing from failedStages, you should see the following DEBUG message in the
logs:
After all cleaning (using stageIdToStage as the source registry), if the stage belonged to the
one and only job , you should see the following DEBUG message in the logs:
markMapStageJobAsFinished marks the active job finished and notifies Spark listeners.
Internally, markMapStageJobAsFinished marks the zeroth partition finished and increases the
number of tasks finished in job .
The state of the job and independent stages are cleaned up.
submitJob[T, U](
rdd: RDD[T],
func: (TaskContext, Iterator[T]) => U,
partitions: Seq[Int],
callSite: CallSite,
resultHandler: (Int, U) => Unit,
properties: Properties): JobWaiter[U]
Figure 4. DAGScheduler.submitJob
Internally, submitJob does the following:
You may see a IllegalArgumentException thrown when the input partitions references
partitions not in the input rdd :
Note submitJob assumes that the partitions of a RDD are indexed from 0 onwards in
sequential order.
submitMapStage[K, V, C](
dependency: ShuffleDependency[K, V, C],
callback: MapOutputStatistics => Unit,
callSite: CallSite,
properties: Properties): JobWaiter[MapOutputStatistics]
Internally, submitMapStage increments nextJobId internal counter to get the job id.
submitMapStage then creates a JobWaiter (with the job id and with one artificial task) and
posts a MapStageSubmitted event to the DAGScheduler Event Bus.
cancelStage(stageId: Int)
cancelJobGroup prints the following INFO message to the logs followed by posting a JobGroupCancelled event to the DAGScheduler Event Bus.
cancelAllJobs(): Unit
taskGettingResult(taskInfo: TaskInfo)
taskGettingResult merely posts a GettingResultEvent event to the DAGScheduler Event Bus.
taskEnded(
task: Task[_],
reason: TaskEndReason,
result: Any,
accumUpdates: Map[Long, Any],
taskInfo: TaskInfo,
taskMetrics: TaskMetrics): Unit
taskSetFailed(
taskSet: TaskSet,
reason: String,
exception: Option[Throwable]): Unit
cancelJob prints the following INFO message and posts a JobCancelled to DAGScheduler
Event Bus.
getOrCreateParentStages finds all direct parent ShuffleDependencies of the input rdd and then finds (or creates) the ShuffleMapStage for each of them.
Caution FIXME
runJob[T, U](
rdd: RDD[T],
func: (TaskContext, Iterator[T]) => U,
partitions: Seq[Int],
callSite: CallSite,
resultHandler: (Int, U) => Unit,
properties: Properties): Unit
runJob submits an action job to the DAGScheduler and waits for a result.
Internally, runJob executes submitJob and then waits until a result comes using JobWaiter.
When the job succeeds, you should see the following INFO message in the logs:
When the job fails, you should see the following INFO message in the logs and the
exception (that led to the failure) is thrown.
getOrCreateShuffleMapStage(
shuffleDep: ShuffleDependency[_, _, _],
firstJobId: Int): ShuffleMapStage
getOrCreateShuffleMapStage finds the ShuffleMapStage for the input ShuffleDependency (in the shuffleIdToMapStage registry) or creates it (together with any missing ancestor shuffle map stages).
Note All the new ShuffleMapStage stages are associated with the input firstJobId .
createShuffleMapStage(
shuffleDep: ShuffleDependency[_, _, _],
jobId: Int): ShuffleMapStage
createShuffleMapStage creates a ShuffleMapStage for the input ShuffleDependency and
jobId (of an ActiveJob), possibly copying shuffle map output locations from previous jobs to avoid recomputing tasks unnecessarily.
internal counter).
Note The RDD of the new ShuffleMapStage is from the input ShuffleDependency.
clearCacheLocs(): Unit
clearCacheLocs clears the internal registry of the partition locations per RDD.
DAGScheduler clears the cache while resubmitting failed stages, and as a result
Note
of JobSubmitted, MapStageSubmitted, CompletionEvent, ExecutorLost events.
getShuffleDependencies finds direct parent shuffle dependencies for the given RDD.
failJobAndIndependentStages(
job: ActiveJob,
failureReason: String,
exception: Option[Throwable] = None): Unit
The internal failJobAndIndependentStages method fails the input job and all the stages that
are only used by the job.
If no stages could be found, you should see the following ERROR message in the logs:
Otherwise, for every stage, failJobAndIndependentStages finds the job ids the stage belongs
to.
If no stages could be found or the job is not referenced by the stages, you should see the
following ERROR message in the logs:
ERROR Job [id] not registered for stage [id] even though that stage was registered for
the job
Only when there is exactly one job registered for the stage and the stage is in RUNNING
state (in runningStages internal registry), TaskScheduler is requested to cancel the stage’s
tasks and marks the stage finished.
abortStage(
failedStage: Stage,
reason: String,
exception: Option[Throwable]): Unit
abortStage is an internal method that finds all the active jobs that depend on the
If it was, abortStage finds all the active jobs (in the internal activeJobs registry) with the
final stage depending on the failedStage stage.
At this time, the completionTime property (of the failed stage’s StageInfo) is assigned to the
current time (millis).
All the active jobs that depend on the failed stage (as calculated above) and the stages that
do not belong to other jobs (aka independent stages) are failed (with the failure reason being
"Job aborted due to stage failure: [reason]" and the input exception ).
If there are no jobs depending on the failed stage, you should see the following INFO
message in the logs:
INFO Ignoring failure of [failedStage] because all jobs depending on it are done
stageDependsOn compares two stages and returns whether the stage depends on the target stage (true) or not (false).
Internally, stageDependsOn walks through the graph of RDDs of the input stage . For every dependency of an RDD (using RDD.dependencies ), stageDependsOn adds the RDD of a NarrowDependency to a stack of RDDs to visit, while for a ShuffleDependency it finds the ShuffleMapStage (for the dependency and the stage 's first job id) and adds the map stage's RDD to the stack of RDDs to visit only when the map stage is not ready yet, i.e. not all the partitions have shuffle outputs.
After all the RDDs of the input stage are visited, stageDependsOn checks if the target 's
RDD is among the RDDs of the stage , i.e. whether the stage depends on target stage.
schedule their execution. Later on, TaskSetManager talks back to DAGScheduler to inform
about the status of the tasks using the same "communication channel".
It allows Spark to release the current thread when posting happens and let the event loop
handle events on a separate thread - asynchronously.
submitWaitingChildStages submits for execution all waiting stages for which the input parent stage is the direct parent.
Note Waiting stages are the stages registered in waitingStages internal registry.
When executed, you should see the following TRACE messages in the logs:
submitWaitingChildStages finds child stages of the input parent stage, removes them from
waitingStages internal registry, and submits one by one sorted by their job ids.
submitStage(stage: Stage)
submitStage is an internal method that DAGScheduler uses to submit the input stage or its missing parents (if there are any stages that have not been computed yet before the input stage can be).
Internally, submitStage first finds the earliest-created job id that needs the stage .
Note: A stage itself tracks the jobs (their ids) it belongs to (using the internal jobIds registry).
If there are no jobs that require the stage , submitStage aborts it with the reason:
If however there is a job for the stage , you should see the following DEBUG message in
the logs:
submitStage checks the status of the stage and continues only when it was not recorded in the waiting, running or failed internal registries. It exits otherwise.
With the stage ready for submission, submitStage calculates the list of missing parent
stages of the stage (sorted by their job ids). You should see the following DEBUG message
in the logs:
When the stage has no parent stages missing, you should see the following INFO
message in the logs:
submitStage submits the stage (with the earliest-created job id) and finishes.
If however there are missing parent stages for the stage , submitStage submits all the
parent stages, and the stage is recorded in the internal waitingStages registry.
If TaskScheduler reports that a task failed because a map output file from a previous stage
was lost, the DAGScheduler resubmits the lost stage. This is detected through a
CompletionEvent with FetchFailed , or an ExecutorLost event. DAGScheduler will wait a
small amount of time to see whether other nodes or tasks fail, then resubmit TaskSets for
any lost stage(s) that compute the missing tasks.
Please note that tasks from the old attempts of a stage could still be running.
A stage object tracks multiple StageInfo objects to pass to Spark listeners or the web UI.
The latest StageInfo for the most recent attempt for a stage is accessible through
latestInfo .
Preferred Locations
DAGScheduler computes where to run each task in a stage based on the preferred locations of its underlying RDDs, or the location of cached or shuffle data.
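As a public-API illustration (the HDFS path below is hypothetical), RDD.preferredLocations exposes an RDD's own placement hints, e.g. HDFS block hosts for a HadoopRDD; locations of cached and shuffle data are tracked separately (getCacheLocs, MapOutputTracker):
val lines = sc.textFile("hdfs:///path/to/input.txt")  // hypothetical input
lines.partitions.take(3).foreach { p =>
  // Hostnames backing this partition (may be empty when running locally)
  println(s"partition ${p.index}: ${lines.preferredLocations(p).mkString(", ")}")
}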
Internally, getMissingParentStages starts with the stage 's RDD and walks up the tree of all
parent RDDs to find uncached partitions.
getMissingParentStages traverses the parent dependencies of the RDD and acts according
Note: A ShuffleMapStage is available when all its partitions are computed, i.e. results are available (as blocks).
submitMissingTasks …FIXME
Caution FIXME
When executed, you should see the following DEBUG message in the logs:
The input stage 's pendingPartitions internal field is cleared (it is later filled out with the
partitions to run tasks for).
submitMissingTasks requests the stage for missing partitions, i.e. the indices of the partitions to compute.
For the missing partitions, submitMissingTasks computes their task locality preferences, i.e. pairs of missing partition ids and their task locality information. Note: The locality information of an RDD is called preferred locations.
In case of non-fatal exceptions at this time (while getting the locality information),
submitMissingTasks creates a new stage attempt.
Despite the failure to submit any tasks, submitMissingTasks does announce that at least
there was an attempt on LiveListenerBus by posting a SparkListenerStageSubmitted
message.
submitMissingTasks then aborts the stage (with the reason being "Task creation failed" followed by the exception).
The stage is removed from the internal runningStages collection of stages and
submitMissingTasks exits.
When no exception was thrown (while computing the locality information for tasks), submitMissingTasks creates a new stage attempt and announces it on LiveListenerBus by posting a SparkListenerStageSubmitted message.
Note: Yes, that is correct. Whether there was a task submission failure or not, submitMissingTasks creates a new stage attempt and posts a SparkListenerStageSubmitted . That makes sense, doesn't it?
At that time, submitMissingTasks serializes the RDD (of the stage for which tasks are
submitted for) and, depending on the type of the stage, the ShuffleDependency (for
ShuffleMapStage ) or the function (for ResultStage ).
The serialized so-called task binary bytes are "wrapped" as a broadcast variable (to make it
available for executors to execute later on).
Note: That exact moment should make clear how important broadcast variables are for Spark itself. You, a Spark developer, can use them, too, to distribute data across the nodes in a Spark application in a very efficient way.
Any NotSerializableException exceptions lead to aborting the stage (with the reason being
"Task not serializable: [exception]") and removing the stage from the internal runningStages
collection of stages. submitMissingTasks exits.
Any non-fatal exceptions lead to aborting the stage (with the reason being "Task serialization
failed" followed by the exception) and removing the stage from the internal runningStages
collection of stages. submitMissingTasks exits.
Caution FIXME Image with creating tasks for partitions in the stage.
Any non-fatal exceptions lead to aborting the stage (with the reason being "Task creation
failed" followed by the exception) and removing the stage from the internal runningStages
collection of stages. submitMissingTasks exits.
If there are tasks to submit for execution (i.e. there are missing partitions in the stage), you
should see the following INFO message in the logs:
submitMissingTasks records the partitions (of the tasks) in the stage 's pendingPartitions
property.
submitMissingTasks submits the tasks to TaskScheduler for execution (with the id of the
stage , attempt id, the input jobId , and the properties of the ActiveJob with jobId ).
Caution FIXME What are the ActiveJob properties for? Where are they used?
submitMissingTasks records the submission time in the stage’s StageInfo and exits.
If however there are no tasks to submit for execution, submitMissingTasks marks the stage
as finished (with no errorMessage ).
You should see a DEBUG message that varies per the type of the input stage which are:
or
In the end, with no tasks to submit for execution, submitMissingTasks submits waiting child
stages for execution and exits.
getCacheLocs gives TaskLocations (block locations) for the partitions of the input rdd .
Note: The size of the collection from getCacheLocs is exactly the number of partitions in the rdd RDD.
Note: The size of every TaskLocation collection (i.e. every entry in the result of getCacheLocs ) is exactly the number of blocks managed using BlockManagers on executors.
Internally, getCacheLocs finds rdd in the cacheLocs internal registry (of partition locations
per RDD).
If rdd is not in cacheLocs internal registry, getCacheLocs branches per its storage level.
For NONE storage level (i.e. no caching), the result is an empty locations (i.e. no location
preference).
For other non- NONE storage levels, getCacheLocs requests BlockManagerMaster for block
locations that are then mapped to TaskLocations with the hostname of the owning
BlockManager for a block (of a partition) and the executor id.
getCacheLocs records the computed block locations per partition (as TaskLocation) in the cacheLocs internal registry.
getPreferredLocsInternal(
rdd: RDD[_],
partition: Int,
visited: HashSet[(RDD[_], Int)]): Seq[TaskLocation]
getPreferredLocsInternal first finds the TaskLocations for the partition of the rdd in the cache of partition locations (i.e. getCacheLocs).
Otherwise, if not found, getPreferredLocsInternal requests rdd for the preferred locations
of partition and returns them.
If all the attempts fail to yield any non-empty result, getPreferredLocsInternal returns an
empty collection of TaskLocations.
stop(): Unit
stop stops the internal DAGSchedulerEventProcessLoop and TaskScheduler.
The private updateAccumulators method merges the partial values of accumulators from a
completed task into their "source" accumulators on the driver.
For each AccumulableInfo in the CompletionEvent , a partial value from a task is obtained
(from AccumulableInfo.update ) and added to the driver’s accumulator (using
Accumulable.++= method).
For named accumulators with the update value being a non-zero value, i.e. not
Accumulable.zero :
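As a user-facing illustration (using the newer AccumulatorV2 API rather than the legacy Accumulable), here is a driver-side accumulator whose partial per-task values get merged on task completion:
val processed = sc.longAccumulator("records processed")
sc.parallelize(1 to 1000, 4).foreach { _ => processed.add(1) }
// The driver-side value reflects the merged partial values once the job finishes
println(processed.value)  // 1000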
Settings
Table 3. Spark Properties
Spark Property: spark.test.noStageRetry
Default Value: false
Description: When enabled (i.e. true ), task failures with FetchFailed exceptions will not cause stage retries, in order to surface the problem. Used for testing.
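Such a property would be set on SparkConf before the SparkContext is created; a sketch (the app name and master are placeholders):
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("no-stage-retry-demo")       // placeholder
  .setMaster("local[2]")                   // placeholder
  .set("spark.test.noStageRetry", "true")  // testing-only property described above
val sc = new SparkContext(conf)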
Jobs
ActiveJob
A job (aka action job or active job) is a top-level work item (computation) submitted to
DAGScheduler to compute the result of an action (or for Adaptive Query Planning / Adaptive
Scheduling).
A job starts with a single target RDD, but can ultimately include other RDDs that are all part
of the target RDD’s lineage graph.
Caution FIXME: Where are instances of ActiveJob used?
A job can be one of two logical types (that are only distinguished by an internal finalStage field of ActiveJob ):
1. Map-stage job that computes the map output files for a ShuffleMapStage (for submitMapStage ) before any downstream stages are submitted. It is also used for Adaptive Query Planning / Adaptive Scheduling, to look at map output statistics before submitting later stages.
2. Result job that computes a ResultStage to execute an action.
Jobs track how many partitions have already been computed (using finished array of
Boolean elements).
Stage — Physical Unit Of Execution
A stage is a set of parallel tasks — one task per partition (of an RDD that computes partial
results of a function executed as part of a Spark job).
A stage can only work on the partitions of a single RDD (identified by rdd ), but can be
associated with many other dependent parent stages (via internal field parents ), with the
boundary of a stage marked by shuffle dependencies.
Submitting a stage can therefore trigger execution of a series of dependent parent stages
(refer to RDDs, Job Execution, Stages, and Partitions).
Figure 2. Submitting a job triggers execution of the stage and its parent stages
Finally, every stage has a firstJobId that is the id of the job that submitted the stage.
ShuffleMapStage is an intermediate stage (in the execution DAG) that produces data for
other stage(s). It writes map output files for a shuffle. It can also be the final stage in a
job in Adaptive Query Planning / Adaptive Scheduling.
ResultStage is the final stage that executes a Spark action in a user program by running
a function on an RDD.
When a job is submitted, a new stage is created with the parent ShuffleMapStage linked —
they can be created from scratch or linked to, i.e. shared, if other jobs use them already.
DAGScheduler splits up a job into a collection of stages. Each stage contains a sequence of
narrow transformations that can be completed without shuffling the entire data set,
separated at shuffle boundaries, i.e. where shuffle occurs. Stages are thus a result of
breaking the RDD graph at shuffle boundaries.
In the end, every stage will have only shuffle dependencies on other stages, and may
compute multiple operations inside it. The actual pipelining of these operations happens in
the RDD.compute() functions of various RDDs, e.g. MappedRDD , FilteredRDD , etc.
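You can see where the shuffle boundary (and hence the stage split) falls using the public toDebugString on an RDD, for example:
val counts = sc.parallelize(1 to 10, 2).map(n => (n % 3, n)).reduceByKey(_ + _)
println(counts.toDebugString)
// The ShuffledRDD line marks the shuffle boundary: the map is pipelined into the
// first (shuffle map) stage, while reduceByKey's result belongs to the next stage.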
At some point of time in a stage’s life, every partition of the stage gets transformed into a
task - ShuffleMapTask or ResultTask for ShuffleMapStage and ResultStage, respectively.
Partitions are computed in jobs, and result stages may not always need to compute all
partitions in their target RDD, e.g. for actions like first() and lookup() .
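For example, first() initially runs a job on a single partition only (scanning more only when needed), so its ResultStage does not compute all 100 partitions below:
val rdd = sc.parallelize(1 to 1000000, 100)
rdd.first()  // typically a one-task job, not 100 tasks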
DAGScheduler prints the following INFO message when there are tasks to submit:
When no tasks in a stage can be submitted, the following DEBUG message shows in the
logs:
FIXME
fetchFailedAttemptIds: FIXME. Used when…FIXME
numPartitions: Number of partitions. Used when…FIXME
Stage Contract
findMissingPartitions Method
Stage.findMissingPartitions() calculates the ids of the missing partitions, i.e. partitions for
which the ActiveJob knows they are not finished (and so they are missing).
A ResultStage stage knows it by querying the active job about partition ids ( numPartitions )
that are not finished (using ActiveJob.finished array of booleans).
failedOnFetchAndShouldAbort Method
Stage.failedOnFetchAndShouldAbort(stageAttemptId: Int): Boolean checks whether the number of fetch failed attempts (in fetchFailedAttemptIds) has reached the allowed number of consecutive stage failures (and hence whether the stage should be aborted).
latestInfo: StageInfo
latestInfo simply returns the most recent StageInfo (i.e. makes it accessible).
makeNewStageAttempt(
numPartitionsToCompute: Int,
taskLocalityPreferences: Seq[Seq[TaskLocation]] = Seq.empty): Unit
Note makeNewStageAttempt uses rdd that was defined when Stage was created.
ShuffleMapStage — Intermediate Stage in Execution DAG
ShuffleMapStage (aka shuffle map stage or simply map stage) is an intermediate stage in the physical execution DAG that produces data for a shuffle.
Note The logical DAG or logical execution plan is the RDD lineage.
When executed, a ShuffleMapStage saves map output files that can later be fetched by
reduce tasks. When all map outputs are available, the ShuffleMapStage is considered
available (or ready).
Output locations can be missing, i.e. partitions have not been calculated or are lost.
ShuffleMapStage is an input for the other following stages in the DAG of stages and is also
A ShuffleMapStage may contain multiple pipelined operations, e.g. map and filter ,
before shuffle operation.
_mapStageJobs: A new ActiveJob can be registered and deregistered. The list of registered ActiveJobs is available using mapStageJobs.
1. id identifier
2. rdd — the RDD the stage operates on
3. numTasks — the number of tasks (that is exactly the number of partitions in the rdd )
addOutputLoc adds the input status to the output locations for the input partition .
removeOutputLoc removes the MapStatus for the input partition and bmAddress BlockManagerId from the output locations.
findMissingPartitions(): Seq[Int]
ShuffleMapStage Sharing
A ShuffleMapStage can be shared across multiple jobs, if these jobs reuse the same RDDs.
1. Shuffle at sortByKey()
3. Intentionally repeat the last action that submits a new job with two stages, with one being shared as already computed (see the sketch below)
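A minimal spark-shell sketch of that scenario (the RDD and actions are only illustrative):
val pairs = sc.parallelize(1 to 100, 4).map(n => (n % 10, n))
val sorted = pairs.sortByKey()  // introduces a shuffle, i.e. a ShuffleMapStage
sorted.count()                  // first job: runs the map stage and the result stage
sorted.count()                  // repeated action: the ShuffleMapStage is shared and
                                // shows up as "skipped" in the web UI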
numAvailableOutputs: Int
mapStageJobs: Seq[ActiveJob]
removeOutputsOnExecutor removes all MapStatuses with the input execId executor from the output locations registry.
If the input execId had the last registered MapStatus for a partition, removeOutputsOnExecutor decrements the _numAvailableOutputs counter and you should see the following INFO message in the logs:
outputLocInMapOutputTrackerFormat(): Array[MapStatus]
outputLocInMapOutputTrackerFormat returns the first (if available) element for every partition
from outputLocs internal registry. If there is no entry for a partition, that position is filled with
null .
ResultStage — Final Stage in Job
A ResultStage is the final stage in a job that applies a function on one or many partitions of
the target RDD to compute the result of an action.
func Property
Caution FIXME
setActiveJob Method
Caution FIXME
removeActiveJob Method
Caution FIXME
activeJob Method
activeJob: Option[ActiveJob]
StageInfo
Caution FIXME
fromStage Method
Caution FIXME
DAGSchedulerSource — Metrics Source for DAGScheduler
DAGScheduler uses Spark Metrics System to report metrics about internal status.
DAGSchedulerEventProcessLoop — DAGScheduler Event Bus
DAGSchedulerEventProcessLoop (dag-scheduler-event-loop) is an EventLoop single "business logic" thread for processing DAGSchedulerEvent events.
BeginEvent (handled by handleBeginEvent): TaskSetManager informs DAGScheduler that a task is starting (through taskStarted).
CompletionEvent (handled by handleTaskCompletion): Posted to inform DAGScheduler that a task has completed (successfully or not). CompletionEvent conveys the following information:
1. Completed Task
2. TaskEndReason
3. Result of the task
4. Accumulator updates
5. TaskInfo
ExecutorLost (handled by handleExecutorLost) conveys the following information:
1. execId
2. ExecutorLossReason
NOTE: The input filesLost for handleExecutorLost is enabled when ExecutorLossReason is SlaveLost with workerLost enabled (it is disabled by default).
NOTE: handleExecutorLost is also called when DAGScheduler is informed that a task has failed due to FetchFailed exception.
GettingResultEvent: TaskSetManager informs DAGScheduler (through taskGettingResult) that a task has completed and results are being fetched remotely.
7. Properties of the execution
MapStageSubmitted (handled by handleMapStageSubmitted): Posted to inform DAGScheduler that SparkContext submitted a MapStage for execution (through submitMapStage). MapStageSubmitted conveys the following information:
1. A job identifier (as jobId )
2. The ShuffleDependency
3. A CallSite (as callSite )
4. The JobListener to inform about the status of the stage
5. Properties of the execution
Caution FIXME
handleGetTaskResult Handler
handleGetTaskResult merely posts a SparkListenerTaskGettingResult (to the listenerBus Event Bus).
handleBeginEvent Handler
handleBeginEvent looks the stage of the task up (in the stageIdToStage internal registry) to compute the last attempt id (or -1 if not available) and posts a SparkListenerTaskStart (to the listenerBus event bus).
handleJobGroupCancelled Handler
Internally, handleJobGroupCancelled computes all the active jobs (registered in the internal
collection of active jobs) that have spark.jobGroup.id scheduling property set to groupId .
handleJobGroupCancelled then cancels every active job in the group one by one.
handleMapStageSubmitted(
jobId: Int,
dependency: ShuffleDependency[_, _, _],
callSite: CallSite,
listener: JobListener,
properties: Properties): Unit
INFO DAGScheduler: Got map stage job [id] ([callSite]) with [number] output partitions
INFO DAGScheduler: Final stage: [stage] ([name])
INFO DAGScheduler: Parents of final stage: [parents]
INFO DAGScheduler: Missing parents: [missingStages]
handleMapStageSubmitted finds all the registered stages for the input jobId and collects their latest StageInfo .
When handleMapStageSubmitted could not find or create a ShuffleMapStage , you should see
the following WARN message in the logs.
handleMapStageSubmitted INFO logs Got map stage job %s (%s) with %d output
partitions with dependency.rdd.partitions.length while handleJobSubmitted
does Got job %s (%s) with %d output partitions with partitions.length .
Tip FIXME: Could the above be cut to ActiveJob.numPartitions ?
handleMapStageSubmitted adds a new job with finalStage.addActiveJob(job)
while handleJobSubmitted sets with finalStage.setActiveJob(job) .
handleMapStageSubmitted checks if the final stage has already finished, tells the
listener and removes it using the code:
if (finalStage.isAvailable) {
markMapStageJobAsFinished(job, mapOutputTracker.getStatistics(dependency))
}
TaskSetFailed(
taskSet: TaskSet,
reason: String,
exception: Option[Throwable])
extends DAGSchedulerEvent
handleTaskSetFailed Handler
handleTaskSetFailed(
taskSet: TaskSet,
reason: String,
exception: Option[Throwable]): Unit
handleTaskSetFailed looks the stage (of the input taskSet ) up in the internal stageIdToStage registry and, when found, aborts it (with the input reason and exception ).
resubmitFailedStages Handler
resubmitFailedStages(): Unit
resubmitFailedStages iterates over the internal collection of failed stages and submits them.
Note resubmitFailedStages does nothing when there are no failed stages reported.
resubmitFailedStages clears the internal cache of RDD partition locations first. It then
makes a copy of the collection of failed stages so DAGScheduler can track failed stages
afresh.
The previously-reported failed stages are sorted by the corresponding job ids in incremental
order and resubmitted.
handleExecutorLost(
execId: String,
filesLost: Boolean,
maybeEpoch: Option[Long] = None): Unit
handleExecutorLost checks whether the input optional maybeEpoch is defined and, if not, requests the current epoch from MapOutputTrackerMaster.
Figure 2. DAGScheduler.handleExecutorLost
Recurring ExecutorLost events lead to the following repeating DEBUG message in the
logs:
Otherwise, when the executor execId is not in the list of executor lost or the executor
failure’s epoch is smaller than the input maybeEpoch , the executor’s lost event is recorded in
failedEpoch internal registry.
handleExecutorLost exits unless the ExecutorLost event was for a map output fetch
operation (and the input filesLost is true ) or external shuffle service is not used.
In such a case, you should see the following INFO message in the logs:
1. ShuffleMapStage.removeOutputsOnExecutor(execId) is called
2. MapOutputTrackerMaster.registerMapOutputs(shuffleId,
stage.outputLocInMapOutputTrackerFormat(), changeEpoch = true) is called.
handleJobCancellation Handler
handleJobCancellation first makes sure that the input jobId has been registered earlier
If the input jobId is not known to DAGScheduler , you should see the following DEBUG
message in the logs:
Otherwise, handleJobCancellation fails the active job and all independent stages (by looking
up the active job using jobIdToActiveJob) with failure reason:
task: Completed Task instance for a stage, partition and stage attempt.
taskInfo: TaskInfo
handleTaskCompletion handles a CompletionEvent (the input event ) and announces the task completion application-wide (by posting a SparkListenerTaskEnd to LiveListenerBus).
handleTaskCompletion checks the stage of the task out in the stageIdToStage internal registry.
Resubmitted
FetchFailed
Note A Stage tracks its own pending partitions using pendingPartitions property.
handleTaskCompletion branches off given the type of the task that completed, i.e. ShuffleMapTask or ResultTask.
If there is no job for the ResultStage , you should see the following INFO message in the
logs:
INFO DAGScheduler: Ignoring result from [task] because its job has finished
Note: ActiveJob tracks task completions in the finished property with flags for every partition in a stage. When the flag for a partition is enabled (i.e. true ), it is assumed that the partition has been computed (and no results from any ResultTask are expected and hence simply ignored).
Caution FIXME Describe why a partition could have more than one ResultTask running.
handleTaskCompletion ignores the CompletionEvent when the partition has already been marked as completed.
The partition for the ActiveJob (of the ResultStage ) is marked as computed and the number of partitions calculated is increased.
Note: ActiveJob tracks what partitions have already been computed and their number.
If the ActiveJob has finished (when the number of partitions computed is exactly the
number of partitions in a stage) handleTaskCompletion does the following (in order):
In the end, handleTaskCompletion notifies JobListener of the ActiveJob that the task
succeeded.
Note A task succeeded notification holds the output index and the result.
SparkDriverExecutionException exception).
The task's result is assumed to be a MapStatus that knows the executor where the task has finished.
If the executor is registered in failedEpoch internal registry and the epoch of the completed
task is not greater than that of the executor (as in failedEpoch registry), you should see the
following INFO message in the logs:
INFO DAGScheduler: Ignoring possibly bogus [task] completion from executor [executorId]
Otherwise, handleTaskCompletion registers the MapStatus result for the partition with the
stage (of the completed task).
handleTaskCompletion does more processing only when the ShuffleMapStage is registered as still running (in runningStages internal registry) and the ShuffleMapStage stage has no pending partitions to compute.
The ShuffleMapStage is then marked as finished, its shuffle map outputs are registered with MapOutputTrackerMaster (with the epoch incremented) and the internal cache of partition locations is cleared.
If the ShuffleMapStage stage is ready, all active jobs of the stage (aka map-stage jobs) are
marked as finished (with MapOutputStatistics from MapOutputTrackerMaster for the
ShuffleDependency ).
If however the ShuffleMapStage is not ready, you should see the following INFO message in
the logs:
TaskEndReason: Resubmitted
For Resubmitted case, you should see the following INFO message in the logs:
The task (by task.partitionId ) is added to the collection of pending partitions of the stage
(using stage.pendingPartitions ).
Tip: A stage knows how many partitions are yet to be calculated. A task knows about the partition id for which it was launched.
FetchFailed(
bmAddress: BlockManagerId,
shuffleId: Int,
mapId: Int,
reduceId: Int,
message: String)
extends TaskFailedReason
When FetchFailed happens, stageIdToStage is used to access the failed stage (using task.stageId ; the task is available in the input event of handleTaskCompletion(event: CompletionEvent) ). When the failed task belongs to an older attempt of that stage, you should see the following INFO message in the logs:
INFO Ignoring fetch failure from [task] as it's from [failedStage] attempt [task.stageAttemptId] and there is a more recent attempt for that stage (attempt ID [failedStage.latestInfo.attemptId]) running
If the failed stage is in runningStages , the following INFO message shows in the logs:
If the failed stage is not in runningStages , the following DEBUG message shows in the logs:
DEBUG Received fetch failure from [task], but its from [failedStage] which is no longer running
If the number of fetch failed attempts for the stage exceeds the allowed number, the failed
stage is aborted with the reason:
[failedStage] ([name]) has failed the maximum allowable number of times: 4. Most recent failure reason: [failureMessage]
messageScheduler.schedule(
new Runnable {
override def run(): Unit = eventProcessLoop.post(ResubmitFailedStages)
}, DAGScheduler.RESUBMIT_TIMEOUT, TimeUnit.MILLISECONDS)
For all the cases, the failed stage and map stages are both added to the internal registry of
failed stages.
If mapId (in the FetchFailed object for the case) is provided, the map stage output is
cleaned up (as it is broken) using mapStage.removeOutputLoc(mapId, bmAddress) and
MapOutputTrackerMaster.unregisterMapOutput(shuffleId, mapId, bmAddress) methods.
handleStageCancellation Handler
handleStageCancellation checks if the input stageId was registered earlier (in the internal
stageIdToStage registry) and if it was attempts to cancel the associated jobs (with "because
Stage [stageId] was cancelled" cancellation reason).
If the stage stageId was not registered earlier, you should see the following INFO message
in the logs:
handleJobSubmitted Handler
handleJobSubmitted(
jobId: Int,
finalRDD: RDD[_],
func: (TaskContext, Iterator[_]) => _,
partitions: Array[Int],
callSite: CallSite,
listener: JobListener,
properties: Properties)
handleJobSubmitted creates a new ResultStage (as finalStage in the picture below) given the input finalRDD , func , partitions , jobId and callSite .
INFO DAGScheduler: Got job [id] ([callSite]) with [number] output partitions
INFO DAGScheduler: Final stage: [stage] ([name])
INFO DAGScheduler: Parents of final stage: [parents]
INFO DAGScheduler: Missing parents: [missingStages]
handleJobSubmitted then registers the new job in jobIdToActiveJob and activeJobs internal registries (and with the final ResultStage ).
handleJobSubmitted finds all the registered stages for the input jobId and collects their
latest StageInfo .
JobListener
Spark subscribes for job completion or failure events (after submitting a job to
DAGScheduler) using JobListener trait.
1. JobWaiter waits until DAGScheduler completes a job and passes the results of tasks to
a resultHandler function.
2. ApproximateActionListener …FIXME
In ActiveJob as a listener to notify if tasks in this job finish or the job fails.
In JobSubmitted
JobListener Contract
JobListener is a private[spark] contract with the following two methods:
A JobListener object is notified each time a task succeeds (by taskSucceeded ) and when
the whole job fails (by jobFailed ).
JobWaiter
JobWaiter[T](
dagScheduler: DAGScheduler,
val jobId: Int,
totalTasks: Int,
resultHandler: (Int, T) => Unit)
extends JobListener
map stage.
You can use a JobWaiter to block until the job finishes executing or to cancel it.
While the methods execute, JobSubmitted and MapStageSubmitted events are posted that
reference the JobWaiter .
When the number of tasks that finished successfully (through taskSucceeded ) reaches totalTasks (i.e. the number of partitions to compute), the JobWaiter instance is marked successful. A jobFailed event marks the JobWaiter instance failed.
TaskScheduler — Spark Scheduler
TaskScheduler is responsible for submitting tasks for execution in a Spark application (per
scheduling policy).
TaskScheduler also offers executorHeartbeatReceived and executorLost methods that are to inform about active and lost executors, respectively.
TaskScheduler Contract
trait TaskScheduler {
def applicationAttemptId(): Option[String]
def applicationId(): String
def cancelTasks(stageId: Int, interruptThread: Boolean): Unit
def defaultParallelism(): Int
def executorHeartbeatReceived(
execId: String,
accumUpdates: Array[(Long, Seq[AccumulatorV2[_, _]])],
blockManagerId: BlockManagerId): Boolean
def executorLost(executorId: String, reason: ExecutorLossReason): Unit
def postStartHook(): Unit
def rootPool: Pool
def schedulingMode: SchedulingMode
def setDAGScheduler(dagScheduler: DAGScheduler): Unit
def start(): Unit
def stop(): Unit
def submitTasks(taskSet: TaskSet): Unit
}
executorHeartbeatReceived(
  execId: String,
  accumUpdates: Array[(Long, Seq[AccumulatorV2[_, _]])],
  blockManagerId: BlockManagerId): Boolean
postStartHook: Post-start initialization. Does nothing by default, but allows custom implementations for some additional post-start initialization. Used exclusively when SparkContext is created (right before SparkContext is considered fully initialized).
schedulingMode: Scheduling mode.
setDAGScheduler: Assigns DAGScheduler. Used exclusively when DAGScheduler is created (and passes on a reference to itself).
start: Starts TaskScheduler . Used exclusively when SparkContext is created.
stop: Stops TaskScheduler . Used exclusively when DAGScheduler is stopped.
TaskScheduler’s Lifecycle
A TaskScheduler is created while SparkContext is being created (by calling
SparkContext.createTaskScheduler for a given master URL and deploy mode).
The TaskScheduler is started right after the blocking TaskSchedulerIsSet message receives
a response.
The application ID and the application’s attempt ID are set at this point (and SparkContext
uses the application id to set spark.app.id Spark property, and configure SparkUI, and
BlockManager).
The internal _taskScheduler is cleared (i.e. set to null ) while SparkContext is being
stopped.
Tasks
Task
Task (aka command) is the smallest individual unit of execution that is launched to compute an RDD partition. There are two concrete Task types:
ShuffleMapTask that executes a task and divides the task’s output to multiple buckets
(based on the task’s partitioner).
ResultTask that executes a task and sends the task’s output back to the driver
application.
The very last stage in a Spark job consists of multiple ResultTasks, while earlier stages can
only be ShuffleMapTasks.
Caution FIXME You could have a Spark job with ShuffleMapTask being the last.
In other (more technical) words, a task is a computation on the records in a RDD partition in
a stage of a RDD in a Spark job.
metrics: TaskMetrics. Created lazily when Task is created from serializedTaskMetrics. Used when ???
A task can only belong to one stage and operate on a single partition. All tasks in a stage
must be completed before the stages that follow can start.
Tasks are spawned one by one for each stage and partition.
Task Contract
Collection of TaskLocations.
Stage ID
Partition ID
(optional) Job ID
(optional) Application ID
run(
taskAttemptId: Long,
attemptNumber: Int,
metricsSystem: MetricsSystem): T
run registers the task (identified as taskAttemptId ) with the local BlockManager .
run checks _killed flag and, if enabled, kills the task (with interruptThread flag disabled).
Note This is the moment when the custom Task 's runTask is executed.
In the end, run notifies TaskContextImpl that the task has completed (regardless of the
final outcome — a success or a failure).
In case of any exceptions, run notifies TaskContextImpl that the task has failed. run
requests MemoryStore to release unroll memory for this task (for both ON_HEAP and
OFF_HEAP memory modes).
Note: run uses SparkEnv to access the current BlockManager that it uses to access MemoryStore.
run requests MemoryManager to notify any tasks waiting for execution memory to be freed
Note: run is used exclusively when TaskRunner starts. The Task instance has just been deserialized from taskBytes that were sent over the wire to an executor. localProperties and TaskMemoryManager are already assigned.
Task States
A task can be in one of the following states (as described by TaskState enumeration):
LAUNCHING
RUNNING
FINISHED
FAILED
KILLED
LOST
Note: Task status updates are sent from executors to the driver through ExecutorBackend.
collectAccumulatorUpdates collects the latest values of the task's accumulators: internal accumulators whose current value is not the zero value and the RESULT_SIZE accumulator (regardless whether the value is its zero or not).
collectAccumulatorUpdates returns an empty collection when the task's TaskContextImpl is not initialized.
kill(interruptThread: Boolean)
kill marks the task to be killed, i.e. it sets the internal _killed flag to true .
If interruptThread is enabled and the internal taskThread is available, kill interrupts it.
ShuffleMapTask — Task for ShuffleMapStage
ShuffleMapTask is a Task that writes the result of computing records in an RDD partition to the shuffle system and returns information about the BlockManager and estimated size of the result shuffle blocks.
ShuffleMapTask is created when DAGScheduler submits missing tasks for a ShuffleMapStage .
taskBinary — the broadcast variable with the serialized task (as an array of bytes)
Partition
Collection of TaskLocations
ShuffleMapTask calculates preferredLocs internal attribute that is the input locs if defined.
Otherwise, it is empty.
Note: preferredLocs and locs are transient so they are not sent over the wire with the task.
runTask computes a MapStatus (which is the BlockManager and an estimated size of the
result shuffle block) after the records of the Partition were written to the shuffle system.
Internally, runTask uses the current closure Serializer to deserialize the taskBinary
serialized task (into a pair of RDD and ShuffleDependency).
runTask measures the thread and CPU time for deserialization (using the System clock and
runTask gets the records in the RDD partition (as an Iterator ) and writes them (to the
shuffle system).
Note: This is the moment in Task 's lifecycle (and its corresponding RDD) when an RDD partition is computed and in turn becomes a sequence of records (i.e. real data) on an executor.
runTask stops the ShuffleWriter (with success flag enabled) and returns the MapStatus .
When the record writing was not successful, runTask stops the ShuffleWriter (with
success flag disabled) and the exception is re-thrown.
You may also see the following DEBUG message in the logs when the ShuffleWriter could
not be stopped.
preferredLocations Method
preferredLocations: Seq[TaskLocation]
ResultTask
ResultTask is a Task that executes a function on the records in a RDD partition.
ResultTask is created when DAGScheduler submits missing tasks for a ResultStage .
ResultTask is created with a broadcast variable with the RDD and the function to execute on a partition:
Broadcast variable with the serialized task (as Array[Byte] ). The broadcast contains a serialized pair of the RDD and the function to execute.
Partition to compute
outputId
local Properties
(optional) Job id
(optional) Application id
preferredLocations Method
preferredLocations: Seq[TaskLocation]
runTask(context: TaskContext): U
runTask deserializes an RDD and a function from the broadcast and then executes the function on the records of the partition.
Internally, runTask starts by tracking the time required to deserialize an RDD and a function to execute.
runTask requests the closure Serializer to deserialize an RDD and the function to execute (from the taskBinary broadcast).
In the end, runTask executes the function (passing in the input context and the records
from partition of the RDD).
FetchFailedException
FetchFailedException exception may be thrown when a task runs (and ShuffleBlockFetcherIterator did not manage to fetch shuffle blocks).
shuffleId
mapId
reduceId
The root cause of the FetchFailedException is usually because the executor (with the
BlockManager for the shuffle blocks) is lost (i.e. no longer available) due to:
2. The cluster manager that manages the workers with the executors of your Spark
application, e.g. YARN, enforces the container memory limits and eventually decided to
kill the executor due to excessive memory usage.
You should review the logs of the Spark application using web UI, Spark History Server or
cluster-specific tools like yarn logs -applicationId for Hadoop YARN.
toTaskFailedReason Method
Caution FIXME
MapStatus — Shuffle Map Output Status
When the number of blocks (the size of uncompressedSizes ) is greater than 2000,
HighlyCompressedMapStatus is chosen.
Caution FIXME What exactly is 2000? Is this the number of tasks in a job?
MapStatus Contract
trait MapStatus {
def location: BlockManagerId
def getSizeForBlock(reduceId: Int): Long
}
location: The BlockManager where a ShuffleMapTask ran and the result is stored.
getSizeForBlock: The estimated size for the reduce block (in bytes).
TaskSet — Set of Tasks for Stage
The pair of a stage and a stage attempt uniquely describes a TaskSet and that is what you
can see in the logs when a TaskSet is used:
TaskSet [stageId].[stageAttemptId]
A TaskSet contains a fully-independent sequence of tasks that can run right away based on
the data that is already on the cluster, e.g. map output files from previous stages, though it
may fail if this data becomes unavailable.
removeRunningTask
Caution FIXME Review TaskSet.removeRunningTask(tid)
TaskSchedulerImpl.submitTasks
TaskSchedulerImpl.createTaskSetManager
The priority field is used in FIFOSchedulingAlgorithm in which equal priorities give stages
an advantage (not to say priority).
Effectively, the priority field is the job’s id of the first job this stage was part of (for FIFO
scheduling).
TaskSetManager
TaskSetManager is a Schedulable that manages scheduling of tasks in a TaskSet.
TaskSetManager uses maxTaskFailures to control how many times a single task can fail
before an entire TaskSet gets aborted that can take the following values:
Name Description
currentLocalityIndex
lastLaunchTime
localityWaits
name
priority
recentExceptions
speculatableTasks
successful: Status flags of the tasks. All tasks start with their flags disabled, i.e. false , when TaskSetManager is created. The flag for a task is turned on, i.e. true , when a task finishes successfully but also with a failure.
taskInfos: Registered when a task is scheduled for execution (given resource offer). NOTE: It appears that the entries stay forever, i.e. are never removed (perhaps because the maintenance overhead is not needed given a TaskSetManager is a short-lived entity).
tasksSuccessful
totalResultSize: The current total size of the result of all the tasks that have finished. Starts from 0 when TaskSetManager is created.
Refer to Logging.
isTaskBlacklistedOnExecOrNode Method
Caution FIXME
getLocalityIndex Method
Caution FIXME
dequeueSpeculativeTask Method
Caution FIXME
executorAdded Method
executorAdded simply calls recomputeLocality method.
abortIfCompletelyBlacklisted Method
Caution FIXME
TaskSetManager is Schedulable
TaskSetManager is a Schedulable with the following implementation:
name is TaskSet_[taskSet.stageId.toString]
It means that it can only be a leaf in the tree of Schedulables (with Pools being the
nodes).
weight is always 1 .
minShare is always 0 .
getSortedTaskSetQueue returns a one-element collection with the sole element being itself.
executorLost
checkSpeculatableTasks
handleTaskGettingResult finds TaskInfo for the tid task in the taskInfos internal registry, marks it as fetching the result and notifies DAGScheduler (through taskGettingResult).
addRunningTask adds tid to the runningTasksSet internal registry and requests the parent Pool to increase the number of running tasks.
removeRunningTask removes tid from the runningTasksSet internal registry and requests the parent Pool to decrease the number of running tasks.
It then checks whether the number is equal or greater than the number of tasks completed
successfully (using tasksSuccessful ).
Having done that, it computes the median duration of all the successfully completed tasks (using taskInfos internal registry) and a task length threshold as the median duration multiplied by spark.speculation.multiplier (but at least 100 ms).
For each task (using taskInfos internal registry) that is not marked as successful yet (using
successful ) for which there is only one copy running (using copiesRunning ) and the task
takes more time than the calculated threshold, but it was not in speculatableTasks it is
assumed speculatable.
INFO Marking task [index] in stage [taskSet.id] (on [info.host]) as speculatable because it ran more than [threshold] ms
The task gets added to the internal speculatableTasks collection. The method responds
positively.
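A back-of-the-envelope sketch of the threshold described above (the names and numbers are illustrative, not Spark's internals):
// Durations (ms) of the successfully completed tasks, e.g. taken from taskInfos
val durations = Seq(200L, 220L, 250L, 1900L).sorted
val medianDuration = durations(durations.length / 2)
val multiplier = 1.5                                       // spark.speculation.multiplier (default 1.5)
val threshold = math.max(multiplier * medianDuration, 100) // never below 100 ms
// A task with a single running copy, not yet successful and running longer
// than `threshold` is marked as speculatable.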
getAllowedLocalityLevel Method
Caution FIXME
resourceOffer(
execId: String,
host: String,
maxLocality: TaskLocality): Option[TaskDescription]
When TaskSetManager is a zombie or the resource offer (as executor and host) is
blacklisted, resourceOffer finds no tasks to execute (and returns no TaskDescription).
resourceOffer calculates the allowed task locality for task selection. When the input maxLocality is not NO_PREF , resourceOffer computes the allowed locality level (for the current time) and sets it as the current task locality if more localized (specific).
If a task (index) is found, resourceOffer takes the Task (from tasks registry).
resourceOffer increments the number of the copies of the task that are currently running
and finds the task attempt number (as the size of taskAttempts entries for the task index).
If the task serialization fails, you should see the following ERROR message in the logs:
resourceOffer aborts the TaskSet with the following message and reports a
TaskNotSerializableException .
resourceOffer checks the size of the serialized task. If it is greater than 100 kB, you should see the following WARN message in the logs:
WARN Stage [id] contains a task of very large size ([size] KB). The maximum recommended task size is 100 KB.
Note The size of the serializable task, i.e. 100 kB, is not configurable.
If however the serialization went well and the size is fine too, resourceOffer registers the
task as running.
For example:
dequeueTask tries to find the highest task index (meeting localization requirements) using tasks (indices) registered for execution on the execId executor. If a task is found, dequeueTask returns its index, PROCESS_LOCAL task locality and the speculative marker disabled.
dequeueTask then goes over all the possible task localities and checks what locality is allowed given the input maxLocality .
dequeueTask checks out NODE_LOCAL , NO_PREF , RACK_LOCAL and ANY in that order.
For NODE_LOCAL dequeueTask tries to find the highest task index (meeting localization requirements) using tasks (indices) registered for execution on host host and if found returns its index, NODE_LOCAL task locality and the speculative marker disabled.
For NO_PREF dequeueTask tries to find the highest task index (meeting localization requirements) using pendingTasksWithNoPrefs internal registry and if found returns its index, PROCESS_LOCAL task locality and the speculative marker disabled.
For RACK_LOCAL dequeueTask finds the rack for the input host and if available tries to find the highest task index (meeting localization requirements) using tasks (indices) registered for execution on the rack. If a task is found, dequeueTask returns its index, RACK_LOCAL task locality and the speculative marker disabled.
For ANY dequeueTask tries to find the highest task index (meeting localization requirements)
using allPendingTasks internal registry and if found returns its index, ANY task locality and
the speculative marker disabled.
The speculative marker is enabled for a task only when dequeueTask did not
Note manage to find a task for the available task localities and did find a speculative
task.
dequeueTaskFromList(
execId: String,
host: String,
list: ArrayBuffer[Int]): Option[Int]
dequeueTaskFromList takes task indices from the input list backwards (from the last to the
first entry). For every index dequeueTaskFromList checks if it is not blacklisted on the input
execId executor and host and, if not, checks that the task is not already running (no copies running) and has not finished successfully yet.
If dequeueTaskFromList has checked all the indices and no index has passed the checks,
dequeueTaskFromList returns None (to indicate that no index has met the requirements).
getPendingTasksForHost finds pending tasks (indices) registered for execution on the input host .
getPendingTasksForRack finds pending tasks (indices) registered for execution on the input rack .
Caution FIXME
TaskSetManager keeps track of the tasks pending execution per executor, host, rack or with
no locality preferences.
Events
Once a task has finished, TaskSetManager informs DAGScheduler.
Caution FIXME
handleSuccessfulTask records the tid task as finished, notifies the DAGScheduler that the task has ended and attempts to mark the TaskSet finished.
Internally, handleSuccessfulTask finds TaskInfo (in taskInfos internal registry) and marks it
as FINISHED .
handleSuccessfulTask notifies DAGScheduler that tid task ended successfully (with the
Task object from tasks internal registry and the result as Success ).
At this point, handleSuccessfulTask finds the other running task attempts of tid task and
requests SchedulerBackend to kill them (since they are no longer necessary now when at
least one task attempt has completed successfully). You should see the following INFO
message in the logs:
tid task is marked as successful. If the number of tasks that have finished successfully is
exactly the number of the tasks to execute (in the TaskSet ), the TaskSetManager becomes
a zombie.
If tid task was already recorded as successful, you should merely see the following INFO
message in the logs:
maybeFinishTaskSet(): Unit
maybeFinishTaskSet notifies TaskSchedulerImpl that a TaskSet has finished when there are no running tasks and the TaskSetManager is in zombie state.
Caution FIXME
Up to spark.task.maxFailures attempts
In the following example, you are going to execute a job with two partitions and keep one
failing at all times (by throwing an exception). The aim is to learn the behavior of retrying
task execution in a stage in TaskSet. You will only look at a single task execution, namely
0.0 .
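A sketch of such a job (the exception is intentional; note that with a plain local master a task is not retried, so run it against a cluster or local-cluster master to watch the retries):
val rdd = sc.parallelize(1 to 2, 2)  // two partitions, hence two tasks
rdd.map { n =>
  if (n == 1) throw new RuntimeException("boom")  // the task for this partition always fails
  n
}.count()
// The failing task is retried up to spark.task.maxFailures times, after which
// the TaskSet is aborted and the job fails with the last failure reason.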
Zombie state
A TaskSetManager is in zombie state when all tasks in a taskset have completed
successfully (regardless of the number of task attempts), or if the taskset has been aborted.
While in zombie state, a TaskSetManager can launch no new tasks and responds with no
TaskDescription to resourceOffers.
A TaskSetManager remains in the zombie state until all tasks have finished running, i.e. to
continue to track and account for the running tasks.
TaskSchedulerImpl
Acceptable number of task failures, i.e. how many times a single task can fail before an
entire TaskSet gets aborted.
(optional) BlacklistTracker
TaskSetManager requests the current epoch from MapOutputTracker and sets it on all tasks
in the taskset.
TaskSetManager adds the tasks as pending execution (in reverse order, from the highest partition index to the lowest).
Caution FIXME: Why is reverse order important? The code says it's to execute tasks with low indices first.
handleFailedTask(
tid: Long,
state: TaskState,
reason: TaskFailedReason): Unit
handleFailedTask finds TaskInfo of tid task in taskInfos internal registry and simply quits if the task has already failed or was killed.
handleFailedTask unregisters tid task from the internal registry of running tasks and then marks the TaskInfo as finished (with the input state ).
handleFailedTask decrements the number of the running copies of tid task (in copiesRunning internal registry).
Lost task [id] in stage [taskSetId] (TID [tid], [host], executor [executorId]): [reason]
handleFailedTask then calculates the failure exception per the input reason (see the sections below):
FetchFailed
ExceptionFailure
ExecutorLostFailure
other TaskFailedReasons
handleFailedTask informs DAGScheduler that tid task has ended (passing on the Task
instance from tasks internal registry, the input reason , null result, calculated
accumUpdates per failure, and the TaskInfo).
If tid task has already been marked as completed (in successful internal registry) you
should see the following INFO message in the logs:
INFO Task [id] in stage [id] (TID [tid]) failed, but the task
will not be re-executed (either because the task failed with a
shuffle data fetch failure, so the previous stage needs to be
re-run, or because a different copy of the task has already
succeeded).
Read up on Speculative Execution of Tasks to find out why a single task could be
Tip
executed multiple times.
If the TaskSetManager is not a zombie and the task failed reason should be counted
towards the maximum number of times the task is allowed to fail before the stage is aborted
(i.e. TaskFailedReason.countTowardsTaskFailures attribute is enabled), the optional
TaskSetBlacklist is notified (passing on the host, executor and the task’s index).
handleFailedTask then increments the number of failures for tid task and checks if the
number of failures is equal or greater than the allowed number of task failures per TaskSet
(as defined when the TaskSetManager was created).
If so, i.e. the number of task failures of tid task reached the maximum value, you should
see the following ERROR message in the logs:
ERROR Task [id] in stage [id] failed [maxTaskFailures] times; aborting job
And handleFailedTask aborts the TaskSet with the following message and then quits:
Task [index] in stage [id] failed [maxTaskFailures] times, most recent failure: [failureReason]
In the end (except when the number of failures of tid task grew beyond the acceptable
number), handleFailedTask attempts to mark the TaskSet as finished.
FetchFailed TaskFailedReason
For FetchFailed you should see the following WARN message in the logs:
WARN Lost task [id] in stage [id] (TID [tid], [host], executor [id]): [reason]
Unless tid has already been marked as successful (in successful internal registry), it
becomes so and the number of successful tasks in TaskSet gets increased.
ExceptionFailure TaskFailedReason
ERROR Task [id] in stage [id] (TID [tid]) had a not serializable result: [description]; not retrying
For full printout of the ExceptionFailure , the following WARN appears in the logs:
WARN Lost task [id] in stage [id] (TID [tid], [host], executor [id]): [reason]
INFO Lost task [id] in stage [id] (TID [tid]) on [host], executor [id]: [className] ([description]) [duplicate [dupCount]]
ExecutorLostFailure TaskFailedReason
For ExecutorLostFailure if not exitCausedByApp , you should see the following INFO in the
logs:
INFO Task [tid] failed because while it was being computed, its executor exited for a
reason unrelated to the task. Not counting this failure towards the maximum number of
failures for the task.
Other TaskFailedReasons
For the other TaskFailedReasons, you should see the following WARN message in the logs:
WARN Lost task [id] in stage [id] (TID [tid], [host], executor [id]): [reason]
addPendingTask registers an index task in the pending-task lists that the task should be eventually scheduled to (per its preferred locations).
Internally, addPendingTask takes the preferred locations of the task (given index ) and
registers the task in the internal pending-task registries for every preferred location:
pendingTasksForRack for the racks from TaskSchedulerImpl per the host (of a
TaskLocation).
INFO Pending task [index] has a cached location at [host] , where there are executors [executors]
DEBUG Pending task [index] has a cached location at [host] , but there are no executors alive there.
executorLost re-enqueues all the ShuffleMapTasks that have completed already on the lost
executor (when external shuffle service is not in use) and reports all currently-running tasks
on the lost executor as failed.
Internally, executorLost first checks whether the tasks are ShuffleMapTasks and whether
an external shuffle service is enabled (that could serve the map shuffle outputs in case of
failure).
Note: executorLost checks out the first task in tasks as it is assumed the others belong to the same stage. If the task is a ShuffleMapTask, the entire TaskSet is for a ShuffleMapStage.
If executorLost is indeed due to an executor lost that executed tasks for a ShuffleMapStage
(that this TaskSetManager manages) and no external shuffle server is enabled,
executorLost finds all the tasks that were scheduled on this lost executor and marks the
Note: executorLost records every such task on the lost executor in successful (as false ) and decrements copiesRunning and tasksSuccessful for every task.
executorLost registers every task as pending execution (per preferred locations) and
informs DAGScheduler that the tasks (on the lost executor) have ended (with Resubmitted
reason).
Regardless of whether this TaskSetManager manages ShuffleMapTasks or not (it could also
manage ResultTasks) and whether the external shuffle service is used or not, executorLost
finds all currently-running tasks on this lost executor and reports them as failed (with the task
state FAILED ).
Note: executorLost finds out if the reason for the executor loss is the application's fault: it uses ExecutorExited 's exit status as the indicator, treats ExecutorKilled as not the application's fault, and treats any other reason as an application fault.
recomputeLocality(): Unit
recomputeLocality recomputes the locality levels of the TaskSetManager.
Caution FIXME: But why are the caches important (and have to be recomputed)?
recomputeLocality computes locality levels (for scheduled tasks) and saves the result in myLocalityLevels.
recomputeLocality computes localityWaits (by finding the locality wait for every locality level in myLocalityLevels).
In the end, recomputeLocality computes getLocalityIndex of the previous locality level and records it in currentLocalityIndex.
computeValidLocalityLevels(): Array[TaskLocality]
computeValidLocalityLevels computes valid locality levels for tasks that were registered in the corresponding pending-task registries:
NODE_LOCAL: pendingTasksForHost
NO_PREF: pendingTasksWithNoPrefs
RACK_LOCAL: pendingTasksForRack
computeValidLocalityLevels computes the locality wait for the corresponding TaskLocality and proceeds with it only when the locality wait is not 0 .
In the end, you should see the following DEBUG message in the logs:
NODE_LOCAL: spark.locality.wait.node
RACK_LOCAL: spark.locality.wait.rack
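These per-level waits are ordinary Spark properties, with spark.locality.wait as the common default, e.g.:
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.locality.wait", "3s")       // default wait used by the levels below
  .set("spark.locality.wait.node", "3s")
  .set("spark.locality.wait.rack", "3s")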
canFetchMoreResults checks whether there is enough memory to fetch the result of a task.
Internally, canFetchMoreResults increments the internal totalResultSize with the input size
(which is the size of the result of a task) and increments the internal calculatedTasks.
If the current internal totalResultSize is bigger than the maximum result size,
canFetchMoreResults prints out the following ERROR message to the logs:
Settings
Schedulable
Schedulable
Schedulable is a contract of schedulable entities.
The available Schedulable entities are Pool and TaskSetManager.
Schedulable Contract
Every Schedulable follows the following contract:
It has a name .
name: String
parent: Pool
schedulingMode: SchedulingMode
weight: Int
minShare: Int
runningTasks: Int
priority: Int
stageId: Int
schedulableQueue: ConcurrentLinkedQueue[Schedulable]
addSchedulable(schedulable: Schedulable): Unit
removeSchedulable(schedulable: Schedulable): Unit
checkSpeculatableTasks(): Boolean
getSortedTaskSetQueue
getSortedTaskSetQueue: ArrayBuffer[TaskSetManager]
schedulableQueue
schedulableQueue: ConcurrentLinkedQueue[Schedulable]
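Put together, the contract can be sketched as a Scala trait (assembled from the members listed above; the actual org.apache.spark.scheduler.Schedulable in the Spark sources may differ in details such as method parameters):

trait Schedulable {
  def name: String
  def parent: Pool
  def schedulingMode: SchedulingMode
  def weight: Int
  def minShare: Int
  def runningTasks: Int
  def priority: Int
  def stageId: Int
  def schedulableQueue: ConcurrentLinkedQueue[Schedulable]
  def addSchedulable(schedulable: Schedulable): Unit
  def removeSchedulable(schedulable: Schedulable): Unit
  def checkSpeculatableTasks(): Boolean
  def getSortedTaskSetQueue: ArrayBuffer[TaskSetManager]
}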
Schedulable Pool
Pool is a Schedulable entity that represents a tree of TaskSetManagers, i.e. it contains a collection of TaskSetManagers or the Pools thereof.
A Pool has a mandatory name, a scheduling mode, initial minShare and weight that are
defined when it is created.
Note: The TaskScheduler Contract and Schedulable Contract both require that their entities have rootPool of type Pool.
increaseRunningTasks Method
Caution FIXME
decreaseRunningTasks Method
Caution FIXME
taskSetSchedulingAlgorithm Attribute
Using the scheduling mode (given when a Pool object is created), Pool selects the SchedulingAlgorithm and sets taskSetSchedulingAlgorithm: FIFOSchedulingAlgorithm for FIFO and FairSchedulingAlgorithm for FAIR scheduling mode.
Schedulables by Name — schedulableNameToSchedulable Registry
schedulableNameToSchedulable is a registry of Schedulables by their names. It is used in SparkContext.getPoolForName.
addSchedulable Method
addSchedulable registers a Schedulable (by its name) in schedulableNameToSchedulable.
removeSchedulable Method
removeSchedulable removes a Schedulable (by its name) from schedulableNameToSchedulable.
SchedulingAlgorithm
SchedulingAlgorithm is the interface for a sorting algorithm to sort Schedulables.
FIFOSchedulingAlgorithm
FIFOSchedulingAlgorithm is a scheduling algorithm that compares Schedulables by their priority first and, when equal, by their stageId.
Caution FIXME A picture is worth a thousand words. How to picture the algorithm?
FairSchedulingAlgorithm
FairSchedulingAlgorithm is a scheduling algorithm that compares Schedulables by their minShare, the number of running tasks, and weight.
Figure 1. FairSchedulingAlgorithm
For each input Schedulable, minShareRatio is computed as runningTasks divided by minShare (using at least 1 for minShare) while taskToWeightRatio is runningTasks divided by weight.
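A minimal sketch of that arithmetic (illustrative only, not the FairSchedulingAlgorithm code itself):

def fairRatios(runningTasks: Int, minShare: Int, weight: Int): (Double, Double) = {
  val minShareRatio = runningTasks.toDouble / math.max(minShare, 1.0) // minShare of at least 1
  val taskToWeightRatio = runningTasks.toDouble / weight
  (minShareRatio, taskToWeightRatio)
}

Roughly speaking, Schedulables still below their minShare are scheduled first, and otherwise the one with the smaller ratio wins.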
Schedulable Builders
SchedulableBuilder is a contract of schedulable builders that operate on a pool of TaskSetManagers (the rootPool).
Schedulable builders can build pools and add new Schedulable entities to the pool.
There are two SchedulableBuilders: FIFOSchedulableBuilder and FairSchedulableBuilder.
SchedulableBuilder Contract
Every SchedulableBuilder provides the following services:
rootPool: Pool
buildPools(): Unit
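As a Scala trait, the contract looks roughly as follows (rootPool and buildPools from the list above, plus addTaskSetManager that is described for the concrete builders below; see org.apache.spark.scheduler.SchedulableBuilder for the exact definition):

trait SchedulableBuilder {
  def rootPool: Pool
  def buildPools(): Unit
  def addTaskSetManager(manager: Schedulable, properties: java.util.Properties): Unit
}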
FIFOSchedulableBuilder - SchedulableBuilder for FIFO Scheduling Mode
FIFOSchedulableBuilder is a SchedulableBuilder that holds a single Pool (that is given when
FIFOSchedulableBuilder is created).
addTaskSetManager passes the input Schedulable to the one and only rootPool Pool (using its addSchedulable).
FairSchedulableBuilder - SchedulableBuilder for FAIR Scheduling Mode
FairSchedulableBuilder is a SchedulableBuilder with the pools configured in an optional allocations configuration file (given by spark.scheduler.allocation.file).
Refer to Logging.
buildPools
buildPools builds the rootPool based on the allocations configuration file (spark.scheduler.allocation.file, if defined).
addTaskSetManager
addTaskSetManager looks up the default pool (using Pool.getSchedulableByName).
If a pool with the requested name is not available, a new Pool is registered with the pool name, FIFO scheduling mode, minimum share 0, and weight 1.
After the new pool was registered, you should see the following INFO message in the logs:
The manager schedulable is registered to the pool (either the one that already existed or
was created just now).
spark.scheduler.pool Property
SparkContext.setLocalProperty allows for setting properties per thread to group jobs in
logical groups. This mechanism is used by FairSchedulableBuilder to watch for
spark.scheduler.pool property to group jobs from threads and submit them to a non-default
pool.
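For example (a minimal sketch run against a SparkContext sc, e.g. in spark-shell; the pool name "production" assumes a pool of that name, such as the one in the allocations file below):

// Jobs submitted from this thread go to the FAIR "production" pool
sc.setLocalProperty("spark.scheduler.pool", "production")

// ...trigger jobs, e.g. RDD actions...

// Setting the property to null reverts the thread to the default pool
sc.setLocalProperty("spark.scheduler.pool", null)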
<?xml version="1.0"?>
<allocations>
<pool name="production">
<schedulingMode>FAIR</schedulingMode>
<weight>1</weight>
<minShare>2</minShare>
</pool>
<pool name="test">
<schedulingMode>FIFO</schedulingMode>
<weight>2</weight>
<minShare>3</minShare>
</pool>
</allocations>
Tip: The top-level element's name, allocations, can be anything. Spark does not insist on allocations and accepts any name.
buildPools also registers the default pool with FIFO scheduling mode, minimum share 0, and weight 1.
buildFairSchedulerPool(is: InputStream)
For each pool element, it reads its name (from the name attribute) and assumes the default pool configuration to be FIFO scheduling mode, minimum share 0, and weight 1 (unless overridden later).
If schedulingMode element exists and is not empty for the pool it becomes the current pool’s
scheduling mode. It is case sensitive, i.e. with all uppercase letters.
If minShare element exists and is not empty for the pool it becomes the current pool’s
minShare . It must be an integer number.
If weight element exists and is not empty for the pool it becomes the current pool’s
weight . It must be an integer number.
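The following is a minimal sketch (not FairSchedulableBuilder's own code) of how pool definitions with these defaults could be read from such an allocations file using scala.xml (requires the scala-xml module):

import java.io.InputStream
import scala.xml.XML

def readPools(is: InputStream): Seq[(String, String, Int, Int)] = {
  val allocations = XML.load(is)
  (allocations \\ "pool").map { pool =>
    val name = (pool \ "@name").text
    // missing or empty elements fall back to the defaults: FIFO, minShare 0, weight 1
    def elem(label: String): Option[String] =
      Some((pool \ label).text.trim).filter(_.nonEmpty)
    val schedulingMode = elem("schedulingMode").getOrElse("FIFO")
    val minShare = elem("minShare").map(_.toInt).getOrElse(0)
    val weight = elem("weight").map(_.toInt).getOrElse(1)
    (name, schedulingMode, minShare, weight)
  }
}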
If all is successful, you should see the following INFO message in the logs:
Settings
spark.scheduler.allocation.file
spark.scheduler.allocation.file is the file path of an optional scheduler configuration file (the allocations file described above).
Scheduling Mode — spark.scheduler.mode Spark Property
FIFO with no pools but a single top-level unnamed pool with elements being
TaskSetManager objects; lower priority gets Schedulable sooner or earlier stage wins.
FAIR with a hierarchy of Schedulable (sub)pools with the rootPool at the top.
Note: Out of three possible SchedulingMode policies only FIFO and FAIR modes are supported by TaskSchedulerImpl.
Note: After the root pool is initialized, the scheduling mode is no longer relevant (since the Schedulable that represents the root pool is fully set up). The root pool is later used when TaskSchedulerImpl submits tasks (as TaskSets) for execution.
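For example (a sketch), the scheduling mode is picked up from spark.scheduler.mode when the SparkContext is created:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("fair-scheduling-demo")
  .setMaster("local[*]")
  .set("spark.scheduler.mode", "FAIR") // FIFO is the default
val sc = new SparkContext(conf)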
TaskInfo
TaskInfo is information about a running task attempt inside a TaskSet.
TaskInfo is created when TaskSetManager dequeues a task for execution (given a resource offer) and records the task as running.
TaskInfo takes the following when created:
Task ID
Index of the task within its TaskSet (that may not necessarily be the same as the ID of the RDD partition that the task is computing)
Task attempt ID
Executor that has been offered (as a resource) to run the task
markFinished marks TaskInfo as failed when the input state is FAILED or as killed when the input state is KILLED.
TaskDescription — Metadata of Single Task
TaskDescription is the metadata of a single task, with the following properties:
Task ID
Executor ID
Task name
Properties
TaskDescription is created when TaskSetManager is requested to find a task ready for execution (given a resource offer). A TaskDescription can be encoded and decoded (as a ByteBuffer).
The textual representation of a TaskDescription is TaskDescription(TID=[taskId], index=[index]).
decode …FIXME
encode …FIXME
TaskSchedulerImpl — Default TaskScheduler
TaskSchedulerImpl is the default TaskScheduler.
TaskSchedulerImpl can schedule tasks for multiple types of cluster managers by means of
SchedulerBackends.
TaskSchedulerImpl can track racks per host and port (that however is only used with Hadoop YARN).
TaskSchedulerImpl uses the following internal registries:
backend: SchedulerBackend. Set when TaskSchedulerImpl is initialized.
dagScheduler: DAGScheduler. Used when…FIXME
executorIdToTaskCount: Lookup table of the number of running tasks by executor.
hasLaunchedTask: Flag…FIXME. Used when…FIXME
rootPool: Schedulable Pool. Used when TaskSchedulerImpl …
schedulingMode: SchedulingMode. Used when TaskSchedulerImpl …
Refer to Logging.
applicationId(): String
nodeBlacklist Method
Caution FIXME
cleanupTaskState Method
Caution FIXME
newTaskId Method
Caution FIXME
getExecutorsAliveOnHost Method
Caution FIXME
isExecutorAlive Method
Caution FIXME
hasExecutorsAliveOnHost Method
Caution FIXME
hasHostAliveOnRack Method
Caution FIXME
executorLost Method
Caution FIXME
mapOutputTracker
Caution FIXME
starvationTimer
Caution FIXME
executorHeartbeatReceived Method
executorHeartbeatReceived(
execId: String,
accumUpdates: Array[(Long, Seq[AccumulatorV2[_, _]])],
blockManagerId: BlockManagerId): Boolean
executorHeartbeatReceived is…
Caution FIXME
handleSuccessfulTask Method
handleSuccessfulTask(
taskSetManager: TaskSetManager,
tid: Long,
taskResult: DirectTaskResult[_]): Unit
handleSuccessfulTask simply forwards the call to the input taskSetManager (passing tid
and taskResult ).
handleTaskGettingResult Method
applicationAttemptId Method
applicationAttemptId(): Option[String]
Caution FIXME
schedulableBuilder Attribute
schedulableBuilder is a SchedulableBuilder for the TaskSchedulerImpl .
It is set up when a TaskSchedulerImpl is initialized and can be one of two available builders: FIFOSchedulableBuilder (for FIFO scheduling mode) or FairSchedulableBuilder (for FAIR scheduling mode).
getRackForHost is a method to know about the racks per hosts and ports. By default, it
assumes that racks are unknown (i.e. the method returns None ).
TaskSchedulerImpl.removeExecutor to…FIXME
TaskSchedulerImpl takes the following when created:
SparkContext
optional BlacklistTracker
optional isLocal flag to differentiate between local and cluster run modes (defaults to false)
The scheduling mode is defined by spark.scheduler.mode (defaults to FIFO).
start(): Unit
statusUpdate finds the TaskSetManager for the input tid task (in taskIdToTaskSetManager).
When state is one of the finished states, i.e. FINISHED, FAILED, KILLED or LOST, statusUpdate cleans up the task state (cleanupTaskState) for the input tid.
statusUpdate requests TaskResultGetter to schedule an asynchronous task to deserialize the task result (and notify TaskSchedulerImpl back) for tid in FINISHED state, and to deserialize TaskFailedReason (and notify TaskSchedulerImpl back) for tid in the other finished states (i.e. FAILED, KILLED, LOST).
If a task is in LOST state, statusUpdate notifies DAGScheduler that the executor was lost
(with SlaveLost and the reason Task [tid] was lost, so marking the executor as lost as
well. ) and requests SchedulerBackend to revive offers.
In case the TaskSetManager for tid could not be found (in taskIdToTaskSetManager
registry), you should see the following ERROR message in the logs:
ERROR Ignoring update with state [state] for TID [tid] because its task set is gone (this is likely the result of receiving duplicate task finished status updates)
When TaskSchedulerImpl starts (in non-local run mode) with spark.speculation enabled, speculationScheduler is used to schedule checkSpeculatableTasks to execute periodically (every spark.speculation.interval).
checkSpeculatableTasks(): Unit
checkSpeculatableTasks requests rootPool to check for speculatable tasks (if they ran for more than 100 ms) and, if there are any, requests SchedulerBackend to revive offers.
removeExecutor removes the executorId executor from the following internal registries:
postStartHook is part of the TaskScheduler contract that waits until a scheduler backend is ready (using the internal blocking waitBackendReady).
stop(): Unit
stop() stops all the internal services, i.e. the task-scheduler-speculation executor service, SchedulerBackend, TaskResultGetter, and starvationTimer.
defaultParallelism(): Int
Note: Default level of parallelism is a hint for sizing jobs that SparkContext uses to create RDDs with the right number of partitions when not specified explicitly.
Figure 4. TaskSchedulerImpl.submitTasks
When executed, you should see the following INFO message in the logs:
submitTasks creates a TaskSetManager (for the input taskSet and acceptable number of
task failures).
submitTasks registers the TaskSetManager per stage and stage attempt id (in
taskSetsByStageIdAndAttempt).
Note The stage and the stage attempt id are attributes of a TaskSet.
Note submitTasks assumes that only one TaskSet can be active for a Stage .
If there is more than one active TaskSetManager for the stage, submitTasks throws an IllegalStateException with the message:
more than one active taskSet for stage [stage]: [TaskSet ids]
Note: The root pool can be a single flat linked queue (in FIFO scheduling mode) or a hierarchy of pools of Schedulables (in FAIR scheduling mode).
submitTasks makes sure that the requested resources, i.e. CPU and memory, are eventually assigned to the Spark application (in non-local run modes), as follows.
When submitTasks is called the very first time ( hasReceivedTask is false ) in cluster mode
only (i.e. isLocal of the TaskSchedulerImpl is false ), starvationTimer is scheduled to
execute after spark.starvation.timeout to ensure that the requested resources, i.e. CPUs and
memory, were assigned by a cluster manager.
Every time the starvation timer thread is executed and hasLaunchedTask flag is false , the
following WARN message is printed out to the logs:
WARN Initial job has not accepted any resources; check your cluster UI to ensure that
workers are registered and have sufficient resources
Otherwise, when the hasLaunchedTask flag is true the timer thread cancels itself.
In the end, submitTasks requests the current SchedulerBackend to revive offers (available
as backend).
handleFailedTask(
taskSetManager: TaskSetManager,
tid: Long,
taskState: TaskState,
reason: TaskFailedReason): Unit
handleFailedTask notifies taskSetManager that the tid task has failed and, only when taskSetManager is not in zombie state and tid is not in KILLED state, requests SchedulerBackend to revive offers.
taskSetFinished Method
taskSetFinished looks the TaskSet up (in the taskSetsByStageIdAndAttempt registry) and removes the stage attempt from it, possibly removing the entire stage record from the taskSetsByStageIdAndAttempt registry completely (if there are no other attempts registered).
INFO Removed TaskSet [id], whose tasks have all completed, from pool [name]
waitBackendReady(): Unit
resourceOffers takes the resource offers (as WorkerOffers) and generates a collection of tasks (as TaskDescriptions) to launch (given the available resources).
resourceOffers then randomly shuffles offers (to evenly distribute tasks across executors
and avoid over-utilizing some executors) and initializes the local data structures tasks and
availableCpus (as shown in the figure below).
For every TaskSetManager (in scheduling order), you should see the following DEBUG
message in the logs:
Only if a new executor was added, resourceOffers notifies every TaskSetManager about the
change (to recompute locality preferences).
resourceOffers then takes every TaskSetManager (in scheduling order) and offers them
each node in increasing order of locality levels (per TaskSetManager’s valid locality levels).
For every TaskSetManager and the TaskSetManager 's valid locality level, resourceOffers
tries to find tasks to schedule (on executors) as long as the TaskSetManager manages to
launch a task (given the locality level).
When resourceOffers managed to launch a task, the internal hasLaunchedTask flag gets
enabled (that effectively means what the name says "there were executors and I managed
to launch a task").
resourceOfferSingleTaskSet(
taskSet: TaskSetManager,
maxLocality: TaskLocality,
shuffledOffers: Seq[WorkerOffer],
availableCpus: Array[Int],
tasks: Seq[ArrayBuffer[TaskDescription]]): Boolean
For every WorkerOffer (and only if the number of available CPU cores, using the input availableCpus, is at least spark.task.cpus), resourceOfferSingleTaskSet requests TaskSetManager (as the input taskSet) to find a Task to execute (given the resource offer) (as an executor, a host, and the input maxLocality).
When a task is found, resourceOfferSingleTaskSet registers it in the following registries:
taskIdToTaskSetManager
taskIdToExecutorId
executorIdToRunningTaskIds
resourceOfferSingleTaskSet also decreases the number of available CPU cores (in availableCpus, for the WorkerOffer).
If there is a TaskNotSerializableException , you should see the following ERROR in the logs:
ERROR Resource offer failed, task set [name] was not serializable
TaskLocality represents a task locality preference, from the most to the least localized:
1. PROCESS_LOCAL
2. NODE_LOCAL
3. NO_PREF
4. RACK_LOCAL
5. ANY
WorkerOffer represents a resource offer with free CPU cores available on an executorId
executor on a host .
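A WorkerOffer is essentially a simple value object (simplified; the exact case class in the Spark sources may carry more fields in newer versions):

case class WorkerOffer(executorId: String, host: String, cores: Int)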
Settings
Table 2. Spark Properties
spark.task.maxFailures (default: 4 in cluster mode, 1 in local mode, maxFailures in local-with-retries): The number of individual task failures before giving up on the entire TaskSet and the job afterwards.
Speculative Execution of Tasks
When enabled, you should see the following INFO message in the logs:
The job with speculatable tasks should finish while speculative tasks are running, and it will
leave these tasks running - no KILL command yet.
It uses checkSpeculatableTasks method that asks rootPool to check for speculatable tasks.
If there are any, SchedulerBackend is called for reviveOffers.
Caution: FIXME How does Spark handle repeated results of speculative tasks since there are copies launched?
Settings
spark.speculation.interval (default: 100ms): The time interval to use before checking for speculative tasks.
spark.speculation.multiplier (default: 1.5)
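For example (a sketch; the values shown are the defaults above), speculative execution is enabled and tuned through the SparkConf:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.speculation", "true")            // off by default
  .set("spark.speculation.interval", "100ms")  // how often to check for speculatable tasks
  .set("spark.speculation.multiplier", "1.5")  // how many times slower than the median a task must be to be considered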
TaskResultGetter
TaskResultGetter is a helper class of TaskSchedulerImpl for asynchronous deserialization
of task results of tasks that have finished successfully (possibly fetching remote blocks) or
the failures for failed tasks.
Tip Consult Task States in Tasks to learn about the different task states.
Refer to Logging.
getTaskResultExecutor: ExecutorService
stop Method
stop(): Unit
serializer Attribute
serializer: ThreadLocal[SerializerInstance]
When created for a new thread, serializer is initialized with a new instance of Serializer
(using SparkEnv.closureSerializer).
taskResultSerializer Attribute
taskResultSerializer: ThreadLocal[SerializerInstance]
When created for a new thread, taskResultSerializer is initialized with a new instance of
Serializer (using SparkEnv.serializer).
enqueueSuccessfulTask(
taskSetManager: TaskSetManager,
tid: Long,
serializedData: ByteBuffer): Unit
enqueueSuccessfulTask handles the result of a task that has finished successfully (enqueueFailedTask handles tasks that have not).
Internally, the enqueued task first deserializes serializedData to a TaskResult (using the
internal thread-local serializer).
For a DirectTaskResult, the task checks the available memory for the task result and, when
the size overflows spark.driver.maxResultSize, it simply returns.
Otherwise, when there is enough memory to hold the task result, it deserializes the
DirectTaskResult (using the internal thread-local taskResultSerializer).
For an IndirectTaskResult, the task checks the available memory for the task result and, when
the size could overflow the maximum result size, it removes the block and simply returns.
Otherwise, when there is enough memory to hold the task result, you should see the
following DEBUG message in the logs:
The task notifies TaskSchedulerImpl that it is about to fetch a remote block for a task result.
It then gets the block from remote block managers (as serialized bytes).
When the block could not be fetched, TaskSchedulerImpl is informed (with TaskResultLost
task failure reason) and the task simply returns.
The task result (as a serialized byte buffer) is then deserialized to a DirectTaskResult (using the internal thread-local serializer) and deserialized again using the internal thread-local taskResultSerializer (just like for the DirectTaskResult case). In the end, the block is removed from BlockManagerMaster.
enqueueFailedTask(
taskSetManager: TaskSetManager,
tid: Long,
taskState: TaskState.TaskState,
serializedData: ByteBuffer): Unit
Any ClassNotFoundException leads to the following ERROR message in the logs (without
breaking the flow of enqueueFailedTask ):
Settings
Table 1. Spark Properties
spark.resultGetter.threads (default: 4): The number of threads for TaskResultGetter.
TaskContext
TaskContext is the base for contextual information about a task.
You can access the active TaskContext instance using TaskContext.get method.
import org.apache.spark.TaskContext
val ctx = TaskContext.get
TaskContext allows for registering task listeners and accessing local properties that were set by the driver.
addTaskCompletionListener: Registers a TaskCompletionListener. Used when…
getLocalProperty: Accesses local properties set by the driver using SparkContext.setLocalProperty.
getMetricsSources: Gives all the metrics sources by sourceName which are associated with the instance that runs the task.
unset Method
Caution FIXME
setTaskContext Method
Caution FIXME
get(): TaskContext
get method returns the TaskContext instance for an active task (as a TaskContextImpl).
There can only be one instance and tasks can use the object to access contextual
information about themselves.
scala> rdd.partitions.size
res0: Int = 3
rdd.foreach { n =>
import org.apache.spark.TaskContext
val tc = TaskContext.get
val msg = s"""|-------------------
|partitionId: ${tc.partitionId}
|stageId: ${tc.stageId}
|attemptNum: ${tc.attemptNumber}
|taskAttemptId: ${tc.taskAttemptId}
|-------------------""".stripMargin
println(msg)
}
addTaskCompletionListener Method
import org.apache.spark.TaskContext
val printTaskInfo = (tc: TaskContext) => {
val msg = s"""|-------------------
|partitionId: ${tc.partitionId}
|stageId: ${tc.stageId}
|attemptNum: ${tc.attemptNumber}
|taskAttemptId: ${tc.taskAttemptId}
|-------------------""".stripMargin
println(msg)
}
rdd.foreachPartition { _ =>
val tc = TaskContext.get
tc.addTaskCompletionListener(printTaskInfo)
}
addTaskFailureListener Method
addTaskFailureListener registers a TaskFailureListener that gets executed on task failure only. It can be executed multiple times since a task can be re-attempted when it fails.
import org.apache.spark.TaskContext
val printTaskErrorInfo = (tc: TaskContext, error: Throwable) => {
val msg = s"""|-------------------
|partitionId: ${tc.partitionId}
|stageId: ${tc.stageId}
|attemptNum: ${tc.attemptNumber}
|taskAttemptId: ${tc.taskAttemptId}
|error: ${error.toString}
|-------------------""".stripMargin
println(msg)
}
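The listener function can then be registered from within a task (a sketch mirroring the addTaskCompletionListener example above; it only fires when a task actually fails):

rdd.foreachPartition { _ =>
  val tc = TaskContext.get
  tc.addTaskFailureListener(printTaskErrorInfo)
}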
getPartitionId(): Int
getPartitionId gets the active TaskContext and returns partitionId or 0 (if TaskContext
not available).
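For example (a sketch for spark-shell, where sc is the SparkContext):

import org.apache.spark.TaskContext

// inside tasks getPartitionId gives the id of the partition being computed;
// on the driver (no active TaskContext) it gives 0
val partitionIds = sc.parallelize(1 to 9, numSlices = 3)
  .map(_ => TaskContext.getPartitionId())
  .distinct
  .collect // e.g. Array(0, 1, 2), order may vary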
TaskContextImpl
TaskContextImpl is the one and only TaskContext.
Caution FIXME
TaskContextImpl takes the following when created:
stage
partition
task attempt
attempt number
runningLocally = false
taskMemoryManager
taskMetrics Property
Caution FIXME
markTaskCompleted Method
Caution FIXME
markTaskFailed Method
Caution FIXME
Caution FIXME
markInterrupted Method
Caution FIXME
TaskResults — DirectTaskResult and IndirectTaskResult
TaskResult models a task result. It has exactly two concrete implementations:
1. DirectTaskResult is the TaskResult to be serialized and sent over the wire to the driver together with the result bytes and accumulators.
2. IndirectTaskResult is a TaskResult that merely points to a block (by BlockId) in a BlockManager where the actual task result is stored, together with the result size.
The decision of the concrete TaskResult is made when a TaskRunner finishes running a
task and checks the size of the result.
DirectTaskResult[T](
var valueBytes: ByteBuffer,
var accumUpdates: Seq[AccumulatorV2[_, _]])
extends TaskResult[T] with Externalizable
DirectTaskResult is the TaskResult of running a task (that is later returned serialized to the
driver) when the size of the task’s result is smaller than spark.driver.maxResultSize and
spark.task.maxDirectResultSize (or spark.rpc.message.maxSize whatever is smaller).
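For comparison, the indirect variant is roughly a pointer to where the (too large) result was stored by the executor's BlockManager (simplified):

case class IndirectTaskResult[T](blockId: BlockId, size: Int)
  extends TaskResult[T] with Serializable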
TaskMemoryManager — Memory Manager of Single Task
TaskMemoryManager manages the memory allocated to an individual task.
The number of bits to encode offsets in data pages (aka OFFSET_BITS ) is 51 (i.e. 64
bits - PAGE_NUMBER_BITS )
The number of entries in the page table and allocated pages (aka PAGE_TABLE_SIZE ) is
8192 (i.e. 1 << PAGE_NUMBER_BITS )
The maximum page size (aka MAXIMUM_PAGE_SIZE_BYTES ) is about 16 GiB (i.e. ((1L << 31) - 1) * 8L bytes)
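A sketch of the address arithmetic these constants imply (assuming PAGE_NUMBER_BITS is 13, since 1 << 13 = 8192; illustrative only, not the TaskMemoryManager code itself):

val PAGE_NUMBER_BITS = 13
val OFFSET_BITS = 64 - PAGE_NUMBER_BITS // 51
val MASK_OFFSET = (1L << OFFSET_BITS) - 1

// pack a page number and an in-page offset into one 64-bit "address"
def encodePageNumberAndOffset(pageNumber: Int, offsetInPage: Long): Long =
  (pageNumber.toLong << OFFSET_BITS) | (offsetInPage & MASK_OFFSET)

def decodePageNumber(address: Long): Int = (address >>> OFFSET_BITS).toInt
def decodeOffset(address: Long): Long = address & MASK_OFFSET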
When created, TaskMemoryManager is given a MemoryManager that is used for the following:
cleanUpAllAllocatedMemory
releaseExecutionMemory
showMemoryUsage
pageSizeBytes
freePage
getMemoryConsumptionForThisTask
consumers: the registered MemoryConsumers
Refer to Logging.
cleanUpAllAllocatedMemory Method
long cleanUpAllAllocatedMemory()
Caution FIXME
All recorded consumers are queried for the size of used memory. If the memory used is
greater than 0, the following WARN message is printed out to the logs:
acquireExecutionMemory allocates up to the required size of memory for a MemoryConsumer. When no memory could be allocated, it calls spill on every consumer, itself including. Finally, acquireExecutionMemory returns the allocated memory.
Internally, acquireExecutionMemory requests execution memory from the MemoryManager (for the required size, the taskAttemptId, and the memory mode).
Caution FIXME
When the memory obtained is less than requested (by required ), acquireExecutionMemory
requests all consumers to release memory (by spilling it to disk).
You may see the following DEBUG message when spill released some memory:
It does the memory acquisition until it gets enough memory or there are no more consumers
to request spill from.
You may also see the following ERROR message in the logs when there is an error while requesting spill, followed by an OutOfMemoryError.
If the earlier spill on the consumers did not work out and there is still memory to be
acquired, acquireExecutionMemory requests the input consumer to spill memory to disk (that
in fact requested more memory!)
If the consumer releases some memory, you should see the following DEBUG message in
the logs:
It then acquires execution memory (for the input size and consumer ).
With the execution memory acquired, it finds the smallest unallocated page index and
records the page number (using allocatedPages registry).
When successful, MemoryBlock gets assigned pageNumber and it gets added to the internal
pageTable registry.
And acquiredButNotUsed gets acquired memory space with the pageNumber cleared in
allocatedPages (i.e. the index for pageNumber gets false ).
Caution FIXME Why is there a hope for being able to allocate a page?
TaskMemoryManager takes the following when created:
MemoryManager
Task ID
releaseExecutionMemory Method
releaseExecutionMemory …FIXME
getMemoryConsumptionForThisTask Method
long getMemoryConsumptionForThisTask()
getMemoryConsumptionForThisTask …FIXME
showMemoryUsage Method
void showMemoryUsage()
showMemoryUsage …FIXME
pageSizeBytes Method
long pageSizeBytes()
getPage …FIXME
MemoryConsumer
MemoryConsumer is the contract for memory consumers of TaskMemoryManager with support
for spilling.
spill Method
Caution FIXME
Internally, it decrements used registry by the size of page and frees the page.
Internally, it allocates a page for the requested size . The size is recorded in the internal
used counter.
However, if it was not possible to allocate the size memory, it shows the current memory usage and an OutOfMemoryError is thrown.
acquireMemory acquires execution memory of size size. The memory is recorded in used
registry.
throwOom …FIXME
allocatePage Method
allocatePage …FIXME
TaskMetrics
TaskMetrics is a collection of metrics tracked during execution of a Task.
TaskMetrics uses accumulators to represent the metrics and offers "increment" methods to
increment them.
Note: The local values of the accumulators for a task (as accumulated while the task runs) are sent from the executor to the driver when the task completes (and DAGScheduler re-creates TaskMetrics).
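Task metrics can be peeked at from within a task through the active TaskContext, for example (a sketch for spark-shell; the output goes to the executors' stdout):

sc.parallelize(1 to 100, 2).foreachPartition { _ =>
  val metrics = org.apache.spark.TaskContext.get.taskMetrics
  // e.g. the executor deserialize time recorded so far for this task
  println(metrics.executorDeserializeTime)
}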
Table 1. Metrics
_updatedBlockStatuses (internal.metrics.updatedBlockStatuses): CollectionAccumulator[(BlockId, BlockStatus)]
accumulators Method
Caution FIXME
mergeShuffleReadMetrics Method
Caution FIXME
memoryBytesSpilled Method
Caution FIXME
updatedBlockStatuses Method
Caution FIXME
setExecutorCpuTime Method
Caution FIXME
setResultSerializationTime Method
Caution FIXME
setJvmGCTime Method
Caution FIXME
setExecutorRunTime Method
Caution FIXME
setExecutorDeserializeCpuTime Method
Caution FIXME
setExecutorDeserializeTime Method
Caution FIXME
setUpdatedBlockStatuses Method
Caution FIXME
Internally, fromAccumulators creates a new TaskMetrics . It then splits accums into internal
and external task metrics collections (using nameToAccums internal registry).
For every internal task metrics, fromAccumulators finds the metrics in nameToAccums
internal registry (of the new TaskMetrics instance), copies metadata, and merges state.
In the end, fromAccumulators adds the external accumulators to the new TaskMetrics
instance.
register registers the internal accumulators (from the nameToAccums internal registry) with a SparkContext.
Note register is used exclusively when Stage is requested for its new attempt.
ShuffleWriteMetrics
ShuffleWriteMetrics is a collection of accumulators that represents task metrics about writing shuffle data.
decRecordsWritten Method
Caution FIXME
decBytesWritten Method
Caution FIXME
writeTime Method
Caution FIXME
recordsWritten Method
Caution FIXME
bytesWritten: Long
Note: bytesWritten is used when:
1. ShuffleWriteMetricsUIData is created
2. In decBytesWritten
3. StatsReportListener intercepts stage completed events to show shuffle bytes written
4. ShuffleExternalSorter does writeSortedFile (to incDiskBytesSpilled )
1. SortShuffleWriter stops.
TaskSetBlacklist — Blacklisting Executors and Nodes For TaskSet
updateBlacklistForFailedTask Method
Caution FIXME
isExecutorBlacklistedForTaskSet Method
Caution FIXME
isNodeBlacklistedForTaskSet Method
Caution FIXME
SchedulerBackend — Pluggable Scheduler Backends
SchedulerBackend is a pluggable interface to support various cluster managers, e.g. Apache
Mesos, Hadoop YARN or Spark’s own Spark Standalone and Spark local.
As the cluster managers differ by their custom task scheduling modes and resource offer mechanisms, Spark abstracts the differences away in the SchedulerBackend contract.
Spark on YARN: YarnSchedulerBackend, i.e. YarnClientSchedulerBackend (for client deploy mode) and YarnClusterSchedulerBackend (for cluster deploy mode)
Spark on Mesos: MesosCoarseGrainedSchedulerBackend and MesosFineGrainedSchedulerBackend
Being a scheduler backend in Spark assumes an Apache Mesos-like model in which "an
application" gets resource offers as machines become available and can launch tasks on
them. Once a scheduler backend obtains the resource allocation, it can start executors.
SchedulerBackend Contract
trait SchedulerBackend {
def applicationId(): String
def applicationAttemptId(): Option[String]
def defaultParallelism(): Int
def getDriverLogUrls: Option[Map[String, String]]
def isReady(): Boolean
def killTask(taskId: Long, executorId: String, interruptThread: Boolean): Unit
def reviveOffers(): Unit
def start(): Unit
def stop(): Unit
}
defaultParallelism: Used when TaskSchedulerImpl finds the default level of parallelism (as a hint for sizing jobs).
getDriverLogUrls: Returns no URLs by default and is only supported by YarnClusterSchedulerBackend.
start: Starts SchedulerBackend. Used when TaskSchedulerImpl is started.
stop: Stops SchedulerBackend. Used when TaskSchedulerImpl is stopped.
CoarseGrainedSchedulerBackend
CoarseGrainedSchedulerBackend is a SchedulerBackend.
CoarseGrainedSchedulerBackend is an ExecutorAllocationClient.
CoarseGrainedSchedulerBackend requests resources from a cluster manager for executors that it in turn uses to launch tasks (on coarse-grained executors).
CoarseGrainedSchedulerBackend holds executors for the duration of the Spark job rather than
relinquishing executors whenever a task is done and asking the scheduler to launch a new
executor for each new task.
Note Active executors are executors that are not pending to be removed or lost.
CoarseGrainedSchedulerBackend uses the following internal registries and counters (with their initial values):
currentExecutorIdCounter
driverEndpoint: (uninitialized)
executorDataMap: empty
executorsPendingToRemove: empty
hostToLocalTaskCount: empty
localityAwareTasks: 0
maxRegisteredWaitingTimeMs: spark.scheduler.maxRegisteredResourcesWaitingTime
_minRegisteredRatio: spark.scheduler.minRegisteredResourcesRatio
numPendingExecutors: 0
totalCoreCount: 0
totalRegisteredExecutors: 0
Refer to Logging.
Caution FIXME
makeOffers(): Unit
makeOffers(executorId: String): Unit
makeOffers takes the active executors (out of the executorDataMap internal registry) and
creates WorkerOffer resource offers for each (one per executor with the executor’s id, host
and free cores).
Caution Only free cores are considered in making offers. Memory is not! Why?!
CoarseGrainedSchedulerBackend takes the following when created:
1. TaskSchedulerImpl
2. RpcEnv
CoarseGrainedSchedulerBackend Contract
class CoarseGrainedSchedulerBackend {
def minRegisteredRatio: Double
def createDriverEndpoint(properties: Seq[(String, String)]): DriverEndpoint
def reset(): Unit
def sufficientResourcesRegistered(): Boolean
def doRequestTotalExecutors(requestedTotal: Int): Future[Boolean]
def doKillExecutors(executorIds: Seq[String]): Future[Boolean]
}
reset FIXME
doRequestTotalExecutors FIXME
doKillExecutors FIXME
numExistingExecutors Method
Caution FIXME
killExecutors Methods
Caution FIXME
getDriverLogUrls Method
Caution FIXME
applicationAttemptId Method
Caution FIXME
When called, you should see the following INFO message followed by DEBUG message in
the logs:
requestExecutors requests executors from a cluster manager (that reflects the current
computation needs). The "new executor total" is a sum of the internal numExistingExecutors
and numPendingExecutors decreased by the number of executors pending to be removed.
Note It is a final method that no other scheduler backends could customize further.
requestTotalExecutors(
numExecutors: Int,
localityAwareTasks: Int,
hostToLocalTaskCount: Map[String, Int]): Boolean
Note It is a final method that no other scheduler backends could customize further.
defaultParallelism(): Int
reset resets the internal state:
1. Sets numPendingExecutors to 0
2. Clears executorsPendingToRemove
driverEndpoint is a DriverEndpoint.
It tracks:
start(): Unit
start takes all spark. -prefixed properties and registers the CoarseGrainedScheduler RPC endpoint (a DriverEndpoint).
isReady(): Boolean
isReady allows to delay task launching until sufficient resources are available or
spark.scheduler.maxRegisteredResourcesWaitingTime passes.
If the resources are available, you should see the following INFO message in the logs and
isReady is positive.
If there are no sufficient resources available yet (the above requirement does not hold), isReady checks whether the time since startup has passed spark.scheduler.maxRegisteredResourcesWaitingTime.
You should see the following INFO message in the logs and isReady is positive.
reviveOffers(): Unit
reviveOffers simply sends a ReviveOffers message to the CoarseGrainedScheduler RPC endpoint.
stop(): Unit
stop stops all executors and the CoarseGrainedScheduler RPC endpoint (by sending a blocking StopDriver message).
createDriverEndpointRef Method
createDriverEndpointRef creates a DriverEndpoint and registers it as CoarseGrainedScheduler.
Settings
Table 4. Spark Properties
spark.scheduler.revive.interval (default: 1s): Time (in milliseconds) between resource offers revives.
spark.rpc.message.maxSize (default: 128): Maximum message size to allow in RPC communication. In MB when the unit is not given. Generally only applies to map output size (serialized) information sent between executors and the driver. Increase this if you are running jobs with many thousands of map and reduce tasks and see messages about the RPC message size.
spark.scheduler.minRegisteredResourcesRatio (default: 0): Double number between 0 and 1 (inclusive) that controls the minimum ratio of (registered resources / total expected resources) before submitting tasks.
DriverEndpoint — CoarseGrainedSchedulerBackend RPC Endpoint
DriverEndpoint is a ThreadSafeRpcEndpoint that acts as the message handler for CoarseGrainedSchedulerBackend (to communicate with executors, i.e. CoarseGrainedExecutorBackends).
DriverEndpoint uses executorDataMap internal registry of all the executors that registered
with the driver. An executor sends a RegisterExecutor message to inform that it wants to
register.
DriverEndpoint periodically makes executor resource offers (for launching tasks) by emitting a ReviveOffers message to itself every spark.scheduler.revive.interval.
DriverEndpoint handles the following RPC messages:
KillExecutorsOnHost (KillExecutorsOnHost handler): CoarseGrainedSchedulerBackend requested to kill all executors on a node.
KillTask (KillTask handler): CoarseGrainedSchedulerBackend requested to kill a task.
ReviveOffers (makeOffers): sent periodically (every spark.scheduler.revive.interval) soon after DriverEndpoint starts accepting messages, and when CoarseGrainedSchedulerBackend is requested to revive resource offers.
RegisterExecutor (RegisterExecutor handler): CoarseGrainedExecutorBackend registers with the driver.
StatusUpdate (StatusUpdate handler): CoarseGrainedExecutorBackend sends task status updates to the driver.
executorsPendingLossReason
reviveThread
Caution FIXME
KillExecutorsOnHost Handler
Caution FIXME
Caution FIXME
onStop Callback
Caution FIXME
onDisconnected Callback
When called, onDisconnected removes the worker from the internal addressToExecutorId
registry (that effectively removes the worker from a cluster).
While removing, it calls removeExecutor with the reason being SlaveLost and message:
RemoveExecutor
RetrieveSparkProps
StopDriver
StopDriver message stops the RPC endpoint.
StopExecutors
StopExecutors message is receive-reply and blocking. When received, the following INFO message appears in the logs:
onStart(): Unit
onStart schedules sending a ReviveOffers message to itself periodically, every spark.scheduler.revive.interval.
makeOffers(): Unit
makeOffers first creates WorkerOffers for all active executors (registered in the internal
executorDataMap cache).
makeOffers finds the executor data (in executorDataMap registry) and creates a
WorkerOffer.
launchTasks flattens (and hence "destroys" the structure of) the input tasks collection and processes every TaskDescription.
Note: The input tasks collection contains one or more TaskDescriptions per executor (and the "task partitioning" per executor is of no use in launchTasks so it simply flattens the input data structure).
launchTasks encodes every TaskDescription and makes sure that the encoded task's size is below the maximum RPC message size.
If the size of the encoded task is acceptable, launchTasks finds the ExecutorData of the
executor that has been assigned to execute the task (in executorDataMap internal registry)
and decreases the executor’s available number of cores.
Note ExecutorData tracks the number of free cores of an executor (as freeCores ).
In the end, launchTasks sends the (serialized) task to the associated executor to launch the task (by sending a LaunchTask message to the executor's RPC endpoint with the serialized task wrapped inside a SerializableBuffer).
Important: This is the moment in a task's lifecycle when the driver sends the serialized task to an assigned executor.
In case the size of a serialized TaskDescription equals or exceeds the maximum RPC
message size, launchTasks finds the TaskSetManager (associated with the
TaskDescription ) and aborts it with the following message:
Note: Scheduling in Spark relies on cores only (not memory), i.e. the number of tasks Spark can run on an executor is limited by the number of cores available only. When submitting a Spark application for execution, both executor resources (memory and cores) can however be specified explicitly. It is the job of a cluster manager to monitor the memory and take action when its use exceeds what was assigned.
DriverEndpoint takes an RpcEnv (and the Spark properties) when created.
RegisterExecutor Handler
RegisterExecutor(
executorId: String,
executorRef: RpcEndpointRef,
hostname: String,
cores: Int,
logUrls: Map[String, String])
extends CoarseGrainedClusterMessage
If the requirements hold, you should see the following INFO message in the logs:
Increments totalRegisteredExecutors
If numPendingExecutors is greater than 0 , you should see the following DEBUG message
in the logs and DriverEndpoint decrements numPendingExecutors .
In the end, DriverEndpoint makes executor resource offers (for launching tasks).
If however there was already another executor registered under the input executorId ,
DriverEndpoint sends RegisterExecutorFailed message back with the reason:
If however the input hostname is blacklisted, you should see the following INFO message in
the logs:
StatusUpdate Handler
StatusUpdate(
executorId: String,
taskId: Long,
state: TaskState,
data: SerializableBuffer)
extends CoarseGrainedClusterMessage
If the task has finished, DriverEndpoint updates the number of cores available for work on
the corresponding executor (registered in executorDataMap).
When DriverEndpoint found no executor (in executorDataMap), you should see the
following WARN message in the logs:
WARN Ignored task status update ([taskId] state [state]) from unknown executor with ID
[executorId]
KillTask Handler
KillTask(
taskId: Long,
executor: String,
interruptThread: Boolean)
extends CoarseGrainedClusterMessage
If found, DriverEndpoint passes the message on to the executor (using its registered RPC
endpoint for CoarseGrainedExecutorBackend ).
When removeExecutor is executed, you should see the following DEBUG message in the
logs:
removeExecutor then tries to find the executorId executor (in executorDataMap internal
registry).
If the executorId executor was found, removeExecutor removes the executor from the
following registries:
addressToExecutorId
executorDataMap
executorsPendingLossReason
executorsPendingToRemove
removeExecutor decrements totalRegisteredExecutors.
In the end, removeExecutor notifies TaskSchedulerImpl that the executorId executor was lost.
ExecutorBackend — Pluggable Executor Backends
ExecutorBackend is a pluggable interface that TaskRunners use to send task status updates
to a scheduler.
It is effectively a bridge between the driver and an executor, i.e. there are two endpoints
running.
1. CoarseGrainedExecutorBackend
2. LocalSchedulerBackend
3. MesosExecutorBackend
ExecutorBackend Contract
trait ExecutorBackend {
def statusUpdate(taskId: Long, state: TaskState, data: ByteBuffer): Unit
}
statusUpdate
Used when TaskRunner is requested to run a task (to
send task status updates).
CoarseGrainedExecutorBackend
CoarseGrainedExecutorBackend is a standalone application that is started in a resource container when a cluster manager launches an executor for a Spark application.
CoarseGrainedExecutorBackend registers with the driver (before accepting messages) and shuts down when the driver disconnects.
CoarseGrainedExecutorBackend handles the following messages (among others):
KillTask
RegisterExecutorFailed
StopExecutor
Shutdown
log4j.logger.org.apache.spark.executor.CoarseGrainedExecutorBackend=INFO
LaunchTask first decodes the TaskDescription from data. You should see the following INFO message in the logs:
LaunchTask then launches the task on the executor (passing itself as the owning ExecutorBackend together with the decoded TaskDescription).
statusUpdate creates a StatusUpdate (with the input taskId , state , and data together
with the executor id) and sends it to the driver (if connected already).
Driver’s URL
The driver's URL is of the format spark://[RpcEndpoint name]@[hostname]:[port] , e.g. spark://CoarseGrainedScheduler@<driver-host>:64859 .
$ ./bin/spark-class org.apache.spark.executor.CoarseGrainedExecutorBackend
Options are:
--driver-url <driverUrl>
--executor-id <executorId>
--hostname <hostname>
--cores <cores>
--app-id <appid>
--worker-url <workerUrl>
--user-class-path <url>
run(
driverUrl: String,
executorId: String,
hostname: String,
cores: Int,
appId: String,
workerUrl: Option[String],
userClassPath: scala.Seq[URL]): Unit
Note: run runs itself with a Hadoop UserGroupInformation (as a thread local variable distributed to child threads for authenticating HDFS and YARN calls).
Note: run expects a clear hostname with no : included (for a port perhaps).
run uses spark.executor.port Spark property (or 0 if not set) for the port to create a
RpcEnv called driverPropsFetcher (together with the input hostname and clientMode
enabled).
run resolves the RpcEndpointRef for the input driverUrl and requests the SparkAppConfig (by posting a blocking request).
run uses the SparkAppConfig to get the driver's sparkProperties and adds the spark.app.id Spark property (with the value of the input appId ).
run creates a SparkConf using the Spark properties fetched from the driver, i.e. with the
executor-related Spark settings if they were missing and the rest unconditionally.
run requests the current SparkHadoopUtil to start the credential updater.
run creates SparkEnv for executors (with the input executorId , hostname and cores ).
Important: This is the moment when SparkEnv gets created with all the executor services.
(only in Spark Standalone) If the optional input workerUrl was defined, run sets up an
RPC endpoint with the name WorkerWatcher and WorkerWatcher RPC endpoint.
run 's main thread is blocked until RpcEnv terminates and only the RPC endpoints process
RPC messages.
CoarseGrainedExecutorBackend takes the following when created:
1. RpcEnv
2. driverUrl
3. executorId
4. hostname
5. cores
6. userClassPath
7. SparkEnv
onStart(): Unit
When executed, you should see the following INFO message in the logs:
onStart then takes the RpcEndpointRef of the driver asynchronously, initializes the internal driver reference, and requests the driver to register the executor (by sending a RegisterExecutor message).
RegisteredExecutor
extends CoarseGrainedClusterMessage with RegisterExecutorResponse
When RegisteredExecutor is received, you should see the following INFO in the logs:
RegisterExecutorFailed
RegisterExecutorFailed(message)
When a RegisterExecutorFailed message arrives, the following ERROR is printed out to the
logs:
If an executor has not been initialized yet (FIXME: why?), the following ERROR message is
printed out to the logs and CoarseGrainedExecutorBackend exits:
StopExecutor Handler
When StopExecutor is received, the handler turns stopping internal flag on. You should see
the following INFO message in the logs:
Shutdown Handler
When Shutdown arrives, the handler starts a separate executor thread that stops the owned Executor (using the executor reference).
exitExecutor(
code: Int,
reason: String,
throwable: Throwable = null,
notifyDriver: Boolean = true): Unit
When exitExecutor is executed, you should see the following ERROR message in the logs
(followed by throwable if available):
If notifyDriver is enabled (it is by default) exitExecutor informs the driver that the
executor should be removed (by sending a blocking RemoveExecutor message with executor
id and a ExecutorLossReason with the input reason ).
You may see the following WARN message in the logs when the notification fails.
onDisconnected Callback
Caution FIXME
start Method
Caution FIXME
stop Method
Caution FIXME
requestTotalExecutors
Caution FIXME
Caution FIXME
MesosExecutorBackend
Caution FIXME
registered Method
Caution FIXME
launchTask Method
launchTask …FIXME
BlockManager — Key-Value Store of Blocks of Data
BlockManager acts as a local cache that runs on every node in a Spark cluster, i.e. the driver
and executors.
BlockManager provides an interface for uploading and fetching blocks both locally and remotely (using various stores, i.e. memory, disk, and off-heap).
BlockManager is created exclusively when SparkEnv is created (for the driver and executors).
Some of the work (e.g. block replication) is performed asynchronously (on a thread pool with the block-manager-future prefix and a maximum of 128 threads).
The common idiom in Spark to access a BlockManager regardless of a location, i.e. the
driver or executors, is through SparkEnv:
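For example (on the driver or inside a task):

import org.apache.spark.SparkEnv

val blockManager = SparkEnv.get.blockManager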
BlockManager is a BlockDataManager, i.e. it manages the storage for blocks that can represent cached RDD partitions, intermediate shuffle outputs, broadcasts, etc.
BlockManager is a BlockEvictionHandler that can drop a block from memory and store it on
a disk if required.
Cached blocks are blocks with non-zero sum of memory and disk sizes.
Tip Use Web UI, esp. Storage and Executors tabs, to monitor the memory used.
Tip: Use spark-submit's command-line options, i.e. --driver-memory for the driver and --executor-memory for executors, or their equivalents as Spark properties, i.e. spark.executor.memory and spark.driver.memory, to control the memory for storage memory.
A BlockManager is created when a Spark application starts and must be initialized before it
is fully operable.
maxOffHeapMemory
maxOnHeapMemory
Refer to Logging.
Tip: You may want to shut off WARN messages being printed out about the current state of blocks using the following line to cut the noise:
log4j.logger.org.apache.spark.storage.BlockManager=OFF
getLocations Method
Caution FIXME
blockIdsToHosts Method
Caution FIXME
getLocationBlockIds Method
Caution FIXME
getPeers Method
Caution FIXME
releaseAllLocksForTask Method
Caution FIXME
stop(): Unit
stop …FIXME
getMatchingBlockIds …FIXME
getLocalValues Method
getLocalValues …FIXME
Internally, when getLocalValues is executed, you should see the following DEBUG
message in the logs:
When no blockId block was found, you should see the following DEBUG message in the
logs and getLocalValues returns "nothing" (i.e. NONE ).
When the blockId block was found, you should see the following DEBUG message in the
logs:
getLocalValues returns one of the following:
1. Values iterator from MemoryStore for blockId for "deserialized" persistence levels.
2. Iterator from SerializerManager after the data stream has been deserialized for the blockId block and the bytes for blockId block for "serialized" persistence levels.
Caution FIXME
getRemoteValues …FIXME
get attempts to get the blockId block from the local block manager first, before requesting it from remote block managers.
Internally, get tries to get the block from the local BlockManager. If the block was found,
you should see the following INFO message in the logs and get returns the local
BlockResult.
If however the block was not found locally, get tries to get the block from remote block
managers. If retrieved from a remote block manager, you should see the following INFO
message in the logs and get returns the remote BlockResult.
In the end, get returns "nothing" (i.e. NONE ) when the blockId block was not found either
in the local BlockManager or any remote BlockManager .
Caution FIXME
Caution FIXME
removeBlockInternal Method
Caution FIXME
Stores
A Store is the place where blocks are held.
putBlockData(
blockId: BlockId,
data: ManagedBuffer,
level: StorageLevel,
classTag: ClassTag[_]): Boolean
putBlockData simply stores blockId locally (given the input storage level ).
putBytes(
blockId: BlockId,
bytes: ChunkedByteBuffer,
level: StorageLevel,
tellMaster: Boolean = true): Boolean
putBytes makes sure that the bytes are not null and doPutBytes.
def doPutBytes[T](
blockId: BlockId,
bytes: ChunkedByteBuffer,
level: StorageLevel,
classTag: ClassTag[T],
tellMaster: Boolean = true,
keepReadLock: Boolean = false): Boolean
doPutBytes calls the internal helper doPut with a function that accepts a BlockInfo and does the actual storing of the bytes.
Inside the function, if the storage level 's replication is greater than 1, it immediately starts
replication of the blockId block on a separate thread (from futureExecutionContext thread
pool). The replication uses the input bytes and level storage level.
For a memory storage level, the function checks whether the storage level is deserialized
or not. For a deserialized storage level , BlockManager 's SerializerManager deserializes
bytes into an iterator of values that MemoryStore stores. If however the storage level is serialized, MemoryStore stores the bytes directly.
If the put did not succeed and the storage level is to use disk, you should see the following
WARN message in the logs:
DiskStore is requested to store the bytes of a block with memory and disk
Note
storage level only when MemoryStore has failed.
If the storage level is to use disk only, DiskStore stores the bytes.
doPutBytes requests current block status and if the block was successfully stored, and the
driver should know about it ( tellMaster ), the function reports the current storage status of
the block to the driver. The current TaskContext metrics are updated with the updated block
status (only when executed inside a task where TaskContext is available).
The function waits till the earlier asynchronous replication finishes for a block with replication
level greater than 1 .
The final result of doPutBytes is the result of storing the block successful or not (as
computed earlier).
maybeCacheDiskValuesInMemory Method
Caution FIXME
doPut[T](
blockId: BlockId,
level: StorageLevel,
classTag: ClassTag[_],
tellMaster: Boolean,
keepReadLock: Boolean)(putBody: BlockInfo => Option[T]): Option[T]
doPut executes the input putBody function with a BlockInfo being a new BlockInfo object
(with level storage level) that BlockInfoManager managed to create a write lock for.
If the block has already been created (and BlockInfoManager did not manage to create a
write lock for), the following WARN message is printed out to the logs:
doPut releases the read lock for the block when keepReadLock flag is disabled and returns
None immediately.
If however the write lock has been given, doPut executes putBody .
For unsuccessful save, the block is removed from memory and disk stores and the following
WARN message is printed out to the logs:
removeBlock removes the blockId block from the MemoryStore and DiskStore.
When executed, it prints out the following DEBUG message to the logs:
It requests BlockInfoManager for lock for writing for the blockId block. If it receives none, it
prints out the following WARN message to the logs and quits.
Otherwise, with a write lock for the block, the block is removed from MemoryStore and
DiskStore (see Removing Block in MemoryStore and Removing Block in DiskStore ).
WARN Block [blockId] could not be removed as it was not found in either the disk, memo
ry, or external block store
It then calculates the current block status that is used to report the block status to the driver
(if the input tellMaster and the info’s tellMaster are both enabled, i.e. true ) and the
current TaskContext metrics are updated with the change.
removeRdd removes all the blocks that belong to the rddId RDD.
It then requests RDD blocks from BlockInfoManager and removes them (from memory and
disk) (without informing the driver).
Internally, it starts by printing out the following DEBUG message to the logs:
It then requests all the BroadcastBlockId objects that belong to the broadcastId broadcast
from BlockInfoManager and removes them (from memory and disk).
Caution FIXME
BlockManager takes the following when created:
Executor ID
RpcEnv
BlockManagerMaster
SerializerManager
SparkConf
MemoryManager
MapOutputTracker
ShuffleManager
BlockTransferService
SecurityManager
CPU cores
BlockManager creates block-manager-future daemon cached thread pool with 128 threads
BlockManager calculates the maximum memory to use (as maxMemory ) by requesting the
maximum on-heap and off-heap storage memory from the assigned MemoryManager .
BlockManager calculates the port used by the external shuffle service (as
externalShuffleServicePort ).
BlockManager creates a client to read other executors' shuffle files (as shuffleClient ): the ExternalShuffleClient when the external shuffle service is enabled, or the BlockTransferService otherwise.
BlockManager also sets the maximum number of failures before this block manager refreshes the block locations from the driver.
shuffleServerId
Caution FIXME
1. Initializes BlockTransferService
3. Registers itself with the driver’s BlockManagerMaster (using the id , maxMemory and its
slaveEndpoint ).
5. It creates the address of the server that serves this executor’s shuffle files (using
shuffleServerId)
If the External Shuffle Service is used, the following INFO appears in the logs:
Ultimately, if the initialization happens on an executor and the External Shuffle Service is
used, it registers to the shuffle service.
registerWithExternalShuffleServer(): Unit
When executed, you should see the following INFO message in the logs:
It uses shuffleClient to register the block manager using shuffleServerId (i.e. the host, the port and the executorId) and an ExecutorShuffleInfo .
Note The maximum number of attempts and the sleep time in-between are hard-coded, i.e. they are not configured.
Any issues while connecting to the external shuffle service are reported as ERROR
messages in the logs:
ERROR Failed to connect to external shuffle server, will retry [#attempts] more times
after waiting 5 seconds...
reregister(): Unit
When executed, reregister prints the following INFO message to the logs:
reregister then registers itself to the driver’s BlockManagerMaster (just as it was when
BlockManager was initializing). It passes the BlockManagerId, the maximum memory (as
maxMemory ), and the BlockManagerSlaveEndpoint.
reregister will then report all the local blocks to the BlockManagerMaster.
For each block metadata (in BlockInfoManager) it gets block current status and tries to send
it to the BlockManagerMaster.
If there is an issue communicating to the BlockManagerMaster, you should see the following
ERROR message in the logs:
getCurrentBlockStatus returns the current BlockStatus of the BlockId block (with the
block’s current StorageLevel, memory and disk sizes). It uses MemoryStore and DiskStore
for size and other information.
Internally, it uses the input BlockInfo to know about the block’s storage level. If the storage
level is not set (i.e. null ), the returned BlockStatus assumes the default NONE storage
level and the memory and disk sizes being 0 .
If however the storage level is set, getCurrentBlockStatus uses MemoryStore and DiskStore
to check whether the block is stored in the storages or not and request for their sizes in the
storages respectively (using their getSize or assume 0 ).
Note It is acceptable that the BlockInfo says to use memory or disk yet the block is not in the storages (yet or anymore). The method will give current status.
reportAllBlocks(): Unit
reportAllBlocks …FIXME
reportBlockStatus(
blockId: BlockId,
info: BlockInfo,
status: BlockStatus,
droppedMemorySize: Long = 0L): Unit
reportBlockStatus is an internal method for reporting a block status to the driver and, if told to re-register, triggering an asynchronous re-registration. In either case, it prints out the following DEBUG message to the logs:
def tryToReportBlockStatus(
blockId: BlockId,
info: BlockInfo,
status: BlockStatus,
droppedMemorySize: Long = 0L): Boolean
tryToReportBlockStatus reports the block status to the driver and returns its response.
Broadcast Values
When a new broadcast value is created, TorrentBroadcast blocks are put in the block
manager.
TRACE Put for block [blockId] took [startTimeMs] to get into synchronized block
It puts the data in the memory first and drops it to disk if the memory store can’t hold it.
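A minimal sketch of the user-facing side (sc is assumed to be an existing SparkContext): creating a broadcast variable is what puts the TorrentBroadcast blocks in the block manager.

val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))  // TorrentBroadcast blocks end up in the BlockManager
lookup.value("a")                                    // reads the broadcast value back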
BlockManagerId
FIXME
Execution Context
block-manager-future is the execution context for…FIXME
Misc
The underlying abstraction for blocks in Spark is a ByteBuffer that limits the size of a block to 2GB ( Integer.MAX_VALUE - see Why does FileChannel.map take up to Integer.MAX_VALUE of data? and SPARK-1476 2GB limit in spark for blocks). This has implications not just for managed blocks in use, but also for shuffle blocks (memory-mapped blocks are limited to 2GB, even though the API allows for long ) and for ser-de via byte array-backed output streams.
BlockResult
BlockResult is a description of a fetched block with the readMethod and bytes .
getDiskWriter(
blockId: BlockId,
file: File,
serializerInstance: SerializerInstance,
bufferSize: Int,
writeMetrics: ShuffleWriteMetrics): DiskBlockObjectWriter
getDiskWriter creates a DiskBlockObjectWriter (with the spark.shuffle.sync Spark property for syncWrites ).
shuffleMetricsSource: Source
shuffleMetricsSource requests the ShuffleClient for the shuffle-related metrics and creates a ShuffleMetricsSource with them.
Settings
spark.shuffle.sync (default: false )
Controls whether DiskBlockObjectWriter should force outstanding writes to disk when committing a single atomic block, i.e. all operating system buffers should synchronize with the disk to ensure that all changes to a file are in fact recorded in the storage.
replicate(
blockId: BlockId,
data: BlockData,
level: StorageLevel,
classTag: ClassTag[_],
existingReplicas: Set[BlockManagerId] = Set.empty): Unit
replicate …FIXME
replicateBlock Method
replicateBlock(
blockId: BlockId,
existingReplicas: Set[BlockManagerId],
maxReplicas: Int): Unit
replicateBlock …FIXME
putIterator Method
putIterator[T: ClassTag](
blockId: BlockId,
values: Iterator[T],
level: StorageLevel,
tellMaster: Boolean = true): Boolean
putIterator …FIXME
putSingle Method
putSingle[T: ClassTag](
blockId: BlockId,
value: T,
level: StorageLevel,
tellMaster: Boolean = true): Boolean
putSingle …FIXME
getRemoteBytes …FIXME
getRemoteValues …FIXME
getSingle Method
getSingle …FIXME
shuffleClient Property
shuffleClient: ShuffleClient
shuffleMetricsSource
blockTransferService Property
When created, BlockManager is given a BlockTransferService that is used for the following
services:
getOrElseUpdate[T](
blockId: BlockId,
level: StorageLevel,
classTag: ClassTag[T],
makeIterator: () => Iterator[T]): Either[BlockResult, Iterator[T]]
getOrElseUpdate first attempts to get the block by the BlockId (from the local block manager first and, if not available, from remote block managers).
If however the block was not found (in any block manager in a Spark cluster), getOrElseUpdate calls doPutIterator (for the input BlockId , the makeIterator function and the StorageLevel ).
For None (i.e. the block was stored successfully), getOrElseUpdate calls getLocalValues for the BlockId and eventually returns the BlockResult (unless terminated by a SparkException due to some internal error).
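The Either-based contract can be sketched in plain Scala as follows (an illustrative sketch only, with a made-up cache and an arbitrary "could not be cached" condition; not Spark's code): a Left carries the cached values while a Right hands back the iterator of freshly-computed values that could not be stored.

import scala.collection.mutable
def getOrCompute[T](cache: mutable.Map[String, Vector[T]], blockId: String)(
    makeIterator: () => Iterator[T]): Either[Vector[T], Iterator[T]] =
  cache.get(blockId) match {
    case Some(values) => Left(values)            // block already available
    case None =>
      val values = makeIterator().toVector       // compute the block
      if (values.size <= 1000) {                 // pretend small blocks fit in the store
        cache(blockId) = values
        Left(values)
      } else {
        Right(values.iterator)                   // could not be cached; hand the iterator back
      }
  }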
doPutIterator[T](
blockId: BlockId,
iterator: () => Iterator[T],
level: StorageLevel,
classTag: ClassTag[T],
tellMaster: Boolean = true,
keepReadLock: Boolean = false): Option[PartiallyUnrolledIterator[T]]
doPutIterator simply calls doPut with a putBody function that accepts a BlockInfo and does the following:
1. putBody branches off per whether the StorageLevel indicates to use a memory or a disk store. If the MemoryStore returned a correct value, the internal size is set to the value.
When the input StorageLevel indicates to use memory for storage in serialized
format, putBody …FIXME
When the input StorageLevel does not indicate to use memory for storage but disk
instead, putBody …FIXME
3. Only when the block was successfully stored in either the memory or disk store:
putBody reports the block status (as BlockStatus ) to the BlockManagerMaster when the input tellMaster flag (default: enabled) and the tellMaster flag of the block info are both enabled.
With a successful replication, putBody prints out the following DEBUG message to
the logs:
dropFromMemory(
blockId: BlockId,
data: () => Either[Array[T], ChunkedByteBuffer]): StorageLevel
When dropFromMemory is executed, you should see the following INFO message in the logs:
If the block’s StorageLevel uses disks and the internal DiskStore object ( diskStore ) does
not contain the block, it is saved then. You should see the following INFO message in the
logs:
The block is removed from memory if exists. If not, you should see the following WARN
message in the logs:
WARN BlockManager: Block [blockId] could not be dropped from memory as it does not exist
It then calculates the current storage status of the block and reports it to the driver. It only
happens when info.tellMaster .
A block is considered updated when it was written to disk or removed from memory or both.
If either happened, the current TaskContext metrics are updated with the change.
MemoryStore
MemoryStore is the memory store for blocks of data.
SparkEnv.get.blockManager.memoryStore
The entries internal registry of memory entries per block is a Java LinkedHashMap in access-order mode: the order of iteration is the order in which the entries were last accessed, from least-recently accessed to most-recently accessed. That gives LRU cache behaviour when evicting blocks.
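A small sketch of the access-order behaviour using a plain java.util.LinkedHashMap (the keys and values below are made up for illustration):

import java.util.LinkedHashMap
import scala.collection.JavaConverters._
val entries = new LinkedHashMap[String, Array[Byte]](32, 0.75f, true)   // accessOrder = true
entries.put("rdd_0_0", Array.fill(4)(0: Byte))
entries.put("rdd_0_1", Array.fill(4)(0: Byte))
entries.get("rdd_0_0")                  // touching rdd_0_0 makes it most-recently accessed
entries.keySet.asScala.foreach(println) // prints rdd_0_1 first (the LRU eviction candidate), then rdd_0_0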
Refer to Logging.
releaseUnrollMemoryForThisTask Method
releaseUnrollMemoryForThisTask …FIXME
getValues Method
getValues does…FIXME
getBytes Method
getBytes does…FIXME
putIteratorAsBytes Method
putIteratorAsBytes[T](
blockId: BlockId,
values: Iterator[T],
classTag: ClassTag[T],
memoryMode: MemoryMode): Either[PartiallySerializedBlock[T], Long]
Caution FIXME
Removing Block
Caution FIXME
putBytes[T](
blockId: BlockId,
size: Long,
memoryMode: MemoryMode,
_bytes: () => ChunkedByteBuffer): Boolean
putBytes requests storage memory for blockId from the MemoryManager and registers the block in the entries internal registry.
Internally, putBytes first makes sure that the blockId block has not already been registered in the entries internal registry.
putBytes then requests size memory for the blockId block in a given memoryMode from the MemoryManager.
Note
import org.apache.spark.storage.StorageLevel._

scala> MEMORY_AND_DISK.useOffHeap
res0: Boolean = false

scala> OFF_HEAP.useOffHeap
res1: Boolean = true
If successful, putBytes "materializes" _bytes byte buffer and makes sure that the size is
exactly size . It then registers a SerializedMemoryEntry (for the bytes and memoryMode ) for
blockId in the internal entries registry.
INFO Block [blockId] stored as bytes in memory (estimated size [size], free [bytes])
putBytes returns true only after blockId was successfully registered in the internal
entries registry.
Settings
Table 2. Spark Properties
spark.storage.unrollMemoryThreshold
Initial per-task memory size needed to store a block in memory. It should be at most the total amount of memory available for storage. If not, you should see the following WARN message in the logs:
evictBlocksToFreeSpace(
blockId: Option[BlockId],
space: Long,
memoryMode: MemoryMode): Long
evictBlocksToFreeSpace …FIXME
contains is positive ( true ) when the entries internal registry contains blockId key.
putIteratorAsValues Method
putIteratorAsValues[T](
blockId: BlockId,
values: Iterator[T],
classTag: ClassTag[T]): Either[PartiallyUnrolledIterator[T], Long]
putIteratorAsValues makes sure that the BlockId does not exist or throws an
IllegalArgumentException :
Caution FIXME
SparkConf
BlockInfoManager
SerializerManager
MemoryManager
BlockEvictionHandler
reserveUnrollMemoryForThisTask Method
reserveUnrollMemoryForThisTask(
blockId: BlockId,
memory: Long,
memoryMode: MemoryMode): Boolean
reserveUnrollMemoryForThisTask requests the MemoryManager to acquireUnrollMemory.
maxMemory: Long
Tip Enable INFO logging to find the maxMemory in the logs when MemoryStore is created:
MemoryStore started with capacity [maxMemory] MB
BlockEvictionHandler
BlockEvictionHandler is a contract of FIXME that dropFromMemory.
package org.apache.spark.storage.memory
trait BlockEvictionHandler {
def dropFromMemory[T: ClassTag](
blockId: BlockId,
data: () => Either[Array[T], ChunkedByteBuffer]): StorageLevel
}
dropFromMemory
Used exclusively when MemoryStore is requested to
evictBlocksToFreeSpace.
StorageMemoryPool
StorageMemoryPool is a MemoryPool that…FIXME
MemoryStore
_memoryStore
Used when…FIXME
memoryFree Method
memoryFree: Long
memoryFree …FIXME
acquireMemory Method
acquireMemory …FIXME
freeSpaceToShrinkPool Method
freeSpaceToShrinkPool …FIXME
Lock
MemoryPool
MemoryPool is…FIXME
DiskStore
Caution FIXME
putBytes
Caution FIXME
Removing Block
Caution FIXME
BlockDataManager — Block Storage
Management API
BlockDataManager is the contract for managing storage for blocks of data (aka block storage
management API).
package org.apache.spark.network
trait BlockDataManager {
def getBlockData(blockId: BlockId): ManagedBuffer
def putBlockData(
blockId: BlockId,
data: ManagedBuffer,
level: StorageLevel,
classTag: ClassTag[_]): Boolean
def releaseLock(blockId: BlockId, taskAttemptId: Option[Long]): Unit
}
ShuffleBlockFetcherIterator is requested to
fetchLocalBlocks
Blocks are identified by BlockId that has a globally unique identifier ( name ) and stored as
ManagedBuffer.
Table 2. BlockIds
Name Description
RpcHandler
RpcHandler is the base of…FIXME
package org.apache.spark.network.server;
Table 2. RpcHandlers
RpcHandler Description
AuthRpcHandler
ExternalShuffleBlockHandler
NettyBlockRpcServer
NettyRpcHandler
NoOpRpcHandler
SaslRpcHandler
OneWayRpcCallback RpcResponseCallback
OneWayRpcCallback is a RpcResponseCallback that simply prints out WARN and ERROR messages to the logs (for onSuccess and onFailure responses, respectively).
void onFailure(Throwable e)
RpcResponseCallback
RpcResponseCallback is the contract of…FIXME
package org.apache.spark.network.client;
interface RpcResponseCallback {
void onSuccess(ByteBuffer response);
void onFailure(Throwable e);
}
Table 2. RpcResponseCallbacks
RpcResponseCallback Description
"Unnamed" in NettyBlockTransferService
"Unnamed" in TransportRequestHandler
"Unnamed" in TransportClient
"Unnamed" in OneForOneBlockFetcher
OneWayRpcCallback
RegisterDriverCallback
RpcOutboxMessage
TransportRequestHandler
TransportRequestHandler is a MessageHandler of RequestMessage messages from Netty’s
Channel.
TransportRequestHandler is created when TransportContext is requested to createChannelHandler.
Refer to Logging.
processRpcRequest …FIXME
processFetchRequest …FIXME
processOneWayMessage …FIXME
processStreamRequest …FIXME
Netty’s Channel
TransportClient
RpcHandler
TransportContext
TransportContext is…FIXME
createChannelHandler …FIXME
initializePipeline Method
initializePipeline …FIXME
1. Uses 0 for the port and no bootstraps. Used exclusively for testing
TransportServer
TransportServer is…FIXME
TransportServer is created when TransportContext is requested to createServer.
init …FIXME
getPort Method
int getPort()
getPort …FIXME
TransportContext
RpcHandler
TransportServerBootstraps
When created, TransportServer initializes itself ( init ) with the host and port to bind to.
TransportClientFactory
TransportClientFactory is…FIXME
createUnmanagedClient Method
createUnmanagedClient …FIXME
createClient …FIXME
MessageHandler
MessageHandler is a contract of message handlers that can handle messages.
package org.apache.spark.network.server;
Table 2. MessageHandlers
MessageHandler Description
TransportRequestHandler
TransportResponseHandler
BlockManagerMaster — BlockManager for
Driver
BlockManagerMaster runs on the driver.
BlockManagerMaster uses the BlockManagerMasterEndpoint registered under the BlockManagerMaster RPC endpoint name on the driver (with the endpoint references on executors) to allow executors to send block status updates to it and hence keep track of block statuses.
Refer to Logging.
removeExecutorAsync Method
Caution FIXME
contains Method
Caution FIXME
RpcEndpointRef to…FIXME
SparkConf
It posts a blocking message to the driver's BlockManagerMaster RPC endpoint and waits for a response. If a false response comes in, a SparkException is thrown with the following message:
If all goes fine, you should see the following INFO message in the logs:
removeRdd removes all the blocks of rddId RDD, possibly in blocking fashion.
If there is an issue, you should see the following WARN message in the logs and the entire
exception:
removeShuffle removes all the blocks of shuffleId shuffle, possibly in a blocking fashion.
If there is an issue, you should see the following WARN message in the logs and the entire
exception:
removeBroadcast removes all the blocks of broadcastId broadcast, possibly in a blocking fashion.
If there is an issue, you should see the following WARN message in the logs and the entire
exception:
stop(): Unit
If all goes fine, you should see the following INFO message in the logs:
registerBlockManager(
blockManagerId: BlockManagerId,
maxMemSize: Long,
slaveEndpoint: RpcEndpointRef): BlockManagerId
Note The input maxMemSize is the total available on-heap and off-heap memory for storage on a BlockManager .
updateBlockInfo(
blockManagerId: BlockManagerId,
blockId: BlockId,
storageLevel: StorageLevel,
memSize: Long,
diskSize: Long): Boolean
getExecutorEndpointRef Method
getExecutorEndpointRef posts a GetExecutorEndpointRef message to the driver's BlockManagerMaster RPC endpoint and waits for a response which becomes the return value.
getMemoryStatus Method
getStorageStatus: Array[StorageStatus]
getStorageStatus posts a GetStorageStatus message to the driver's BlockManagerMaster RPC endpoint and waits for a response which becomes the return value.
getBlockStatus Method
getBlockStatus(
blockId: BlockId,
askSlaves: Boolean = true): Map[BlockManagerId, BlockStatus]
getBlockStatus posts a GetBlockStatus message to the driver's BlockManagerMaster RPC endpoint and waits for a response (of type Map[BlockManagerId, Future[Option[BlockStatus]]] ).
It then builds a sequence of future results that are BlockStatus statuses and waits for a
result for spark.rpc.askTimeout, spark.network.timeout or 120 secs.
getMatchingBlockIds Method
getMatchingBlockIds(
filter: BlockId => Boolean,
askSlaves: Boolean): Seq[BlockId]
getMatchingBlockIds posts a GetMatchingBlockIds message to the driver's BlockManagerMaster RPC endpoint and waits for a response which becomes the result, waiting up to spark.rpc.askTimeout, spark.network.timeout or 120 secs.
hasCachedBlocks Method
hasCachedBlocks posts a HasCachedBlocks message to the driver's BlockManagerMaster RPC endpoint and waits for a response which becomes the result.
BlockManagerMasterEndpoint —
BlockManagerMaster RPC Endpoint
BlockManagerMasterEndpoint is the ThreadSafeRpcEndpoint for BlockManagerMaster under
BlockManagerMaster name.
Spark application.
executors).
Refer to Logging.
Caution FIXME
getLocationsMultipleBlockIds Method
Caution FIXME
UpdateBlockInfo
class UpdateBlockInfo(
var blockManagerId: BlockManagerId,
var blockId: BlockId,
var storageLevel: StorageLevel,
var memSize: Long,
var diskSize: Long)
Caution FIXME
RemoveExecutor
RemoveExecutor(execId: String)
When received, executor execId is removed and the response true is sent back.
getPeers finds all the registered BlockManagers (using the blockManagerInfo internal registry), excluding the input blockManagerId and the driver's BlockManager.
GetPeers(blockManagerId: BlockManagerId)
extends ToBlockManagerMaster
BlockManagerHeartbeat
Caution FIXME
GetLocations Message
GetLocations(blockId: BlockId)
extends ToBlockManagerMaster
GetLocationsMultipleBlockIds Message
GetLocationsMultipleBlockIds(blockIds: Array[BlockId])
extends ToBlockManagerMaster
blockIds .
RegisterBlockManager Event
RegisterBlockManager(
blockManagerId: BlockManagerId,
maxMemSize: Long,
sender: RpcEndpointRef)
register records the current time and registers BlockManager (using BlockManagerId)
Note The input maxMemSize is the total available on-heap and off-heap memory for storage on a BlockManager .
If another BlockManager has earlier been registered for the executor, you should see the
following ERROR message in the logs:
ERROR Got two different block manager registrations on same executor - will replace old one [oldId] with new one [id]
blockManagerIdByExecutor
blockManagerInfo
GetRpcHostPortForExecutor
GetMemoryStatus
GetStorageStatus
GetBlockStatus
GetMatchingBlockIds
RemoveShuffle
RemoveBroadcast
RemoveBlock
StopBlockManagerMaster
BlockManagerHeartbeat
HasCachedBlocks
removeExecutor(execId: String)
removeBlockManager(blockManagerId: BlockManagerId)
blockManagerIdByExecutor
blockManagerInfo
It then goes over all the blocks for the BlockManager , and removes the executor for each
block from blockLocations registry.
SparkListenerBlockManagerRemoved(System.currentTimeMillis(), blockManagerId) is
posted to listenerBus.
You should then see the following INFO message in the logs:
RpcEnv
SparkConf
LiveListenerBus
DiskBlockManager
DiskBlockManager creates and maintains the logical mapping between logical blocks and physical on-disk locations.
By default, one block is mapped to one file with a name given by its BlockId. It is however possible to have a block map to only a segment of a file.
localDirs
Used when:
DiskBlockManager is requested to getFile, initialize
subDirs and stop
BlockManager is requested to register the executor’s
BlockManager with an external shuffle server
Refer to Logging.
Caution FIXME
createTempShuffleBlock Method
Caution FIXME
getAllFiles Method
getAllFiles(): Seq[File]
getAllFiles …FIXME
DiskBlockManager creates one or many local directories to store block data (as localDirs).
When not successful, you should see the following ERROR message in the logs and
DiskBlockManager exits with error code 53 .
DiskBlockManager initializes the internal subDirs collection of locks for every local directory
In the end, DiskBlockManager registers a shutdown hook to clean up the local directories for
blocks.
addShutdownHook(): AnyRef
When executed, you should see the following DEBUG message in the logs:
addShutdownHook adds the shutdown hook so it prints the following INFO message and
executes doStop.
doStop(): Unit
doStop deletes the local directories recursively (only when the constructor’s
deleteFilesOnStop is enabled and the parent directories are not registered to be removed at
shutdown).
getConfiguredLocalDirs returns the local directories where Spark can write files.
In non-YARN mode (or for the driver in yarn-client mode), getConfiguredLocalDirs checks the following environment variables (in that order) and returns the value of the first one that is set:
3. MESOS_DIRECTORY environment variable (only when External Shuffle Service is not used)
In the end, when no earlier environment variables were found, getConfiguredLocalDirs uses
spark.local.dir Spark property or falls back on java.io.tmpdir System property.
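For example, spark.local.dir can be set explicitly on a SparkConf (a sketch with made-up directory paths):

import org.apache.spark.SparkConf
val conf = new SparkConf()
  .setAppName("local-dirs-demo")
  .set("spark.local.dir", "/mnt/disk1/spark,/mnt/disk2/spark")  // comma-separated local directories (example paths)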
In YARN mode, getYarnLocalDirs returns the comma-separated local directories (that have already been created and secured so that only the user has access to them).
getYarnLocalDirs throws an Exception with the message Yarn Local dirs can’t be empty
getAllBlocks(): Seq[BlockId]
Internally, getAllBlocks takes the block files and returns their names (as BlockId ).
block data.
If successful, you should see the following INFO message in the logs:
When failed to create a local directory, you should see the following ERROR message in the
logs:
ERROR DiskBlockManager: Failed to create local dir in [rootDir]. Ignoring this directory.
stop(): Unit
stop …FIXME
subDirs: Array[Array[File]]
subDirs is a collection of subDirsPerLocalDir file locks for every local block store directory where DiskBlockManager stores block data (i.e. a two-dimensional array with one row per local directory and subDirsPerLocalDir entries per row).
Settings
Table 2. Spark Properties
spark.diskStore.subDirectories (default: 64 ) The number of …FIXME
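A simplified sketch of how a block file could be spread across the local directories and their subdirectories by hashing the file name (the function name and hashing details below are illustrative, not Spark's exact code):

def pickDirectory(filename: String, localDirs: Array[String], subDirsPerLocalDir: Int): String = {
  val hash = filename.hashCode & Integer.MAX_VALUE               // non-negative hash of the file name
  val dirId = hash % localDirs.length                            // which local directory
  val subDirId = (hash / localDirs.length) % subDirsPerLocalDir  // which subdirectory within it
  s"${localDirs(dirId)}/${"%02x".format(subDirId)}/$filename"
}
pickDirectory("shuffle_0_0_0.data", Array("/tmp/spark1", "/tmp/spark2"), 64)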
BlockInfoManager
BlockInfoManager manages memory blocks (aka memory pages). It controls concurrent
access to memory blocks by read and write locks (for existing and new ones).
Note Locks are the mechanism to control concurrent access to data and prevent destructive interaction between operations that use the same resource.
readLocksByTask
Tracks tasks (by TaskAttemptId ) and the blocks they
locked for reading (as BlockId).
writeLocksByTask
Tracks tasks (by TaskAttemptId ) and the blocks they
locked for writing (as BlockId).
Refer to Logging.
registerTask Method
Caution FIXME
downgradeLock …FIXME
lockForReading(
blockId: BlockId,
blocking: Boolean = true): Option[BlockInfo]
lockForReading locks the blockId memory block for reading when the block was registered earlier and no writer task currently holds a write lock for it.
When executed, lockForReading prints out the following TRACE message to the logs:
lockForReading looks up the metadata of the blockId block (in infos registry).
If no metadata could be found, it returns None which means that the block does not exist or
was removed (and anybody could acquire a write lock).
Otherwise, when the metadata was found, i.e. registered, it checks so-called writerTask.
Only when the block has no writer tasks, a read lock can be acquired. If so, the readerCount
of the block metadata is incremented and the block is recorded (in the internal
readLocksByTask registry). You should see the following TRACE message in the logs:
For blocks with writerTask other than NO_WRITER , when blocking is enabled, lockForReading waits (until another thread invokes the Object.notify or Object.notifyAll methods).
With blocking enabled, it will repeat the waiting-for-read-lock sequence until either None is returned or the lock is obtained.
When blocking is disabled and the lock could not be obtained, None is returned
immediately.
Note lockForReading is a synchronized method, i.e. no two threads can execute it and the other instance methods at the same time.
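The waiting-for-read-lock sequence can be sketched as follows (a self-contained toy model, not Spark's BlockInfoManager; NoWriter, Info and the registry below are made up to mirror the description above):

object ReadLockSketch {
  final val NoWriter = -1L
  class Info { var writerTask: Long = NoWriter; var readerCount: Int = 0 }
  private val infos = scala.collection.mutable.Map.empty[String, Info]

  def lockForReading(blockId: String, blocking: Boolean = true): Option[Info] = synchronized {
    var result: Option[Info] = None
    var done = false
    while (!done) {
      infos.get(blockId) match {
        case None => done = true                    // block does not exist (or was removed)
        case Some(info) if info.writerTask == NoWriter =>
          info.readerCount += 1                     // read lock acquired
          result = Some(info); done = true
        case Some(_) if blocking => wait()          // a writer holds the lock: wait for notifyAll()
        case Some(_) => done = true                 // non-blocking: give up immediately
      }
    }
    result
  }
}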
lockForWriting(
blockId: BlockId,
blocking: Boolean = true): Option[BlockInfo]
When executed, lockForWriting prints out the following TRACE message to the logs:
It looks up blockId in the internal infos registry. When no BlockInfo could be found, None
is returned. Otherwise, blockId block is checked for writerTask to be
BlockInfo.NO_WRITER with no readers (i.e. readerCount is 0 ) and only then the lock is
returned.
If, for some reason, blockId has a writer or the number of readers is positive (i.e.
BlockInfo.readerCount is greater than 0 ), the method will wait (based on the input
blocking flag) and attempt the write lock acquisition process until it finishes with a write
lock.
Note (deadlock possible) The method is synchronized and can block, i.e. wait, which causes the current thread to wait until another thread invokes Object.notify or Object.notifyAll methods for this object.
lockForWriting returns None when there is no blockId in the internal infos registry or when the blocking flag is disabled and the write lock could not be acquired.
lockNewBlockForWriting(
blockId: BlockId,
newBlockInfo: BlockInfo): Boolean
lockNewBlockForWriting obtains a write lock for blockId but only when it could register the block itself (i.e. no other thread created it first).
When executed, lockNewBlockForWriting prints out the following TRACE message to the
logs:
If some other thread has already created the block, it finishes returning false . Otherwise,
when the block does not exist, newBlockInfo is recorded in the internal infos registry and
the block is locked for this client for writing. It then returns true .
currentTaskAttemptId Method
Caution FIXME
unlock releases…FIXME
When executed, unlock starts by printing out the following TRACE message to the logs:
unlock gets the metadata for blockId . It may throw an IllegalStateException if the block was not found.
If the writer task for the block is not NO_WRITER, it becomes so and the blockId block is
removed from the internal writeLocksByTask registry for the current task attempt.
Otherwise, if the writer task is indeed NO_WRITER , it is assumed that the blockId block is
locked for reading. The readerCount counter is decremented for the blockId block and the
read lock removed from the internal readLocksByTask registry for the current task attempt.
In the end, unlock wakes up all the threads waiting for the BlockInfoManager (using Java’s
Object.notifyAll).
Caution FIXME
Caution FIXME
assertBlockIsLockedForWriting Method
Caution FIXME
BlockInfo
readerCount is incremented when a read lock is acquired and decreases when the following
happens:
All locks for the memory block obtained by a task are released.
NON_TASK_WRITER (i.e. -1024 ) for non-task threads, e.g. by a driver thread or by unit
test code.
the task attempt id of the task which currently holds the write lock for this block.
A write lock is requested for a memory block (with no writer and readers)
BlockManagerSlaveEndpoint
BlockManagerSlaveEndpoint is a thread-safe RPC endpoint for remote communication between executors and the driver.
Refer to Logging.
RemoveBlock Message
RemoveBlock(blockId: BlockId)
When a RemoveBlock message comes in, you should see the following DEBUG message in
the logs:
When the computation is successful, you should see the following DEBUG in the logs:
And true response is sent back. You should see the following DEBUG in the logs:
In case of failure, you should see the following ERROR in the logs and the stack trace.
RemoveRdd Message
RemoveRdd(rddId: Int)
When a RemoveRdd message comes in, you should see the following DEBUG message in
the logs:
When the computation is successful, you should see the following DEBUG in the logs:
And the number of blocks removed is sent back. You should see the following DEBUG in the
logs:
In case of failure, you should see the following ERROR in the logs and the stack trace.
RemoveShuffle Message
RemoveShuffle(shuffleId: Int)
When a RemoveShuffle message comes in, you should see the following DEBUG message
in the logs:
If MapOutputTracker was given (when the RPC endpoint was created), it calls
MapOutputTracker to unregister the shuffleId shuffle.
When the computation is successful, you should see the following DEBUG in the logs:
And the result is sent back. You should see the following DEBUG in the logs:
In case of failure, you should see the following ERROR in the logs and the stack trace.
RemoveBroadcast Message
RemoveBroadcast(broadcastId: Long)
When a RemoveBroadcast message comes in, you should see the following DEBUG
message in the logs:
When the computation is successful, you should see the following DEBUG in the logs:
And the result is sent back. You should see the following DEBUG in the logs:
In case of failure, you should see the following ERROR in the logs and the stack trace.
GetBlockStatus Message
GetBlockStatus(blockId: BlockId)
When a GetBlockStatus message comes in, it responds with the result of calling
BlockManager about the status of blockId .
GetMatchingBlockIds Message
TriggerThreadDump Message
When a TriggerThreadDump message comes in, a thread dump is generated and sent back.
BlockManagerSlaveEndpoint uses an internal thread pool ( asyncThreadPool ) for some messages to talk to other Spark services, i.e. BlockManager , MapOutputTracker, ShuffleManager in a non-blocking, asynchronous way.
The reason for the async thread pool is that the block-related operations might take quite
some time and to release the main RPC thread other threads are spawned to talk to the
external services and pass responses on to the clients.
DiskBlockObjectWriter
DiskBlockObjectWriter is a java.io.OutputStream that BlockManager offers for writing blocks
to disk.
DiskBlockObjectWriter can be in the following states (that match the state of the underlying
output streams):
1. Initialized
2. Open
3. Closed
Internal flag…FIXME
initialized
Used when…FIXME
Internal flag…FIXME
hasBeenClosed
Used when…FIXME
Internal flag…FIXME
streamOpen
Used when…FIXME
FIXME
objOut
Used when…FIXME
FIXME
mcs
Used when…FIXME
FIXME
bs
Used when…FIXME
FIXME
objOut
Used when…FIXME
FIXME
blockId
Used when…FIXME
updateBytesWritten Method
Caution FIXME
initialize Method
Caution FIXME
write …FIXME
Caution FIXME
recordWritten Method
Caution FIXME
commitAndGet Method
commitAndGet(): FileSegment
close Method
Caution FIXME
1. file
2. serializerManager — SerializerManager
3. serializerInstance — SerializerInstance
4. bufferSize
5. syncWrites flag
6. writeMetrics — ShuffleWriteMetrics
7. blockId — BlockId
write then writes the key first followed by writing the value .
open(): DiskBlockObjectWriter
open opens DiskBlockObjectWriter , i.e. initializes and re-sets bs and objOut internal output
streams.
Internally, open makes sure that DiskBlockObjectWriter is not closed (i.e. the hasBeenClosed flag is disabled). If it was, open throws an IllegalStateException :
Unless DiskBlockObjectWriter has already been initialized (i.e. initialized flag is enabled),
open initializes it (and turns initialized flag on).
BlockManagerSource — Metrics Source for BlockManager
BlockManagerSource is the metrics source of a BlockManager (registered when SparkContext is created).
You can access the BlockManagerSource metrics using the web UI’s port (as spark.ui.port
configuration property).
ShuffleMetricsSource — Metrics Source of
BlockManager for Shuffle-Related Metrics
ShuffleMetricsSource is the metrics source of a BlockManager for shuffle-related metrics.
shuffleMetricsSource.
When created, ShuffleMetricsSource gets a MetricSet that BlockManager requests from the
ShuffleClient (only when in a non-local / cluster mode).
ShuffleMetricsSource is registered under the following source name per the type of a
BlockManager :
Since Executor does not have a web UI attached you cannot access the
Note
metrics using the HTTP protocol (through MetricsServlet JSON metrics sink).
Source name
StorageStatus
StorageStatus is a developer API that Spark uses to pass "just enough" information about the BlockManagers in a Spark application.
Note There are two ways to access StorageStatus about all the known BlockManagers in a Spark application: SparkContext.getExecutorStorageStatus, or being a SparkListener and intercepting onBlockManagerAdded and onBlockManagerRemoved events.
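A quick sketch of the first approach (sc is assumed to be an existing SparkContext; getExecutorStorageStatus is deprecated in later Spark 2.x releases):

sc.getExecutorStorageStatus.foreach { status =>
  println(s"${status.blockManagerId} maxMem=${status.maxMem} memUsed=${status.memUsed} diskUsed=${status.diskUsed}")
}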
updateStorageInfo Method
Caution FIXME
BlockManagerId
Maximum memory — total available on-heap and off-heap memory for storage on the
BlockManager
rddBlocksById gives the blocks (as BlockId with their status as BlockStatus ) that belong
to rddId RDD.
Internally, removeBlock updates block status of blockId (to be empty, i.e. removed).
removeBlock branches off per the type of BlockId , i.e. RDDBlockId or not.
For a RDDBlockId , removeBlock finds the RDD in _rddBlocks and removes the blockId .
removeBlock removes the RDD (from _rddBlocks) completely, if there are no more blocks
registered.
ManagedBuffer
ManagedBuffer is the base of…FIXME
package org.apache.spark.network.buffer;
Table 2. ManagedBuffers
ManagedBuffer Description
BlockManagerManagedBuffer
FileSegmentManagedBuffer
NettyManagedBuffer
NioManagedBuffer
MapOutputTracker — Shuffle Map Output Registry
MapOutputTracker is a Spark service that tracks the shuffle map outputs (with information about the BlockManager and estimated size of the reduce blocks per shuffle).
There are two concrete MapOutputTrackers , i.e. one for the driver and another for executors:
Given the different runtime environments of the driver and executors, accessing the current
MapOutputTracker is possible using SparkEnv.
SparkEnv.get.mapOutputTracker
epoch
Starts from 0 when MapOutputTracker is created.
Can be updated (on MapOutputTrackerWorkers ) or
incremented (on the driver’s MapOutputTrackerMaster ).
epochLock FIXME
trackerEndpoint Property
trackerEndpoint is a RpcEndpointRef that MapOutputTracker uses to send RPC messages.
trackerEndpoint is initialized when SparkEnv is created for the driver and executors.
Caution FIXME
deserializeMapStatuses Method
Caution FIXME
sendTracker Method
Caution FIXME
serializeMapStatuses Method
Caution FIXME
getStatistics returns a MapOutputStatistics which is simply a pair of the shuffle id (of the
input ShuffleDependency ) and the total sums of estimated sizes of the reduce shuffle blocks
from all the BlockManagers.
Internally, getStatistics finds map outputs for the input ShuffleDependency and calculates
the total sizes for the estimated sizes of the reduce block (in bytes) for every MapStatus and
partition.
Note The internal totalSizes array has the number of elements as specified by the number of partitions of the Partitioner of the input ShuffleDependency . totalSizes contains elements as a sum of the estimated size of the block for partition in a BlockManager (for a MapStatus ).
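A small sketch of the summation with made-up sizes (each inner array stands for one MapStatus with the estimated bytes per reduce partition):

val mapOutputs = Array(
  Array(10L, 0L, 5L),   // map task 0
  Array( 7L, 3L, 5L))   // map task 1
val totalSizes = mapOutputs.transpose.map(_.sum)  // Array(17, 3, 10): estimated bytes per reduce partition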
Caution FIXME How do the start and end partitions influence the return value?
getMapSizesByExecutorId returns a collection of BlockManagerIds with their blocks and sizes.
When executed, you should see the following DEBUG message in the logs:
Note getMapSizesByExecutorId gets the map outputs for all the partitions (despite the method’s signature).
In the end, getMapSizesByExecutorId converts shuffle map outputs (as MapStatuses ) into the
collection of BlockManagerIds with their blocks and sizes.
getEpoch: Long
Note getEpoch is used when DAGScheduler is notified that an executor was lost and when TaskSetManager is created (and sets the epoch for the tasks in a TaskSet).
updateEpoch updates epoch when the input newEpoch is greater (and hence more recent) than the current epoch.
stop Method
stop(): Unit
Note stop is used exclusively when SparkEnv stops (and stops all the services, MapOutputTracker including).
getStatuses finds MapStatuses for the input shuffleId in the mapStatuses internal cache
and, when not available, fetches them from a remote MapOutputTrackerMaster (using RPC).
Internally, getStatuses first queries the mapStatuses internal cache and returns the map
outputs if found.
If not found (in the mapStatuses internal cache), you should see the following INFO
message in the logs:
INFO Don't have map outputs for shuffle [id], fetching them
If some other process fetches the map outputs for the shuffleId (as recorded in fetching
internal registry), getStatuses waits until it is done.
When no other process fetches the map outputs, getStatuses registers the input
shuffleId in fetching internal registry (of shuffle map outputs being fetched).
getStatuses sends a GetMapOutputStatuses RPC remote message for the input shuffleId to the trackerEndpoint.
Note getStatuses requests shuffle map outputs remotely within a timeout and with retries. Refer to RpcEndpointRef.
getStatuses deserializes the map output statuses and records the result in the mapStatuses internal cache. You should see the following DEBUG message in the logs:
DEBUG Fetching map output statuses for shuffle [id] took [time] ms
If getStatuses could not find the map output locations for the input shuffleId (locally and remotely), you should see the following ERROR message in the logs and getStatuses throws a MetadataFetchFailedException .
convertMapStatuses(
shuffleId: Int,
startPartition: Int,
endPartition: Int,
statuses: Array[MapStatus]): Seq[(BlockManagerId, Seq[(BlockId, Long)])]
convertMapStatuses iterates over the input statuses array (of MapStatus entries indexed by map id) and creates a collection of BlockManagerId (for each MapStatus entry) with a ShuffleBlockId (with the input shuffleId , a mapId , and partition ranging from the input startPartition to endPartition ) and the estimated size of the reduce block for every status and partition.
For any empty MapStatus , you should see the following ERROR message in the logs:
askTracker[T](message: Any): T
askTracker sends the message to trackerEndpoint RpcEndpointRef and waits for a result.
When an exception happens, you should see the following ERROR message in the logs and
askTracker throws a SparkException .
MapOutputTrackerMaster — MapOutputTracker
For Driver
MapOutputTrackerMaster is the MapOutputTracker for the driver.
Note There is currently a hardcoded limit of map and reduce tasks above which Spark does not assign preferred locations aka locality preferences based on map output sizes — 1000 for map and reduce each.
entries in mapStatuses .
Refer to Logging.
removeBroadcast Method
Caution FIXME
clearCachedBroadcast Method
Caution FIXME
post Method
Caution FIXME
stop Method
Caution FIXME
unregisterMapOutput Method
Caution FIXME
You should see the following DEBUG message in the logs for entries being removed:
1. SparkConf
2. broadcastManager — BroadcastManager
MapOutputTrackerMaster initializes the internal registries and counters and starts map-
output-dispatcher threads.
threadpool: ThreadPoolExecutor
threadpool is a daemon fixed thread pool with the map-output-dispatcher thread name prefix.
getPreferredLocationsForShuffle finds the locations (i.e. BlockManagers) with the most map outputs for the input ShuffleDependency and partition, but only when the number of partitions of the RDD of the input ShuffleDependency and the number of partitions in the partitioner of the input ShuffleDependency are both less than 1000 .
Note The thresholds for the number of partitions in the RDD and of the partitioner when computing the preferred locations are 1000 and are not configurable.
incrementEpoch(): Unit
getLocationsWithLargestOutputs(
shuffleId: Int,
reducerId: Int,
numReducers: Int,
fractionThreshold: Double): Option[Array[BlockManagerId]]
getLocationsWithLargestOutputs returns BlockManagerIds with the largest size (of all the
shuffle blocks they manage) above the input fractionThreshold (given the total size of all
the shuffle blocks for the shuffle across all BlockManagers).
Internally, getLocationsWithLargestOutputs traverses the mapStatuses internal cache and builds a mapping between BlockManagerId and the cumulative sum of shuffle blocks across BlockManagers.
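The fraction-threshold selection can be sketched in plain Scala as follows (illustrative location names and sizes; not Spark's code):

def locationsWithLargestOutputs(sizesByLocation: Map[String, Long],
                                fractionThreshold: Double): Option[Seq[String]] = {
  val total = sizesByLocation.values.sum.toDouble
  val big = sizesByLocation.collect {
    case (location, size) if size / total >= fractionThreshold => location
  }.toSeq
  if (big.nonEmpty) Some(big) else None
}
locationsWithLargestOutputs(Map("exec-1" -> 80L, "exec-2" -> 15L, "exec-3" -> 5L), 0.2)
// Some(List(exec-1)): only exec-1 holds at least 20% of the partition's map output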
registerShuffle adds a lock in the shuffleIdLocks internal registry (without using it).
registerMapOutputs(
shuffleId: Int,
statuses: Array[MapStatus],
changeEpoch: Boolean = false): Unit
registerMapOutputs registers the input statuses (as the shuffle map output) with the input shuffleId (in the mapStatuses internal cache) and increments the epoch when the input changeEpoch flag is enabled (it is not by default).
getSerializedMapOutputStatuses saves the serialized map output statuses in the cachedSerializedStatuses internal cache if the epoch has not changed in the meantime. getSerializedMapOutputStatuses also saves their broadcast version in the cachedSerializedBroadcast internal cache.
If the epoch has changed in the meantime, the serialized map output statuses and their broadcast version are not saved, and you should see the following INFO message in the logs:
checkCachedStatuses(): Boolean
checkCachedStatuses is an internal method that getSerializedMapOutputStatuses uses to do some bookkeeping (when the epoch and cacheEpoch differ) and to set the local statuses , retBytes and epochGotten (that getSerializedMapOutputStatuses uses).
Internally, checkCachedStatuses acquires the epochLock lock and compares epoch with the cached cacheEpoch .
checkCachedStatuses gets the serialized map output statuses for the shuffleId (of the
owning getSerializedMapOutputStatuses).
When the serialized map output status is found, checkCachedStatuses saves it in a local
retBytes and returns true .
When not found, you should see the following DEBUG message in the logs:
checkCachedStatuses uses the mapStatuses internal cache to get the map output statuses for the shuffleId and sets them to a local statuses . checkCachedStatuses sets the local epochGotten to the current epoch and returns false .
MessageLoop waits until a GetMapOutputMessage arrives.
Unless PoisonPill is processed, you should see the following DEBUG message in the logs:
DEBUG Handling request to send map output locations for shuffle [shuffleId] to [hostPort]
MessageLoop replies back with the serialized map output statuses for the shuffleId (from the GetMapOutputMessage request).
PoisonPill Message
PoisonPill is a GetMapOutputMessage (with -99 as shuffleId ) that indicates that the MessageLoop should stop.
Settings
spark.shuffle.reduceLocality.enabled (default: true )
Controls whether to compute locality preferences for reduce tasks. When enabled (i.e. true ), MapOutputTrackerMaster computes the preferred hosts on which to run a given map output partition in a given shuffle, i.e. the nodes that the most outputs for that partition are on.
MapOutputTrackerMasterEndpoint
MapOutputTrackerMasterEndpoint is a RpcEndpoint for MapOutputTrackerMaster.
GetMapOutputStatuses
StopMapOutputTracker
Refer to Logging.
1. rpcEnv — RpcEnv
2. tracker — MapOutputTrackerMaster
3. conf — SparkConf
When created, you should see the following DEBUG message in the logs:
DEBUG init
GetMapOutputStatuses Message
GetMapOutputStatuses(shuffleId: Int)
extends MapOutputTrackerMessage
INFO Asked to send map output locations for shuffle [shuffleId] to [hostPort]
StopMapOutputTracker Message
StopMapOutputTracker
extends MapOutputTrackerMessage
When StopMapOutputTracker arrives, you should see the following INFO message in the
logs:
MapOutputTrackerMasterEndpoint confirms the request (by replying true ) and stops itself
MapOutputTrackerWorker — MapOutputTracker
for Executors
A MapOutputTrackerWorker is the MapOutputTracker for executors.
MapOutputTrackerWorker uses Java’s thread-safe java.util.concurrent.ConcurrentHashMap for the mapStatuses internal cache and any lookup cache miss triggers a fetch from the driver’s MapOutputTrackerMaster.
Refer to Logging.
ShuffleManager — Pluggable Shuffle Systems
The driver and executor access their ShuffleManager instances using SparkEnv.
The driver registers shuffles with a shuffle manager, and executors (or tasks running locally
in the driver) can ask to read and write data.
There can be many shuffle services running simultaneously and a driver registers with all of
them when CoarseGrainedSchedulerBackend is used.
ShuffleManager Contract
trait ShuffleManager {
def registerShuffle[K, V, C](
shuffleId: Int,
numMaps: Int,
dependency: ShuffleDependency[K, V, C]): ShuffleHandle
def getWriter[K, V](
handle: ShuffleHandle,
mapId: Int,
context: TaskContext): ShuffleWriter[K, V]
def getReader[K, C](
handle: ShuffleHandle,
startPartition: Int,
endPartition: Int,
context: TaskContext): ShuffleReader[K, C]
def unregisterShuffle(shuffleId: Int): Boolean
def shuffleBlockResolver: ShuffleBlockResolver
def stop(): Unit
}
registerShuffle
Executed when ShuffleDependency is created and
registers itself.
getWriter
Used when a ShuffleMapTask runs (and requests a
ShuffleWriter to write records for a partition).
Used when:
1. BlockManager requests a ShuffleBlockResolver
shuffleBlockResolver
capable of retrieving shuffle block data (for a
ShuffleBlockId)
2. BlockManager requests a ShuffleBlockResolver for
local shuffle block data as bytes.
Settings
SortShuffleManager — The Default Shuffle System
SortShuffleManager is the one and only ShuffleManager in Spark with the short name sort or tungsten-sort .
shuffleBlockResolver
NOTE: shuffleBlockResolver is part of ShuffleManager
contract.
Beside the uses due to the contract,
shuffleBlockResolver is used in unregisterShuffle and
stopped in stop .
Refer to Logging.
unregisterShuffle Method
Caution FIXME
SortShuffleManager makes sure that the spark.shuffle.spill Spark property is enabled. If not, you should see the following WARN message in the logs:
registerShuffle[K, V, C](
shuffleId: Int,
numMaps: Int,
dependency: ShuffleDependency[K, V, C]): ShuffleHandle
3. BaseShuffleHandle
getWriter[K, V](
handle: ShuffleHandle,
mapId: Int,
context: TaskContext): ShuffleWriter[K, V]
Internally, getWriter makes sure that a ShuffleHandle is associated with its numMaps in
numMapsForShuffle internal registry.
getReader[K, C](
handle: ShuffleHandle,
startPartition: Int,
endPartition: Int,
context: TaskContext): ShuffleReader[K, C]
getReader returns a new BlockStoreShuffleReader passing all the input parameters on to it.
stop(): Unit
stop stops the IndexShuffleBlockResolver (available as the shuffleBlockResolver internal reference).
2. mapSideCombine flag is disabled (i.e. false ) but the number of partitions (of the
canUseSerializedShuffle condition holds (i.e. is positive) when all of the following hold
3. The number of shuffle output partitions of the input ShuffleDependency is at most the
supported maximum number (which is (1 << 24) - 1 , i.e. 16777215 ).
You should see the following DEBUG message in the logs when canUseSerializedShuffle
holds:
Otherwise, canUseSerializedShuffle does not hold and you should see one of the following
DEBUG messages:
DEBUG Can't use serialized shuffle for shuffle [id] because the serializer, [name], does not support object relocation
DEBUG SortShuffleManager: Can't use serialized shuffle for shuffle [id] because an aggregator is defined
DEBUG Can't use serialized shuffle for shuffle [id] because it has more than [number] partitions
Settings
Table 2. Spark Properties
spark.shuffle.spill (default: true )
No longer in use. When false the following WARN shows in the logs when SortShuffleManager is created:
WARN SortShuffleManager: spark.shuffle.spill was set to false, but this configuration is ignored as of Spark 1.6+. Shuffle will continue to spill to disk when necessary.
ExternalShuffleService
ExternalShuffleService is an external shuffle service that serves shuffle blocks from outside an Executor process. It runs as a standalone application and manages shuffle output files so they are available to executors at all times. As the shuffle output files are managed externally to the executors, it offers uninterrupted access to the shuffle output files regardless of executors being killed or going down.
Refer to Logging.
start-shuffle-service.sh
$ ./sbin/start-shuffle-service.sh
starting org.apache.spark.deploy.ExternalShuffleService, logging
to ...logs/spark-jacek-
org.apache.spark.deploy.ExternalShuffleService-1-
japila.local.out
$ tail -f ...logs/spark-jacek-
org.apache.spark.deploy.ExternalShuffleService-1-
japila.local.out
Spark Command:
/Library/Java/JavaVirtualMachines/Current/Contents/Home/bin/java
-cp
/Users/jacek/dev/oss/spark/conf/:/Users/jacek/dev/oss/spark/asse
mbly/target/scala-2.11/jars/* -Xmx1g
org.apache.spark.deploy.ExternalShuffleService
========================================
Using Spark's default log4j profile: org/apache/spark/log4j-
defaults.properties
16/06/07 08:02:02 INFO ExternalShuffleService: Started daemon
with process name: [email protected]
16/06/07 08:02:03 INFO ExternalShuffleService: Starting shuffle
service on port 7337 with useSasl = false
Refer to Logging.
You should also see the following messages when a SparkContext is closed:
start(): Unit
When start is executed, you should see the following INFO message in the logs:
The internal server reference (a TransportServer ) is created (which will attempt to bind to
port ).
stop(): Unit
stop closes the internal server reference and clears it (i.e. sets it to null ).
ExternalShuffleBlockHandler
ExternalShuffleBlockHandler is a RpcHandler (i.e. a handler for sendRPC() messages sent
by TransportClient s).
Refer to Logging.
handleMessage Method
handleMessage(
BlockTransferMessage msgObj,
TransportClient client,
RpcResponseCallback callback)
OpenBlocks
RegisterExecutor
OpenBlocks
It then gets block data for each block id in blockIds (using ExternalShuffleBlockResolver).
Finally, it registers a stream and does callback.onSuccess with a serialized byte buffer (for
the streamId and the number of blocks in msg ).
TRACE Registered streamId [streamId] with [length] buffers for client [clientId] from
host [remoteAddress]
RegisterExecutor
RegisterExecutor
ExternalShuffleBlockResolver
Caution FIXME
getBlockData Method
getBlockData handles four-part shuffle block ids with the other three parts being shuffleId , mapId , and reduceId .
It throws an IllegalArgumentException for block ids with less than four parts:
Settings
Table 1. Spark Properties
spark.shuffle.service.enabled (default: false )
Enables the External Shuffle Service. When true , the driver registers itself with the shuffle service. Used to enable dynamic allocation of executors and in CoarseMesosSchedulerBackend to instantiate MesosExternalShuffleClient. Explicitly disabled for LocalSparkCluster (and any attempts to set it are ignored).
spark.shuffle.service.port (default: 7337 )
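A sketch of turning the service on for an application via SparkConf (the port value shown is the default from the table above):

import org.apache.spark.SparkConf
val conf = new SparkConf()
  .set("spark.shuffle.service.enabled", "true")
  .set("spark.shuffle.service.port", "7337")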
OneForOneStreamManager
Caution FIXME
registerStream Method
Caution FIXME
ShuffleBlockResolver
ShuffleBlockResolver is used to find shuffle block data.
ShuffleBlockResolver Contract
trait ShuffleBlockResolver {
def getBlockData(blockId: ShuffleBlockId): ManagedBuffer
def stop(): Unit
}
IndexShuffleBlockResolver
IndexShuffleBlockResolver is the one and only ShuffleBlockResolver in Spark.
IndexShuffleBlockResolver manages shuffle block data and uses shuffle index files for
faster shuffle data access. IndexShuffleBlockResolver can write a shuffle block index and
data file, find and remove shuffle index and data files per shuffle and map.
Note Shuffle block data files are more often referred to as map output files.
1. SparkConf,
writeIndexFileAndCommit(
shuffleId: Int,
mapId: Int,
lengths: Array[Long],
dataTmp: File): Unit
Internally, writeIndexFileAndCommit first finds the index file for the input shuffleId and
mapId .
writeIndexFileAndCommit creates a temporary file for the index file (in the same directory) and writes offsets to it (as the moving sum of the input lengths , starting from 0 and ending with the final offset that marks the end of the output data file).
Note The offsets are derived exactly from the input lengths (as their cumulative sums).
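A tiny sketch of the moving sum with made-up per-partition lengths:

val lengths = Array(10L, 0L, 25L, 5L)      // bytes written per reduce partition
val offsets = lengths.scanLeft(0L)(_ + _)  // Array(0, 10, 10, 35, 40): what goes into the index file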
writeIndexFileAndCommit then requests the shuffle block data file for the input shuffleId and mapId .
writeIndexFileAndCommit checks if the given index and data files match each other (aka
consistency check).
If the consistency check fails, it means that another attempt for the same task has already
written the map outputs successfully and so the input dataTmp and temporary index files are
deleted (as no longer correct).
If the consistency check succeeds, the existing index and data files are deleted (if they exist)
and the temporary index and data files become "official", i.e. renamed to their final names.
or
Internally, getBlockData finds the index file for the input shuffle blockId .
getBlockData reads the start and end offsets from the index file and then creates a
FileSegmentManagedBuffer to read the data file for the offsets (using transportConf internal
property).
Note The start and end offsets are the offset and the length of the file segment for the block data.
checkIndexAndDataFile first checks if the size of the input index file is exactly the input blocks plus 1, multiplied by 8 (i.e. one long per block plus the leading 0 offset).
checkIndexAndDataFile returns null when the numbers, and hence the shuffle index and data files, do not match.
checkIndexAndDataFile reads the shuffle index file and converts the offsets into lengths of
each block.
checkIndexAndDataFile makes sure that the size of the input shuffle data file is exactly the sum of the block lengths.
checkIndexAndDataFile returns the block lengths if the numbers match, and null otherwise.
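A sketch of the consistency check with made-up numbers (assuming the index layout of a leading 0 followed by cumulative offsets, 8 bytes per entry):

val offsets = Array(0L, 10L, 10L, 35L, 40L)                                 // read from the index file
val lengths = offsets.sliding(2).map { case Array(a, b) => b - a }.toArray  // Array(10, 0, 25, 5)
val expectedIndexFileSize = (lengths.length + 1) * 8L                       // 40 bytes: (blocks + 1) longs
val expectedDataFileSize  = lengths.sum                                     // 40 bytes: sum of the block lengths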
getIndexFile then requests DiskBlockManager for the shuffle index file given the input shuffleId and mapId .
getDataFile then requests DiskBlockManager for the shuffle block data file given the input shuffleId and mapId .
removeDataByMap finds and deletes the shuffle data file for the input shuffleId and mapId first, followed by the shuffle index file.
When removeDataByMap fails deleting the files, you should see a WARN message in the logs.
or
stop(): Unit
ShuffleWriter
Caution FIXME
ShuffleWriter Contract
BypassMergeSortShuffleWriter
BypassMergeSortShuffleWriter is a ShuffleWriter that ShuffleMapTask uses to write records
into one single shuffle block data file when the task runs for a ShuffleDependency .
partitionWriters FIXME
partitionWriterSegments FIXME
IndexShuffleBlockResolver.
shuffleBlockResolver Initialized when BypassMergeSortShuffleWriter is created.
Used when BypassMergeSortShuffleWriter writes records.
Internal flag that controls the use of Java New I/O when
BypassMergeSortShuffleWriter concatenates per-partition
shuffle files into a single shuffle block data file.
transferToEnabled
Specified when BypassMergeSortShuffleWriter is created
and controlled by spark.file.transferTo Spark property.
Enabled by default.
Refer to Logging.
1. BlockManager
2. IndexShuffleBlockResolver
3. BypassMergeSortShuffleHandle
4. mapId
5. TaskContext
6. SparkConf
Internally, when the input records iterator has no more records, write creates an empty
partitionLengths internal array of numPartitions size.
write then requests the internal IndexShuffleBlockResolver to write shuffle index and data
files (with dataTmp as null ) and sets the internal mapStatus (with the address of
BlockManager in use and partitionLengths).
However, when there are records to write, write creates a new Serializer.
For every partition, write requests DiskBlockManager for a temporary shuffle block and its
file.
write takes records serially, i.e. record by record, and, after computing the partition for a record, requests the corresponding DiskBlockObjectWriter to write it.
After all the records have been written, write requests every DiskBlockObjectWriter to
commitAndGet and saves the commit results in partitionWriterSegments. write closes
every DiskBlockObjectWriter .
write requests IndexShuffleBlockResolver for the shuffle block data file for shuffleId
and mapId .
write creates a temporary shuffle block data file and writes the per-partition shuffle files to
it.
In the end, write requests IndexShuffleBlockResolver to write shuffle index and data files
for the shuffleId and mapId (with partitionLengths and the temporary file) and creates a
new mapStatus (with the location of the BlockManager and partitionLengths).
writePartitionedFile creates a file output stream for the input outputFile in append
mode.
For every numPartitions partition, writePartitionedFile takes the file from the FileSegment
(from partitionWriterSegments) and creates a file input stream to read raw bytes.
writePartitionedFile then copies the raw bytes from each partition segment input stream to outputFile (possibly using Java New I/O per transferToEnabled flag set when BypassMergeSortShuffleWriter was created) and records the length of the shuffle data file (in the internal lengths array).
In the end, writePartitionedFile increments shuffle write time, clears partitionWriters array and returns the lengths of the shuffle data files per partition.
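A minimal, hypothetical sketch of that concatenation step (not Spark's code; it only illustrates appending per-partition segment files to one output file while recording their lengths):

import java.io.{File, FileInputStream, FileOutputStream}

// Hypothetical: append each per-partition segment file to a single output file,
// returning the number of bytes contributed by every partition (the lengths array).
def concatPartitionFiles(segments: Seq[File], output: File): Array[Long] = {
  val out = new FileOutputStream(output, /* append = */ true)
  try {
    segments.map { segment =>
      val in = new FileInputStream(segment)
      try {
        var copied = 0L
        val buf = new Array[Byte](64 * 1024)
        var n = in.read(buf)
        while (n != -1) { out.write(buf, 0, n); copied += n; n = in.read(buf) }
        copied
      } finally in.close()
    }.toArray
  } finally out.close()
}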
copyStream(
in: InputStream,
out: OutputStream,
closeStreams: Boolean = false,
transferToEnabled: Boolean = false): Long
copyStream branches off depending on the type of in and out streams, i.e. whether they are both plain file streams.
If they are both file streams with transferToEnabled enabled, copyStream gets their FileChannels and transfers bytes from the input file to the output file and counts the number of bytes transferred.
If either in or out is not a file stream or the transferToEnabled flag is disabled (default), copyStream reads data from in to write to out and counts the number of bytes written.
copyStream can optionally close in and out streams (depending on the input closeStreams flag).
Tip: Visit the official web site of JSR 51: New I/O APIs for the Java Platform and read up on the java.nio package.
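A small illustration of the NIO fast path described above, assuming both ends are plain files (this is a generic FileChannel.transferTo sketch, not Spark's copyStream):

import java.io.{FileInputStream, FileOutputStream}

// Copy a whole file channel-to-channel; transferTo may copy fewer bytes than
// requested, so loop until everything has been transferred.
def transferFile(in: FileInputStream, out: FileOutputStream): Long = {
  val src = in.getChannel
  val dst = out.getChannel
  val size = src.size()
  var count = 0L
  while (count < size) {
    count += src.transferTo(count, size - count, dst)
  }
  count
}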
SortShuffleWriter — Fallback ShuffleWriter
SortShuffleWriter is a ShuffleWriter that is used when SortShuffleManager returns a BaseShuffleHandle for a ShuffleDependency (i.e. when none of the more specialized shuffle writers can be used).
Refer to Logging.
SortShuffleWriter takes the following when created:
1. IndexShuffleBlockResolver
2. BaseShuffleHandle
3. mapId
4. TaskContext
write requests IndexShuffleBlockResolver for the shuffle data output file (for the
ShuffleDependency and mapId ) and creates a temporary file for the shuffle data file in the
same directory.
write creates a ShuffleBlockId (for the ShuffleDependency and mapId and the special NOOP_REDUCE_ID reduce id).
write requests ExternalSorter to write all the records (previously inserted in) into the temporary partitioned file.
write creates a MapStatus (with the location of the shuffle server that serves the
executor’s shuffle files and the sizes of the shuffle partitioned file’s partitions).
Note: write does not handle exceptions so when they occur, they will break the processing.
In the end, write deletes the temporary partitioned file. You may see the following ERROR
message in the logs if write did not manage to do so:
Note: stop is part of the ShuffleWriter contract to close itself (and return the last written MapStatus).
stop turns stopping flag on and returns the internal mapStatus if the input success is
enabled.
Otherwise, when stopping flag is already enabled or the input success is disabled, stop
returns no MapStatus (i.e. None ).
In the end, stop stops the ExternalSorter and increments the shuffle write time task
metrics.
UnsafeShuffleWriter — ShuffleWriter for SerializedShuffleHandle
UnsafeShuffleWriter is a ShuffleWriter that is used to write records (i.e. key-value pairs) for a SerializedShuffleHandle.
UnsafeShuffleWriter can use a specialized NIO-based merge procedure that avoids extra
serialization/deserialization.
Refer to Logging.
mergeSpillsWithTransferTo Method
Caution FIXME
forceSorterToSpill Method
Caution FIXME
mergeSpills Method
Caution FIXME
updatePeakMemoryUsed Method
Caution FIXME
Internally, write traverses the input sequence of records (for a RDD partition) and inserts them into the sorter (insertRecordIntoSorter) one by one. When all the records have been processed, write closes internal resources and merges the spill files into the final output (closeAndWriteOutput).
Caution FIXME
Caution FIXME
1. BlockManager
2. IndexShuffleBlockResolver
3. TaskMemoryManager
4. SerializedShuffleHandle
5. mapId
6. TaskContext
7. SparkConf
UnsafeShuffleWriter makes sure that the number of shuffle output partitions (of the ShuffleDependency) is at most 16777215 .
If the number of shuffle output partitions is greater than the maximum, UnsafeShuffleWriter throws an IllegalArgumentException :
UnsafeShuffleWriter can only be used for shuffles with at most 16777215 reduce partitions
open makes sure that the internal reference to ShuffleExternalSorter (as sorter ) is not defined yet and creates one.
open creates a new byte array output stream (as serBuffer ) with the buffer capacity of
1M .
open creates a new SerializationStream for the new byte array output stream using
SerializerInstance.
insertRecordIntoSorter calculates the partition for the key of the input record .
insertRecordIntoSorter then writes the key and the value of the input record to the internal serialization buffer and inserts the serialized record into ShuffleExternalSorter (with the partition as the record's metadata).
closeAndWriteOutput creates a temporary file, merges the spill files into it and deletes them afterwards, and in the end requests IndexShuffleBlockResolver to write the shuffle index and data files.
If there is an issue with deleting spill files, you should see the following ERROR message in
the logs:
If there is an issue with deleting the temporary file, you should see the following ERROR
message in the logs:
Settings
Table 2. Spark Properties
spark.file.transferTo (default: true ) — Controls whether…FIXME
spark.shuffle.sort.initialBufferSize (default: 4096 bytes) — Default initial sort buffer size
BaseShuffleHandle — Fallback Shuffle Handle
BaseShuffleHandle takes the following when created:
1. shuffleId
2. numMaps
3. ShuffleDependency
// Start a Spark application, e.g. spark-shell, with the Spark properties to trigger selection of BaseShuffleHandle:
// 1. spark.shuffle.spill.numElementsForceSpillThreshold=1
// 2. spark.shuffle.sort.bypassMergeThreshold=1

scala> rdd.dependencies
DEBUG SortShuffleManager: Can't use serialized shuffle for shuffle 0 because an aggregator is defined
res0: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.ShuffleDependency@1160c54b)

scala> rdd.getNumPartitions
res1: Int = 2

// mapSideCombine is disabled
scala> shuffleDep.mapSideCombine
res2: Boolean = false

// aggregator defined
scala> shuffleDep.aggregator
res3: Option[org.apache.spark.Aggregator[Int,Int,Int]] = Some(Aggregator(<function1>,<function2>,<function2>))

scala> shuffleDep.shuffleHandle
res5: org.apache.spark.shuffle.ShuffleHandle = org.apache.spark.shuffle.BaseShuffleHandle@22b0fe7e
BypassMergeSortShuffleHandle — Marker Interface for Bypass Merge Sort Shuffle Handles
BypassMergeSortShuffleHandle is a BaseShuffleHandle with no additional methods or fields and serves only to identify the choice of bypass merge sort shuffle.
scala> rdd.dependencies
res0: Seq[org.apache.spark.Dependency[_]] = List(org.apache.spark.ShuffleDependency@655875bb)

scala> rdd.getNumPartitions
res1: Int = 8

// mapSideCombine is disabled
scala> shuffleDep.mapSideCombine
res2: Boolean = false

// aggregator defined
scala> shuffleDep.aggregator
res3: Option[org.apache.spark.Aggregator[Int,Int,Int]] = Some(Aggregator(<function1>,<function2>,<function2>))

// spark.shuffle.sort.bypassMergeThreshold == 200
// the number of reduce partitions < spark.shuffle.sort.bypassMergeThreshold
scala> shuffleDep.partitioner.numPartitions
res4: Int = 8

scala> shuffleDep.shuffleHandle
res5: org.apache.spark.shuffle.ShuffleHandle = org.apache.spark.shuffle.sort.BypassMergeSortShuffleHandle@68893394
SerializedShuffleHandle — Marker Interface for Serialized Shuffle Handles
SerializedShuffleHandle is a BaseShuffleHandle (with no additional methods or fields) that serves only to identify the choice of the serialized shuffle, i.e. when UnsafeShuffleWriter can be used.
ShuffleReader
ShuffleReader is a contract of shuffle readers to read combined key-value records for a
reduce task.
package org.apache.spark.shuffle
trait ShuffleReader[K, C] {
def read(): Iterator[Product2[K, C]]
}
BlockStoreShuffleReader
BlockStoreShuffleReader is the one and only known ShuffleReader that reads the combined
key-values for the reduce task (for a range of start and end reduce partitions) from a shuffle
by requesting them from block managers.
Note: read uses MapOutputTracker to find the BlockManagers with the shuffle blocks and sizes to create ShuffleBlockFetcherIterator .
read updates the context task metrics for each record read.
If the ShuffleDependency has an Aggregator defined, read wraps the current iterator
inside an iterator defined by Aggregator.combineCombinersByKey (for mapSideCombine
enabled) or Aggregator.combineValuesByKey otherwise.
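As a rough sketch of that branching, with dep (the ShuffleDependency), recordIter (the fetched records) and context (the TaskContext) assumed to be in scope as inside BlockStoreShuffleReader.read — illustrative only, not the exact Spark code:

// Illustrative: pick the aggregation strategy based on the ShuffleDependency.
val aggregatedIter =
  dep.aggregator match {
    case Some(agg) if dep.mapSideCombine =>
      // map-side combine already ran, so fetched values are partial combiners to merge
      agg.combineCombinersByKey(recordIter, context)
    case Some(agg) =>
      // no map-side combine: combine raw values on the reduce side
      agg.combineValuesByKey(recordIter, context)
    case None =>
      recordIter // no aggregation defined; hand records through unchanged
  }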
1. Creates an ExternalSorter
Settings
BlockStoreShuffleReader takes the following when created:
BaseShuffleHandle
TaskContext
SerializerManager
BlockManager
MapOutputTracker
ShuffleBlockFetcherIterator
ShuffleBlockFetcherIterator is a Scala Iterator that fetches shuffle blocks (aka shuffle map outputs) from local and remote BlockManagers and gives them to a caller as (BlockId, InputStream) pairs so the caller can handle shuffle blocks in a pipelined fashion as they are received.
ShuffleBlockFetcherIterator throttles the remote fetches so they do not exceed maxBytesInFlight and hence do not use too much memory.
maxBytesInFlight — The maximum size (in bytes) of all the remote shuffle blocks to fetch. Set when ShuffleBlockFetcherIterator is created.
Refer to Logging.
fetchUpToMaxBytes Method
977
ShuffleBlockFetcherIterator
Caution FIXME
ShuffleBlockFetcherIterator takes the following when created:
TaskContext
ShuffleClient
BlockManager
Function to wrap the returned input stream (as (BlockId, InputStream) ⇒ InputStream )
maxBlocksInFlightPerAddress
maxReqSizeShuffleToMem
initialize(): Unit
initialize registers a task cleanup and fetches shuffle blocks from remote and local
BlockManagers.
Internally, initialize registers a TaskCompletionListener (that will clean up right after the
task finishes).
initialize then splits local and remote blocks (splitLocalRemoteBlocks).
initialize registers the new remote fetch requests (with fetchRequests internal registry).
Internally, when sendRequest runs, you should see the following DEBUG message in the
logs:
Note: The input FetchRequest contains the remote BlockManagerId address and the shuffle blocks to fetch (as a sequence of BlockId and their sizes).
sendRequest requests ShuffleClient to fetch the shuffle blocks (from the host, the port, and the executor as defined by the remote BlockManagerId ).
2. For every shuffle block fetch failure adds it as FailureFetchResult to results internal
queue.
onBlockFetchSuccess Callback
Internally, onBlockFetchSuccess checks if the iterator is not zombie and does the further
processing if it is not.
onBlockFetchSuccess marks the input blockId as received (i.e. removes it from the outstanding fetches) and adds the result (as a SuccessFetchResult ) to the results internal queue.
onBlockFetchFailure Callback
When onBlockFetchFailure is called, you should see the following ERROR message in the
logs:
throwFetchFailedException(
blockId: BlockId,
address: BlockManagerId,
e: Throwable): Nothing
throwFetchFailedException throws a FetchFailedException when the input blockId is a ShuffleBlockId (and a SparkException otherwise).
cleanup(): Unit
cleanup iterates over results internal queue and for every SuccessFetchResult , increments
remote bytes read and blocks fetched shuffle task metrics, and eventually releases the
managed buffer.
releaseCurrentResultBuffer(): Unit
fetchLocalBlocks(): Unit
fetchLocalBlocks …FIXME
hasNext Method
hasNext: Boolean
Note: hasNext is part of Scala’s Iterator Contract to test whether this iterator can provide another element.
splitLocalRemoteBlocks(): ArrayBuffer[FetchRequest]
splitLocalRemoteBlocks …FIXME
next Method
Note: next is part of Scala’s Iterator Contract to produce the next element of this iterator.
next …FIXME
ShuffleExternalSorter — Cache-Efficient Sorter
ShuffleExternalSorter is a specialized cache-efficient sorter that sorts arrays of
compressed record pointers and partition ids. By using only 8 bytes of space per record in
the sorting array, ShuffleExternalSorter can fit more of the array into cache.
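A hedged sketch of what such an 8-byte record can look like. The bit layout below (24-bit partition id, 13-bit memory page number, 27-bit offset within the page) is an assumption based on Spark's PackedRecordPointer and is only meant to illustrate why a record fits in a single Long:

// Illustrative packing of (partitionId, pageNumber, offsetInPage) into one 8-byte Long.
object PackedPointer {
  def pack(partitionId: Int, pageNumber: Int, offsetInPage: Long): Long =
    (partitionId.toLong << 40) |            // top 24 bits
    (pageNumber.toLong  << 27) |            // next 13 bits
    (offsetInPage & ((1L << 27) - 1))       // low 27 bits

  def partitionId(packed: Long): Int = (packed >>> 40).toInt
}

// Records can then be ordered by comparing their extracted partition ids.
val a = PackedPointer.pack(7, 1, 128L)
val b = PackedPointer.pack(2, 0, 64L)
val ordered = Seq(a, b).sortBy(PackedPointer.partitionId)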
ShuffleExternalSorter is a MemoryConsumer.
Refer to Logging.
getMemoryUsage Method
Caution FIXME
closeAndGetSpills Method
Caution FIXME
insertRecord Method
Caution FIXME
freeMemory Method
Caution FIXME
getPeakMemoryUsedBytes Method
Caution FIXME
writeSortedFile Method
Caution FIXME
cleanupResources Method
Caution FIXME
1. memoryManager — TaskMemoryManager
2. blockManager — BlockManager
3. taskContext — TaskContext
4. initialSize
5. numPartitions
6. SparkConf
7. writeMetrics — ShuffleWriteMetrics
Spark properties.
Note: spill is part of the MemoryConsumer contract to sort and spill the current records due to memory pressure.
spill frees execution memory, updates TaskMetrics , and in the end returns the spill size.
INFO Thread [id] spilling sort data of [memoryUsage] to disk ([size] times so far)
spill resets the internal ShuffleInMemorySorter (that in turn frees up the underlying in-memory sorting array).
ExternalSorter
ExternalSorter is a Spillable of WritablePartitionedPairCollection of K -key / C -value
pairs.
When created ExternalSorter expects three different types of data defined, i.e. K , V ,
C , for keys, values, and combiner (partial) values, respectively.
Refer to Logging.
stop Method
Caution FIXME
writePartitionedFile Method
Caution FIXME
1. TaskContext
2. Optional Aggregator
3. Optional Partitioner
4. Optional Ordering for keys
5. Optional Serializer
Caution FIXME
spill Method
Caution FIXME
Caution FIXME
insertAll Method
Caution FIXME
Settings
spark.shuffle.spill.batchSize (default: 10000 ) — Size of object batches when reading/writing from serializers.
Serialization
Serialization systems:
Java serialization
Kryo
Avro
Thrift
Protobuf
Serializer — Task SerDe
Caution FIXME
deserialize Method
Caution FIXME
supportsRelocationOfSerializedObjects Property
supportsRelocationOfSerializedObjects should be enabled (i.e. true) only when reordering
the bytes of serialized objects in serialization stream output is equivalent to having re-
ordered those elements prior to serializing them.
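This property matters when the shuffle machinery decides whether the serialized (Unsafe) shuffle path can be used. A rough sketch of that kind of eligibility check, modelled on SortShuffleManager's logic (the parameter names and exact set of conditions are an assumption):

// Hypothetical eligibility check for the serialized shuffle path.
def canUseSerializedShuffle(
    serializerSupportsRelocation: Boolean,
    hasAggregator: Boolean,
    numPartitions: Int): Boolean = {
  val maxPartitions = (1 << 24) - 1           // 16777215, see UnsafeShuffleWriter
  serializerSupportsRelocation &&             // serialized bytes may be reordered safely
    !hasAggregator &&                         // no aggregation on the shuffle
    numPartitions <= maxPartitions
}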
SerializerInstance
Caution FIXME
serializeStream Method
Caution FIXME
SerializationStream
Caution FIXME
writeKey Method
Caution FIXME
writeValue Method
Caution FIXME
DeserializationStream
Caution FIXME
ExternalClusterManager — Pluggable Cluster Managers
ExternalClusterManager is a contract for pluggable cluster managers. It returns a task
scheduler and a backend scheduler that will be used by SparkContext to schedule tasks.
ExternalClusterManager Contract
canCreate Method
canCreate checks whether this cluster manager can create scheduler components for the input master URL.
Note: canCreate is used when SparkContext loads the external cluster manager for a master URL.
createTaskScheduler Method
createTaskScheduler creates a TaskScheduler for the input SparkContext and masterURL .
createSchedulerBackend Method
995
ExternalClusterManager — Pluggable Cluster Managers
createSchedulerBackend(sc: SparkContext,
masterURL: String,
scheduler: TaskScheduler): SchedulerBackend
initialize is called after the task scheduler and the backend scheduler were created and
initialized separately.
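Putting the pieces above together, the contract looks roughly as follows (a sketch for orientation; the signatures are reconstructed from the descriptions in this chapter and may differ slightly between Spark versions):

package org.apache.spark.scheduler

import org.apache.spark.SparkContext

trait ExternalClusterManager {
  // Can this cluster manager handle the given master URL?
  def canCreate(masterURL: String): Boolean

  // Create the task scheduler for the master URL.
  def createTaskScheduler(sc: SparkContext, masterURL: String): TaskScheduler

  // Create the scheduler backend that talks to the cluster manager.
  def createSchedulerBackend(sc: SparkContext, masterURL: String, scheduler: TaskScheduler): SchedulerBackend

  // Called after both components above have been created.
  def initialize(scheduler: TaskScheduler, backend: SchedulerBackend): Unit
}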
BroadcastManager
Broadcast Manager ( BroadcastManager ) is a Spark service to manage broadcast variables
in Spark. It is created for a Spark application when SparkContext is initialized and is a simple
wrapper around BroadcastFactory.
BroadcastManager tracks the number of broadcast variables in a Spark application (using the internal nextBroadcastId counter).
The idea is to transfer values used in transformations from a driver to executors in a most
effective way so they are copied once and used many times by tasks (rather than being
copied every time a task is launched).
stop Method
Caution FIXME
Caution FIXME
initialize(): Unit
newBroadcast Method
newBroadcast simply requests the current BroadcastFactory for a new broadcast variable.
Settings
Table 1. Settings
spark.broadcast.blockSize (default: 4m ) — The size of a block (in kB when unit not specified). Used when TorrentBroadcast stores broadcast blocks to BlockManager .
BroadcastFactory — Pluggable Broadcast Variable Factories
BroadcastFactory is the contract for factories of broadcast variables in Apache Spark.
package org.apache.spark.broadcast

trait BroadcastFactory {
  def initialize(isDriver: Boolean, conf: SparkConf, securityMgr: SecurityManager): Unit
  def newBroadcast[T: ClassTag](value: T, isLocal: Boolean, id: Long): Broadcast[T]
  def unbroadcast(id: Long, removeFromDriver: Boolean, blocking: Boolean): Unit
  def stop(): Unit
}
TorrentBroadcastFactory
TorrentBroadcastFactory is a BroadcastFactory of TorrentBroadcasts, i.e. BitTorrent-like
broadcast variables.
unbroadcast removes all the persisted state associated with a TorrentBroadcast of a given
ID.
newBroadcast creates a TorrentBroadcast (for the input value_ and id and ignoring the
isLocal parameter).
TorrentBroadcast — Broadcast With BitTorrent-Like Protocol For Block Distribution
TorrentBroadcast is a Broadcast that uses a BitTorrent-like protocol for block distribution (that only happens when tasks access broadcast variables on executors).
TorrentBroadcast is created when TorrentBroadcastFactory is requested for a new broadcast variable (i.e. newBroadcast ).
// On the driver
val sc: SparkContext = ???
val anyScalaValue = ???
val b = sc.broadcast(anyScalaValue) // <-- TorrentBroadcast is created
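Continuing the snippet above (hypothetically — anyScalaValue is a placeholder there), tasks then reference b.value rather than the original value, so the blocks are fetched once per executor instead of being shipped with every task:

// On executors, inside tasks: the first access to b.value triggers fetching
// the broadcast blocks described below.
val rdd = sc.parallelize(1 to 100)
val result = rdd.map(i => (i, b.value)).count()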
A broadcast variable is stored on the driver’s BlockManager as a single value and separately
as broadcast blocks (after it was divided into broadcast blocks, i.e. blockified). The
broadcast block size is the value of spark.broadcast.blockSize Spark property.
TorrentBroadcast uses the _value internal registry for the value that is computed on demand (by reading broadcast blocks):
_value: T
Note: _value is a @transient private lazy val which means that it is not serialized to be sent remotely and is instantiated when first requested.
Refer to Logging.
unBlockifyObject Method
Caution FIXME
releaseLock Method
Caution FIXME
def getValue(): T
Note: getValue is part of the Broadcast Variable Contract and is the only way to access the value of a broadcast variable.
Internally, getValue reads the internal _value that, once accessed, reads broadcast blocks from the local or remote BlockManagers.
from the local or remote BlockManagers.
Note: The internal _value is transient and lazy, i.e. it is not preserved when serialized and (re)created only when requested, respectively. That "trick" allows for serializing broadcast values on the driver before they are transferred to executors over the wire.
readBroadcastBlock(): T
If the broadcast was available locally, readBroadcastBlock releases a lock for the broadcast
and returns the value.
If however the broadcast was not found locally, you should see the following INFO message
in the logs:
readBroadcastBlock stores the broadcast variable with MEMORY_AND_DISK storage level to the local BlockManager .
setConf uses the input conf SparkConf to set the compression codec and the block size.
setConf also reads the spark.broadcast.blockSize Spark property and sets the block size (as the internal blockSize property).
writeBlocks stores the broadcast’s value and blocks in the driver’s BlockManager. It
returns the number of the broadcast blocks the broadcast was divided into.
Internally, writeBlocks stores the block for value broadcast to the local BlockManager
(using a new BroadcastBlockId, value , MEMORY_AND_DISK storage level and without telling
the driver).
If storing the broadcast block fails, you should see the following SparkException in the logs:
writeBlocks divides value into blocks (of spark.broadcast.blockSize size) using the Serializer (i.e. blockifyObject ).
If storing any of the broadcast pieces fails, you should see the following SparkException in
the logs:
blockifyObject[T](
obj: T,
blockSize: Int,
serializer: Serializer,
compressionCodec: Option[CompressionCodec]): Array[ByteBuffer]
blockifyObject divides (aka blockifies) the input obj broadcast variable into blocks (of blockSize size each), serializing them (and optionally compressing them with the given compression codec).
doUnpersist Method
doUnpersist removes all the persisted state associated with a broadcast variable on
executors.
doDestroy Method
doDestroy removes all the persisted state associated with a broadcast variable on all the nodes, i.e. the driver and executors.
unpersist(
id: Long,
removeFromDriver: Boolean,
blocking: Boolean): Unit
unpersist removes all broadcast blocks from executors and possibly the driver (only when the removeFromDriver flag is enabled).
When executed, you should see the following DEBUG message in the logs:
readBlocks(): Array[BlockData]
readBlocks …FIXME
CompressionCodec
With spark.broadcast.compress enabled (which is the default), TorrentBroadcast uses
compression for broadcast blocks.
lz4 — org.apache.spark.io.LZ4CompressionCodec (the default implementation)
lzf — org.apache.spark.io.LZFCompressionCodec
You can control the default compression codec in a Spark application using
spark.io.compression.codec Spark property.
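For example, a minimal way to switch the codec when building a SparkConf (the property and the lzf value are real; whether lzf is preferable to the default depends on the workload):

import org.apache.spark.SparkConf

// Use the LZF codec instead of the default (lz4) for block compression.
val conf = new SparkConf()
  .setAppName("codec-demo")
  .set("spark.io.compression.codec", "lzf")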
createCodec uses the internal shortCompressionCodecNames lookup table to resolve the input codec name to a fully-qualified class name.
createCodec finds the constructor of the compression codec’s implementation (that accepts a SparkConf ) and creates an instance.
getCodecName Method
Settings
Table 2. Settings
spark.io.compression.codec (default: lz4 ) — The compression codec to use. Used when getCodecName is called to find the current compression codec.
ContextCleaner — Spark Application Garbage Collector
ContextCleaner is a Spark service responsible for application-wide cleanup of shuffles, RDDs, broadcasts, accumulators and checkpointed RDDs that is aimed at reducing the memory requirements of long-running data-heavy Spark applications.
It uses a daemon Spark Context Cleaner thread that cleans RDD, shuffle, and broadcast
states (using keepCleaning method).
Refer to Logging.
doCleanupRDD Method
Caution FIXME
keepCleaning(): Unit
Caution FIXME
registerRDDCheckpointDataForCleanup Method
Caution FIXME
registerBroadcastForCleanup Method
Caution FIXME
registerRDDForCleanup Method
Caution FIXME
registerAccumulatorForCleanup Method
Caution FIXME
stop Method
Caution FIXME
start(): Unit
start starts the cleaning thread and schedules a periodic action to request the JVM garbage collector (every spark.cleaner.periodicGC.interval).
doCleanupShuffle performs a shuffle cleanup, i.e. removes the shuffle from the current MapOutputTracker and BlockManagerMaster .
Internally, when executed, you should see the following DEBUG message in the logs:
In the end, you should see the following DEBUG message in the logs:
In case of any exception, you should see the following ERROR message in the logs and the
exception itself.
Settings
Table 2. Spark Properties
spark.cleaner.periodicGC.interval (default: 30min ) — Controls how often to trigger a garbage collection.
spark.cleaner.referenceTracking (default: true ) — Controls whether a ContextCleaner is created when a SparkContext is initialized.
spark.cleaner.referenceTracking.cleanCheckpoints (default: false ) — Controls whether to clean checkpoint files when the reference is out of scope.
CleanerListener
Caution FIXME
Caution FIXME
Dynamic Allocation (of Executors)
Unlike the "traditional" static allocation where a Spark application reserves CPU and
memory resources upfront (irrespective of how much it may eventually use), in dynamic
allocation you get as much as needed and no more. It scales the number of executors up
and down based on workload, i.e. idle executors are removed, and when there are pending
tasks waiting for executors to be launched on, dynamic allocation requests them.
Dynamic allocation reports the current state using ExecutorAllocationManager metric source.
Dynamic Allocation comes with the policy of scaling executors up and down as follows:
1. Scale Up Policy requests new executors when there are pending tasks and increases
the number of executors exponentially since executors start slow and Spark application
may need slightly more.
2. Scale Down Policy removes executors that have been idle for
spark.dynamicAllocation.executorIdleTimeout seconds.
Dynamic allocation is available for all the currently-supported cluster managers, i.e. Spark
Standalone, Hadoop YARN and Apache Mesos.
Tip Review the excellent slide deck Dynamic Allocation in Spark from Databricks.
Refer to Logging.
If not, you should see the following WARN message in the logs:
than spark.dynamicAllocation.minExecutors.
If not, you should see the following WARN message in the logs:
maximum of:
spark.dynamicAllocation.minExecutors
spark.dynamicAllocation.initialExecutors
spark.executor.instances
Settings
spark.dynamicAllocation.enabled (default: false )
spark.dynamicAllocation.initialExecutors (default: spark.dynamicAllocation.minExecutors)
spark.dynamicAllocation.minExecutors (default: 0 )
spark.dynamicAllocation.maxExecutors (default: Integer.MAX_VALUE )
spark.dynamicAllocation.schedulerBacklogTimeout (default: 1s )
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout (default: spark.dynamicAllocation.schedulerBacklogTimeout)
spark.dynamicAllocation.executorIdleTimeout (default: 60s )
spark.dynamicAllocation.cachedExecutorIdleTimeout (default: Integer.MAX_VALUE )
spark.dynamicAllocation.testing (default: false )
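A minimal example of turning dynamic allocation on through SparkConf (the property names above are the real ones; spark.shuffle.service.enabled is included because dynamic allocation normally requires the external shuffle service):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("dynamic-allocation-demo")
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")   // keep shuffle files available when executors go away
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.maxExecutors", "20")
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")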
Future
SPARK-4922
SPARK-4751
SPARK-7955
ExecutorAllocationManager — Allocation Manager for Spark Core
ExecutorAllocationManager is responsible for dynamically allocating and removing executors based on the workload.
It intercepts Spark events using the internal ExecutorAllocationListener that keeps track of
the workload (changing the internal registries that the allocation manager uses for executors
management).
initialNumExecutors — FIXME
numExecutorsTarget — FIXME
numExecutorsToAdd — FIXME
initializing — Flag whether…FIXME. Starts enabled (i.e. true ).
Refer to Logging.
addExecutors Method
Caution FIXME
removeExecutor Method
Caution FIXME
maxNumExecutorsNeeded Method
Caution FIXME
start(): Unit
start registers ExecutorAllocationListener (with LiveListenerBus ) to monitor scheduler events and make decisions when to add and remove executors. It then immediately starts the spark-dynamic-executor-allocation allocation executor that is responsible for the scheduling every 100 milliseconds.
Note: The 100 milliseconds for the period between successive scheduling is fixed, i.e. not configurable.
schedule(): Unit
It then goes over removeTimes to remove expired executors, i.e. executors for which the expiration time has elapsed.
updateAndSyncNumExecutorsTarget Method
updateAndSyncNumExecutorsTarget …FIXME
reset(): Unit
stop(): Unit
ExecutorAllocationClient
LiveListenerBus
SparkConf
validateSettings(): Unit
validateSettings makes sure that the settings for dynamic allocation are correct.
validateSettings validates the following and throws a SparkException if not set correctly.
It is started…
It is stopped…
ExecutorAllocationClient
ExecutorAllocationClient is a contract for clients to communicate with a cluster manager to request or kill executors.
ExecutorAllocationClient Contract
trait ExecutorAllocationClient {
def getExecutorIds(): Seq[String]
def requestTotalExecutors(numExecutors: Int, localityAwareTasks: Int, hostToLocalTas
kCount: Map[String, Int]): Boolean
def requestExecutors(numAdditionalExecutors: Int): Boolean
def killExecutor(executorId: String): Boolean
def killExecutors(executorIds: Seq[String]): Seq[String]
def killExecutorsOnHost(host: String): Boolean
}
requestTotalExecutors — Used when SparkContext requests executors (for coarse-grained scheduler backends only).
killExecutorsOnHost — Used exclusively when BlacklistTracker kills blacklisted executors.
ExecutorAllocationListener
Caution FIXME
ExecutorAllocationManagerSource — Metric Source for Dynamic Allocation
ExecutorAllocationManagerSource is a metric source for dynamic allocation with the name ExecutorAllocationManager and gauges that track, among others, the number of elements in executorsPendingToRemove and executorIds.
Note: Spark uses the Metrics Java library to expose internal state of its services to measure.
HTTP File Server
Settings
spark.fileserver.port (default: 0 ) - the port of a file server
Data Locality
With HDFS the Spark driver contacts NameNode about the DataNodes (ideally local)
containing the various blocks of a file or directory as well as their locations (represented as
InputSplits ), and then schedules the work to the SparkWorkers.
Spark tries to execute tasks as close to the data as possible to minimize data transfer (over
the wire).
PROCESS_LOCAL
NODE_LOCAL
NO_PREF
RACK_LOCAL
ANY
Cache Manager
Cache Manager in Spark is responsible for passing RDDs partition contents to Block
Manager and making sure a node doesn’t load two copies of an RDD at once.
Caution FIXME
OutputCommitCoordinator
OutputCommitCoordinator service is the authority that coordinates result commits by means of a "first committer wins" policy.
Result commits are the outputs of running tasks (and a running task is described by a task
attempt for a partition in a stage).
From the scaladoc (it’s a private[spark] class so no way to find it outside the code):
Authority that decides whether tasks can commit output to HDFS. Uses a "first
committer wins" policy. OutputCommitCoordinator is instantiated in both the drivers and
executors. On executors, it is configured with a reference to the driver’s
OutputCommitCoordinatorEndpoint, so requests to commit output will be forwarded to
the driver’s OutputCommitCoordinator.
This class was introduced in SPARK-4879; see that JIRA issue (and the associated pull
requests) for an extensive design discussion.
Authorized committers are task attempts (per partition and stage) that can…FIXME
Refer to Logging.
stop Method
Caution FIXME
stageStart Method
Caution FIXME
taskCompleted Method
taskCompleted(
stage: StageId,
partition: PartitionId,
attemptNumber: TaskAttemptNumber,
reason: TaskEndReason): Unit
taskCompleted marks the partition (in the stage ) completed (and hence a result
committed), but only when the attemptNumber is amongst authorized committers per stage
(for the partition ).
For the reason being Success taskCompleted does nothing and exits.
For the reason being TaskCommitDenied , you should see the following INFO message in
the logs and taskCompleted exits.
For task completion reasons other than Success or TaskCommitDenied and attemptNumber
amongst authorized committers, taskCompleted marks partition unlocked.
When the lock for the partition is cleared, you should see the following DEBUG message in the logs:
RpcEnv — RPC Environment
Caution FIXME How to know the available endpoints in the environment? See the exercise Developing RPC Environment.
RpcEnv manages the RPC endpoints it hosts, i.e. registers (sets them up) and stops them.
A RPC Environment is defined by the name, host, and port. It can also be controlled by a
security manager.
RpcEndpointRefs can be looked up by name or uri (because different RpcEnvs may have
different naming schemes).
Caution FIXME
Caution FIXME
shutdown Method
Caution FIXME
Caution FIXME
awaitTermination Method
Caution FIXME
ThreadSafeRpcEndpoint
RpcAddress
RpcAddress is the logical address for an RPC Environment, with hostname and port.
RpcEndpointAddress
RpcEndpointAddress is the logical address for an endpoint registered to an RPC
Environment, with RpcAddress and name.
Caution FIXME
It is a prioritized list of lookup timeout properties (the higher on the list, the more important):
spark.rpc.lookupTimeout
spark.network.timeout
Their value can be a number alone (seconds) or any number with time suffix, e.g. 50s ,
100ms , or 250us . See Settings.
You can control the time to wait for a response using the following settings (in that order):
spark.rpc.askTimeout
spark.network.timeout
Their value can be a number alone (seconds) or any number with time suffix, e.g. 50s ,
100ms , or 250us . See Settings.
Exceptions
When RpcEnv catches uncaught exceptions, it uses RpcCallContext.sendFailure to send
exceptions back to the sender, or logging them if no such sender or
NotSerializableException .
If any error is thrown from one of RpcEndpoint methods except onError , onError will be
invoked with the cause. If onError throws an error, RpcEnv will ignore it.
RpcEnvConfig
RpcEnvConfig is a placeholder for an instance of SparkConf, the name of the RPC
create(
name: String,
host: String,
port: Int,
conf: SparkConf,
securityManager: SecurityManager,
clientMode: Boolean = false): RpcEnv (1)
create(
name: String,
bindAddress: String,
advertiseAddress: String,
port: Int,
conf: SparkConf,
securityManager: SecurityManager,
clientMode: Boolean): RpcEnv
1. The 6-argument create (with clientMode disabled) simply passes the input
arguments on to the second create making bindAddress and advertiseAddress the
same.
create creates a RpcEnvConfig (with the input arguments) and creates a NettyRpcEnv .
Settings
spark.rpc.askTimeout (default: 120s ) — Timeout for RPC ask calls. Refer to Ask Operation Timeout.
RpcEndpoint
RpcEndpoint is a contract to define an RPC endpoint that can receive messages using callbacks, i.e. receive and receiveAndReply .
package org.apache.spark.rpc
trait RpcEndpoint {
def onConnected(remoteAddress: RpcAddress): Unit
def onDisconnected(remoteAddress: RpcAddress): Unit
def onError(cause: Throwable): Unit
def onNetworkError(cause: Throwable, remoteAddress: RpcAddress): Unit
def onStart(): Unit
def onStop(): Unit
def receive: PartialFunction[Any, Unit]
def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit]
val rpcEnv: RpcEnv
}
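As a rough, hypothetical illustration of the two receive callbacks (this is not a Spark-provided endpoint, and org.apache.spark.rpc is an internal package, so treat it purely as a conceptual sketch):

import org.apache.spark.rpc.{RpcCallContext, RpcEndpoint, RpcEnv}

// Hypothetical messages and endpoint, purely illustrative.
case object Ping
case class Pong(count: Int)

class PingEndpoint(override val rpcEnv: RpcEnv) extends RpcEndpoint {
  private var pings = 0

  // One-way messages sent with RpcEndpointRef.send end up here.
  override def receive: PartialFunction[Any, Unit] = {
    case Ping => pings += 1
  }

  // Messages sent with RpcEndpointRef.ask expect a reply through the RpcCallContext.
  override def receiveAndReply(context: RpcCallContext): PartialFunction[Any, Unit] = {
    case Ping =>
      pings += 1
      context.reply(Pong(pings))
  }
}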
Caution FIXME
stop Method
Caution FIXME
RpcEndpointRef
RpcEndpointRef is a serializable entity and so you can send it over a network or save it for
later use (it can however be deserialized using the owning RpcEnv only).
You can send asynchronous one-way messages to the corresponding RpcEndpoint using
send method.
You can send a semi-synchronous message, i.e. "subscribe" to be notified when a response
arrives, using ask method. You can also block the current calling thread for a response
using askWithRetry method.
send Method
Caution FIXME
askWithRetry Method
Caution FIXME
RpcEnvFactory
RpcEnvFactory is the contract to create a RPC Environment.
RpcEnvFactory Contract
trait RpcEnvFactory {
def create(config: RpcEnvConfig): RpcEnv
}
Netty-based RpcEnv
Tip: Read up on RpcEnv — RPC Environment for the concept of RPC Environment in Spark.
When NettyRpcEnv starts, the following INFO message is printed out in the logs:
Caution FIXME
Client Mode
Refer to Client Mode = is this an executor or the driver? for introduction about client mode.
This is only for Netty-based RpcEnv.
When created, a Netty-based RpcEnv starts the RPC server and register necessary
endpoints for non-client mode, i.e. when client mode is false .
It means that the required services for remote communication with NettyRpcEnv are only
started on the driver (not executors).
Thread Pools
shuffle-server-ID
EventLoopGroup uses a daemon thread pool called shuffle-server-ID , where ID is a unique integer.
dispatcher-event-loop-ID
NettyRpcEnv’s Dispatcher uses the daemon fixed thread pool with
spark.rpc.netty.dispatcher.numThreads threads.
netty-rpc-env-timeout
NettyRpcEnv uses the daemon single-thread scheduled thread pool netty-rpc-env-timeout .
netty-rpc-connection-ID
NettyRpcEnv uses the daemon cached thread pool with up to spark.rpc.connect.threads
threads.
Settings
The Netty-based implementation uses the following properties:
spark.rpc.io.mode (default: NIO ) - NIO or EPOLL for low-level IO. NIO is always
io.netty.channel.epoll.EpollEventLoopGroup .
JVM)
controls the maximum number of binding attempts/retries to a port before giving up.
Endpoints
endpoint-verifier ( RpcEndpointVerifier ) - a RpcEndpoint for remote RpcEnvs to
query whether an RpcEndpoint exists or not. It uses Dispatcher that keeps track of
registered endpoints and responds true / false to CheckExistence message.
endpoint-verifier is used to check out whether a given endpoint exists or not before the endpoint’s reference is given back to clients.
One use case is when an AppClient connects to standalone Masters before it registers the
application it acts for.
Message Dispatcher
TransportConf — Transport Configuration
TransportConf is a class for the transport-related network configuration for modules, e.g.
ExternalShuffleService or YarnShuffleService.
Internally, fromSparkConf calculates the default number of threads for both the Netty client
and server thread pools.
defaultNumThreads calculates the default number of threads for both the Netty client and server thread pools.
Note 8 is the maximum number of threads for Netty and is not configurable.
Note defaultNumThreads uses Java’s Runtime for the number of processors in JVM.
spark.module.prefix Settings
The settings can be in the form of spark.[module].[prefix] with the following prefixes:
io.connectionTimeout — connection timeout in milliseconds.
io.serverThreads and io.clientThreads — the number of threads in the server and client thread pool, respectively.
sasl.timeout (default: 30s ) — the timeout (in milliseconds) for a single round trip of SASL authentication.
io.retryWait (default: 5s ) — the time (in milliseconds) that Spark will wait in order to perform a retry after an IOException .
io.lazyFD — controls whether file descriptors are initialized lazily ( true ) or not ( false ). If true , file descriptors are created only when data is going to be transferred. This can reduce the number of open files.
spark.storage.memoryMapThreshold
spark.storage.memoryMapThreshold is the size (in bytes) of a block above which Spark should start using memory mapping rather than reading it in through normal IO operations.
This prevents Spark from memory mapping very small blocks. In general, memory mapping
has high overhead for blocks close to or below the page size of the OS.
spark.network.sasl.maxEncryptedBlockSize
spark.network.sasl.maxEncryptedBlockSize (default: 64k ) is the maximum number of bytes to be encrypted at a time when SASL encryption is used.
spark.network.sasl.serverAlwaysEncrypt
spark.network.sasl.serverAlwaysEncrypt (default: false ) controls whether the server should always require SASL-encrypted communication.
Utils Helper Object
getLocalDir Method
getLocalDir …FIXME
getLocalDir is used when:
spark-shell is launched
Spark on YARN’s Client is requested to prepareLocalResources and create the __spark_conf__.zip archive with configuration files and Spark configuration
PySpark’s PythonBroadcast is requested to readObject
PySpark’s EvalPythonExec is requested to doExecute
fetchFile Method
fetchFile(
url: String,
targetDir: File,
conf: SparkConf,
securityMgr: SecurityManager,
hadoopConf: Configuration,
timestamp: Long,
useCache: Boolean): File
fetchFile …FIXME
getOrCreateLocalRootDirsImpl …FIXME
getOrCreateLocalRootDirs …FIXME
Securing Web UI
Tip Read the official document Web UI.
To secure Web UI you implement a security filter and use spark.ui.filters setting to refer
to the class.
neolitec/BasicAuthenticationFilter.java
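A minimal, hypothetical servlet-filter sketch of the idea (the class name and the credential check are placeholders; the BasicAuthenticationFilter linked above is a fuller example):

import javax.servlet._
import javax.servlet.http.{HttpServletRequest, HttpServletResponse}

// Illustrative only: reject requests without an Authorization header.
class SimpleAuthFilter extends Filter {
  override def init(config: FilterConfig): Unit = ()
  override def destroy(): Unit = ()

  override def doFilter(req: ServletRequest, res: ServletResponse, chain: FilterChain): Unit = {
    val request = req.asInstanceOf[HttpServletRequest]
    val response = res.asInstanceOf[HttpServletResponse]
    if (request.getHeader("Authorization") == null) {
      response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Authentication required")
    } else {
      chain.doFilter(req, res)   // a real filter would validate the credentials here
    }
  }
}

// Register the filter's fully-qualified class name with spark.ui.filters.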
Deployment Environments — Run Modes
Spark supports the following deployment environments (aka run modes):
local
clustered
Spark Standalone
A Spark application is composed of the driver and executors that can run locally (on a single
JVM) or using cluster resources (like CPU, RAM and disk that are managed by a cluster
manager).
Note: You can specify where to run the driver using the deploy mode (using the --deploy-mode option of spark-submit or the spark.submit.deployMode Spark property).
Master URLs
Spark supports the following master URLs (see private object SparkMasterRegex):
Spark local (pseudo-cluster)
This mode of operation is also called Spark in-process or (less commonly) a local version
of Spark.
scala> sc.isLocal
res0: Boolean = true
Spark shell defaults to local mode with local[*] as the master URL.
scala> sc.master
res0: String = local[*]
Tasks are not re-executed on failure in local mode (unless local-with-retries master URL is
used).
The task scheduler in local mode works with LocalSchedulerBackend task scheduler
backend.
Master URL
You can run Spark in local mode using local , local[n] or the most general local[*] for the master URL.
local[*] uses as many threads as the number of processors available to the Java virtual machine.
Caution FIXME What happens when there are fewer cores than n in the master URL? It is a question from twitter.
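For example, a minimal, self-contained application run in local mode (the master URL is the only part that matters here):

import org.apache.spark.{SparkConf, SparkContext}

object LocalModeDemo extends App {
  // local[*] runs the driver and executor threads inside this single JVM.
  val conf = new SparkConf().setMaster("local[*]").setAppName("local-demo")
  val sc = new SparkContext(conf)
  println(sc.parallelize(1 to 1000).sum())   // 500500.0
  sc.stop()
}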
If there is one or more tasks that match the offer, they are launched (using
executor.launchTask method).
LocalSchedulerBackend
LocalSchedulerBackend is a scheduler backend and a ExecutorBackend for Spark local run
mode.
LocalSchedulerBackend acts as a "cluster manager" for local mode to offer resources on the single executor it manages locally.
LocalEndpoint
LocalEndpoint is the communication channel between the task scheduler and LocalSchedulerBackend .
When a LocalEndpoint starts up (as part of Spark local’s initialization) it prints out the
following INFO messages to the logs:
reviveOffers Method
Caution FIXME
Caution FIXME
RPC Messages
LocalEndpoint accepts the following RPC message types:
KillTask (receive-only, non-blocking) that kills the task that is currently running on the
executor.
Spark Clustered
Spark can be run in distributed mode on a cluster. The following (open source) cluster
managers (aka task schedulers aka resource managers) are currently supported:
Hadoop YARN
Apache Mesos
Here is a very brief list of pros and cons of using one cluster manager versus the other
options supported by Spark:
2. Hadoop YARN has a very good support for HDFS with data locality.
3. Apache Mesos makes resource offers that a framework can accept or reject. It is Spark
(as a Mesos framework) to decide what resources to accept. It is a push-based
resource management model.
4. Hadoop YARN responds to a YARN framework’s resource requests. Spark (as a YARN
framework) requests CPU and memory from YARN. It is a pull-based resource
management model.
Spark driver requests resources from a cluster manager. Currently only CPU and memory
are requested resources. It is a cluster manager’s responsibility to spawn Spark executors in
the cluster (on its workers).
FIXME
Spark execution in cluster - Diagram of the communication between
driver, cluster manager, workers with executors and tasks. See Cluster
Mode Overview.
Caution Show Spark’s driver with the main code in Scala in the box
Nodes with executors with tasks
Hosts drivers
Manages a cluster
The workers are in charge of communicating to the cluster manager the availability of their resources.
Communication with a driver is through a RPC interface (at the moment Akka), except
Mesos in fine-grained mode.
Executors remain alive after jobs are finished for future ones. This allows for better data
utilization as intermediate data is cached in memory.
fine-grained partitioning
low-latency scheduling
Reusing also means that resources can be held onto for a long time.
Spark reuses long-running executors for speed (contrary to Hadoop MapReduce using
short-lived containers for each task).
The Spark driver is launched to invoke the main method of the Spark application.
The driver asks the cluster manager for resources to run the application, i.e. to launch
executors that run tasks.
The driver runs the Spark application and sends tasks to the executors.
Right after SparkContext.stop() is executed from the driver or the main method has
exited all the executors are terminated and the cluster resources are released by the
cluster manager.
"There’s not a good reason to run more than one worker per machine." by Sean
Note Owen in What is the relationship between workers, worker instances, and
executors?
Caution: One executor per node may not always be ideal, esp. when your nodes have lots of RAM. On the other hand, using fewer executors has benefits like more efficient broadcasts.
Others
A Spark application can be split into the part written in Scala, Java, or Python and the cluster itself in which the application is going to run.
A Spark application consists of a single driver process and a set of executor processes
scattered across nodes on the cluster.
Both the driver and the executors usually run as long as the application. The concept of
dynamic resource allocation has changed it.
A node is a machine, and there’s not a good reason to run more than one worker per
machine. So two worker nodes typically means two machines, each a Spark worker.
Workers hold many executors for many applications. One application has executors on
many workers.
Spark on YARN
You can submit Spark applications to a Hadoop YARN cluster using yarn master URL.
Note: Since Spark 2.0.0, yarn master URL is the only proper master URL and you can use --deploy-mode to choose between client (default) or cluster modes.
Figure 1. Submitting Spark Application to YARN Cluster (aka Creating SparkContext with
yarn Master URL and client Deploy Mode)
Without specifying the deploy mode, it is assumed client .
Tip Deploy modes are all about where the Spark driver runs.
In client mode the Spark driver (and SparkContext) runs on a client node outside a YARN
cluster whereas in cluster mode it runs inside a YARN cluster, i.e. inside a YARN container
alongside ApplicationMaster (that acts as the Spark application in YARN).
Note: In order to deploy applications to YARN clusters, you need to use Spark with YARN support.
Spark on YARN supports multiple application attempts and supports data locality for data in
HDFS. You can also take advantage of Hadoop’s security and run Spark in a secure Hadoop
environment using Kerberos authentication (aka Kerberized clusters).
There are a few settings that are specific to YARN (see Settings). Among them, you may particularly like the support for YARN resource queues (to divide cluster resources and allocate shares to different teams and users based on advanced policies).
Tip: You can start spark-submit with the --verbose command-line option to have some settings displayed, including YARN-specific ones. See spark-submit and YARN options.
The memory in the YARN resource requests is --executor-memory + what’s set for
spark.yarn.executor.memoryOverhead, which defaults to 10% of --executor-memory .
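A quick worked example of that formula (a sketch; the actual container size also depends on YARN's minimum allocation increment, which rounds the request up):

// --executor-memory 4g with the default overhead of max(10% of executor memory, 384 MB)
val executorMemoryMb = 4 * 1024                                  // 4096
val overheadMb = math.max((executorMemoryMb * 0.10).toInt, 384)  // 409
val yarnRequestMb = executorMemoryMb + overheadMb                // 4505 MB requested per executor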
If YARN has enough resources it will deploy the executors distributed across the cluster,
then each of them will try to process the data locally ( NODE_LOCAL in Spark Web UI), with as
many splits in parallel as you defined in spark.executor.cores.
See YarnRMClient.getMaxRegAttempts.
Caution FIXME
--archives
--executor-cores
--keytab
--num-executors
--principal
--queue
Memory Requirements
When Client submits a Spark application to a YARN cluster, it makes sure that the
application will not request more than the maximum memory capability of the YARN cluster.
The memory for ApplicationMaster is controlled by custom settings per deploy mode.
For client deploy mode it is a sum of spark.yarn.am.memory (default: 512m ) with an optional
overhead as spark.yarn.am.memoryOverhead.
If the optional overhead is not set, it is computed as 10% of the main memory
(spark.yarn.am.memory for client mode or spark.driver.memory for cluster mode) or 384m
whatever is larger.
Otherwise, you will see the following error in the logs and Spark will exit.
Error: Could not load YARN classes. This copy of Spark may not have been compiled with
YARN support.
Master URL
Since Spark 2.0.0, the only proper master URL is yarn .
Before Spark 2.0.0, you could have used yarn-client or yarn-cluster , but it is now
deprecated. When you use the deprecated master URLs, you should see the following
warning in the logs:
Warning: Master yarn-client is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
Keytab
Caution FIXME
Environment Variables
SPARK_DIST_CLASSPATH
SPARK_DIST_CLASSPATH is a distribution-defined CLASSPATH to add to processes.
Settings
(video) Spark on YARN: The Road Ahead — Marcelo Vanzin (Cloudera) from Spark
Summit 2015
YarnShuffleService — ExternalShuffleService on YARN
YarnShuffleService is an external shuffle service for Spark on YARN. It is a YARN NodeManager auxiliary service (and so runs inside the NodeManager process).
Note: There is the ExternalShuffleService for Spark and despite their names they don’t share code.
After the external shuffle service is configured in YARN you enable it in a Spark application
using spark.shuffle.service.enabled flag.
Tip: Enable INFO logging level for the org.apache.spark.network.yarn.YarnShuffleService logger to see what happens inside:
log4j.logger.org.apache.spark.network.yarn.YarnShuffleService=INFO
YARN saves logs in /usr/local/Cellar/hadoop/2.7.2/libexec/logs directory on Mac OS X with brew, e.g. /usr/local/Cellar/hadoop/2.7.2/libexec/logs/yarn-jacek-nodemanager-japila.local.log .
Advantages
The advantages of using the YARN Shuffle Service:
With dynamic allocation enabled executors can be discarded and a Spark application
could still get at the shuffle data the executors wrote out.
It allows individual executors to go into GC pause (or even crash) and still allow other
Executors to read shuffle data and make progress.
getRecoveryPath
Caution FIXME
serviceStop
void serviceStop()
Caution FIXME What are shuffleServer and blockHandler ? What’s their lifecycle?
When an exception occurs, you should see the following ERROR message in the logs:
stopContainer
When called, stopContainer simply prints out the following INFO message in the logs and
exits.
initializeContainer
when…FIXME
When called, initializeContainer simply prints out the following INFO message in the logs
and exits.
stopApplication
When called, stopApplication obtains YARN’s ApplicationId for the application (using the
input context ).
When an exception occurs, you should see the following ERROR message in the logs:
initializeApplication
when…FIXME
authentication is enabled.
When an exception occurs, you should see the following ERROR message in the logs:
serviceInit
Caution FIXME
When called, serviceInit creates a TransportConf for the shuffle module that is used to
create ExternalShuffleBlockHandler (as blockHandler ).
Installation
cp common/network-yarn/target/scala-2.11/spark-2.0.0-SNAPSHOT-yarn-shuffle.jar \
/usr/local/Cellar/hadoop/2.7.2/libexec/share/hadoop/yarn/lib/
<?xml version="1.0"?>
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>spark_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
<value>org.apache.spark.network.yarn.YarnShuffleService</value>
</property>
<!-- optional -->
<property>
<name>spark.shuffle.service.port</name>
<value>10000</value>
</property>
<property>
<name>spark.authenticate</name>
<value>true</value>
</property>
</configuration>
org.apache.spark.network.yarn.YarnShuffleService .
ExecutorRunnable
ExecutorRunnable starts a YARN container with a CoarseGrainedExecutorBackend standalone application.
ExecutorRunnable is used when YarnAllocator launches Spark executors in allocated YARN containers (and for debugging purposes when ApplicationMaster requests cluster resources for executors).
Refer to Logging.
2. YarnConfiguration
3. sparkConf — SparkConf
4. masterAddress
5. executorId
7. executorMemory
8. executorCores
9. appId
10. SecurityManager
Note Most of the input parameters are exactly as YarnAllocator was created with.
prepareCommand(
masterAddress: String,
slaveId: String,
hostname: String,
executorMemory: Int,
executorCores: Int,
appId: String): List[String]
prepareCommand adds all the Spark properties for executors to the JVM options.
are used.
prepareCommand reads the list of URIs representing the user classpath and adds a --user-class-path entry for each.
In the end, prepareCommand combines the parts together to build the entire command with
the following (in order):
2. JAVA_HOME/bin/java
3. -server
4. JVM options
5. org.apache.spark.executor.CoarseGrainedExecutorBackend
12. 1><LOG_DIR>/stdout
13. 2><LOG_DIR>/stderr
You can see the result of prepareCommand as command in the INFO message in
Note the logs when ApplicationMaster registers itself with YARN ResourceManager
(to print it out once and avoid flooding the logs when starting Spark executors).
spark.executor.extraClassPath property)
SparkConf, i.e. the Spark properties with the prefix spark.executorEnv. , and
YarnSparkHadoopUtil.addPathToEnvironment(env, key, value).
With the input container defined and SPARK_USER environment variable available,
prepareEnvironment registers SPARK_LOG_URL_STDERR and SPARK_LOG_URL_STDOUT
In the end, prepareEnvironment collects all the System environment variables with SPARK
prefix.
run(): Unit
When called, you should see the following DEBUG message in the logs:
run creates a YARN NMClient (to communicate with the YARN NodeManager service), inits it with the YarnConfiguration and starts it. In the end, run startContainer.
startContainer requests the YARN NodeManager to start a YARN resource container: it first creates a ContainerLaunchContext (with the launch environment, the local resources and the command to launch the executor).
In the end, startContainer requests the YARN NodeManager to start the YARN container with the ContainerLaunchContext context.
Note: startContainer uses the nmClient internal reference to send the request with the YARN resource container given when ExecutorRunnable was created.
launchContextDebugInfo(): String
===============================================================================
YARN executor launch context:
env:
[key] -> [value]
...
command:
[commands]
resources:
[key] -> [value]
===============================================================================
Client
Client is a handle to a YARN cluster to submit ApplicationMaster (that represents a Spark application submitted to a YARN cluster).
FIXME
executorMemoryOverhead — spark.yarn.executor.memoryOverhead and falls back to 10% of spark.executor.memory or 384, whatever is larger. NOTE: 10% and 384 are constants and cannot be changed.
Refer to Logging.
isUserClassPathFirst Method
Caution FIXME
getUserClasspath Method
Caution FIXME
ClientArguments
Caution FIXME
Caution FIXME
launcherBackend Property
launcherBackend …FIXME
loginFromKeytab Method
Caution FIXME
Sets the internal fireAndForget flag to the result of isClusterMode and not
spark.yarn.submit.waitAppCompletion.
submitApplication(): ApplicationId
submitApplication submits a Spark application to a YARN cluster (i.e. to the YARN ResourceManager) and returns the application’s ApplicationId.
It then inits the internal yarnClient (with the internal yarnConf ) and starts it. All this happens
using Hadoop API.
INFO Client: Requesting a new application from cluster with [count] NodeManagers
The LauncherBackend instance changes state to SUBMITTED with the application id.
submitApplication verifies whether the cluster has resources for the ApplicationManager
(using verifyClusterResources).
createApplicationSubmissionContext(
newApp: YarnClientApplication,
containerContext: ContainerLaunchContext): ApplicationSubmissionContext
createApplicationSubmissionContext creates a YARN ApplicationSubmissionContext with the following:
the input containerContext — the ContainerLaunchContext (that describes the Container with which the ApplicationMaster for the Spark application is launched)
Rolled Log Aggregation for the Spark application — see the Rolled Log Aggregation Configuration for Spark Application section below
You will see the DEBUG message in the logs when the setting is not set:
The requested YARN’s Resource for the ApplicationMaster for a Spark application is the
sum of amMemory and amMemoryOverhead for the memory and amCores for the virtual CPU
cores.
Priority — 0
Number of containers — 1
Node label expression — spark.yarn.am.nodeLabelExpression configuration setting
ResourceRequest of AM container — spark.yarn.am.nodeLabelExpression configuration setting
It sets the resource request to this new YARN ResourceRequest detailed in the table above.
Include Pattern — spark.yarn.rolledLog.includePattern configuration setting
Exclude Pattern — spark.yarn.rolledLog.excludePattern configuration setting
verifyClusterResources checks that the Spark application (as a set of ApplicationMaster and executors) is not going to request more than the maximum memory capability of the YARN cluster. If it does, verifyClusterResources throws an IllegalArgumentException .
If the required memory for ApplicationMaster is more than the maximum memory capability,
verifyClusterResources throws an IllegalArgumentException with the following message:
When a Spark application is submitted to YARN, it calls the private helper method createContainerLaunchContext that creates a YARN ContainerLaunchContext request to launch ApplicationMaster.
When called, you should see the following INFO message in the logs:
Caution FIXME
-Dspark.yarn.app.container.log.dir= …FIXME
Caution FIXME
SPARK_CONF_FILE .
prepareLocalResources Method
Caution FIXME
prepareLocalResources(
destDir: Path,
pySparkArchives: Seq[String]): HashMap[String, LocalResource]
prepareLocalResources is…FIXME
When called, prepareLocalResources prints out the following INFO message to the logs:
prepareLocalResources then obtains security tokens from credential providers and gets the replication factor (from the spark.yarn.submit.file.replication setting if set, or the default replication of the destination file system).
After all the security delegation tokens are obtained, and only when there are any, you should see the following DEBUG message in the logs:
If a keytab is used to log in and the nearest time of the next renewal is in the future, prepareLocalResources sets the internal spark.yarn.credentials.renewalTime and spark.yarn.credentials.updateTime settings (for renewing and updating the security tokens).
Note: The replication factor is only used for copyFileToRemote later. Perhaps it should not be mentioned here (?)
It creates the input destDir (on a HDFS-compatible file system) with 0700 permission
( rwx------ ), i.e. inaccessible to all but its owner and the superuser so the owner only can
read, write and execute. It uses Hadoop’s Path.getFileSystem to access Hadoop’s
FileSystem that owns destDir (using the constructor’s hadoopConf — Hadoop’s
Configuration).
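A minimal sketch of that directory creation, using only the Hadoop FileSystem API mentioned above (names are illustrative):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.fs.permission.FsPermission

def createStagingDir(destDir: Path, hadoopConf: Configuration): Unit = {
  // resolve the FileSystem that owns destDir and create it with 0700 (rwx------)
  val fs = destDir.getFileSystem(hadoopConf)
  fs.mkdirs(destDir, new FsPermission("700"))
}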
If the location of the single archive containing Spark jars (spark.yarn.archive) is set, it is
distributed (as ARCHIVE) to spark_libs .
If neither spark.yarn.archive nor spark.yarn.jars is set, you should see the following WARN
message in the logs:
It then finds the directory with jar files under SPARK_HOME (using
YarnCommandBuilderUtils.findJarsDir ).
And all the jars are zipped to a temporary archive, e.g. spark_libs2944590295025097383.zip , that is distributed as ARCHIVE to spark_libs (only when they differ).
If a user jar ( --jar ) was specified on command line, the jar is distributed as FILE to app.jar .
It then distributes additional resources specified in SparkConf for the application, i.e. jars
(under spark.yarn.dist.jars), files (under spark.yarn.dist.files), and archives (under
spark.yarn.dist.archives).
It sets spark.yarn.secondary.jars to the localized paths of the jars (for non-local paths) or their original paths (for local paths).
It updates Spark configuration (with internal configuration settings using the internal
distCacheMgr reference).
Caution: FIXME Where are they used? It appears they are required for ApplicationMaster when it prepares local resources, but what is the sequence of calls that leads to ApplicationMaster ?
createConfArchive(): File
createConfArchive creates an archive with the local config files — log4j.properties and metrics.properties (before distributing it and the other files for ApplicationMaster and executors to use on a YARN cluster).
The archive will also contain all the files under HADOOP_CONF_DIR and YARN_CONF_DIR
environment variables (if defined).
The archive is a temporary file with the spark_conf prefix and .zip extension with the files
above.
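The zipping step could be sketched with plain java.util.zip as below (a hedged sketch; the real createConfArchive differs in which files it includes and how it names them):

import java.io.{File, FileInputStream, FileOutputStream}
import java.util.zip.{ZipEntry, ZipOutputStream}

def zipConfFiles(files: Seq[File]): File = {
  // temporary archive with the spark_conf prefix and .zip extension
  val archive = File.createTempFile("spark_conf", ".zip")
  val out = new ZipOutputStream(new FileOutputStream(archive))
  try {
    files.filter(_.isFile).foreach { f =>
      out.putNextEntry(new ZipEntry(f.getName))
      val in = new FileInputStream(f)
      try {
        val buf = new Array[Byte](8192)
        Iterator.continually(in.read(buf)).takeWhile(_ != -1).foreach(out.write(buf, 0, _))
      } finally in.close()
      out.closeEntry()
    }
  } finally out.close()
  archive
}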
copyFileToRemote(
destDir: Path,
srcPath: Path,
replication: Short,
force: Boolean = false,
destName: Option[String] = None): Path
copyFileToRemote copies srcPath to the remote file system destDir (if needed) and returns the destination path resolved following symlinks and mount points.
Unless force is enabled (it is disabled by default), copyFileToRemote will only copy srcPath when the source (of srcPath ) and target (of destDir ) file systems are different.
copyFileToRemote copies srcPath to destDir and sets 644 permissions, i.e. world-wide readable.
If force is disabled and the file systems are the same, copyFileToRemote will only print out the following INFO message to the logs:
INFO Client: Source and destination file systems are the same. Not copying [srcPath]
Ultimately, copyFileToRemote returns the destination path resolved following symlinks and
mount points.
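The copy-and-chmod behaviour can be approximated with Hadoop's FileUtil as in the following sketch (illustrative only; the real method also honours the replication factor and resolves symlinks):

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileUtil, Path}
import org.apache.hadoop.fs.permission.FsPermission

def copyToRemote(srcPath: Path, destDir: Path, hadoopConf: Configuration): Path = {
  val srcFs = srcPath.getFileSystem(hadoopConf)
  val destFs = destDir.getFileSystem(hadoopConf)
  val destPath = new Path(destDir, srcPath.getName)
  // copy without deleting the source, then make the copy world-readable (644)
  FileUtil.copy(srcFs, srcPath, destFs, destPath, false, hadoopConf)
  destFs.setPermission(destPath, new FsPermission("644"))
  destPath
}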
populateClasspath(
args: ClientArguments,
conf: Configuration,
sparkConf: SparkConf,
env: HashMap[String, String],
extraClassPath: Option[String] = None): Unit
It merely adds the following entries to the CLASSPATH key in the input env :
1. The optional extraClassPath (which is first changed to include paths on YARN cluster
machines).
Caution FIXME
6. (unless the optional spark.yarn.archive is defined) All the local jars in spark.yarn.jars
(which are first changed to be paths on YARN cluster machines).
is not set)
You should see the result of executing populateClasspath when you enable DEBUG logging level.
getClusterPath replaces the value of spark.yarn.config.gatewayPath (in the input path) with the value of spark.yarn.config.replacementPath.
addClasspathEntry is a private helper method to add the input path to the CLASSPATH key in the input env .
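A minimal sketch of appending an entry to that CLASSPATH key (the helper below uses the JVM's path separator for illustration; the actual code uses YARN's cross-platform separator):

import java.io.File
import scala.collection.mutable.HashMap

def addClasspathEntry(path: String, env: HashMap[String, String]): Unit = {
  val current = env.getOrElse("CLASSPATH", "")
  env("CLASSPATH") = if (current.isEmpty) path else current + File.pathSeparator + path
}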
distribute(
path: String,
resType: LocalResourceType = LocalResourceType.FILE,
destName: Option[String] = None,
targetDir: Option[String] = None,
appMasterOnly: Boolean = false): (Boolean, String)
distribute checks whether the input path is of the local: URI scheme and returns a localized path for a non-local path, or simply the input path for a local one.
distribute returns a pair with the first element being a flag for the input path being local or non-local, and the other element for the local or localized path.
For a non-local path that has not been distributed already, distribute copies the input path to the remote file system (if needed) and adds the path to the application's distributed cache.
buildPath is a helper method to join all the path components using the directory separator,
i.e. org.apache.hadoop.fs.Path.SEPARATOR.
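In other words, roughly (a one-line sketch):

import org.apache.hadoop.fs.Path

def buildPath(components: String*): String = components.mkString(Path.SEPARATOR)
// e.g. buildPath("{{PWD}}", "__spark_conf__") gives "{{PWD}}/__spark_conf__"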
isClusterMode is an internal flag that says whether the Spark application was deployed in cluster or client deploy mode. The flag is enabled for cluster deploy mode, i.e. true .
Caution FIXME Replace the internal fields used below with their true meanings.
libraryPaths: spark.driver.extraLibraryPath and spark.driver.libraryPath (cluster deploy mode) or spark.yarn.am.extraLibraryPath (client deploy mode)
--class command-line argument for ApplicationMaster: args.userClass (cluster deploy mode only)
Application master class: org.apache.spark.deploy.yarn.ApplicationMaster (cluster deploy mode) or org.apache.spark.deploy.yarn.ExecutorLauncher (client deploy mode)
When the isClusterMode flag is enabled, the internal reference to YARN’s YarnClient is
used to stop application.
SPARK_YARN_MODE flag
SPARK_YARN_MODE flag controls…FIXME
Note: Any environment variable with the SPARK_ prefix is propagated to all (remote) processes.
It is enabled (i.e. true ) when SparkContext is created for Spark on YARN in client deploy
mode, when Client sets up an environment to launch ApplicationMaster container (and,
what is currently considered deprecated, a Spark application was deployed to a YARN
cluster).
yarnClient is the internal reference to YARN's YarnClient that is used to create and submit a YARN application (for your Spark application) and to kill it (killApplication).
yarnClient is inited and started when Client submits a Spark application to a YARN
cluster.
Note: When you execute the main method of the Client standalone application, say using org.apache.spark.deploy.yarn.Client , you will see the following WARN message in the logs unless the application was started through spark-submit:
WARN Client: WARNING: This client is deprecated and will be removed in a future version of
stop(): Unit
stop closes the internal LauncherBackend and stops the internal YarnClient.
It also clears SPARK_YARN_MODE flag (to allow switching between cluster types).
When the Spark application finishes with a status other than successful, a SparkException is thrown:
SparkException with the message "Application [appId] finished with failed status".
monitorApplication Method
monitorApplication(
appId: ApplicationId,
returnOnRunning: Boolean = false,
logApplicationReport: Boolean = true): (YarnApplicationState, FinalApplicationStatus)
monitorApplication continuously reports the status of the Spark application appId until the application state is one of the following YarnApplicationState s:
FINISHED
FAILED
KILLED
Unless logApplicationReport is disabled, it prints the following INFO message to the logs:
If logApplicationReport and DEBUG log level are enabled, it prints report details every time
interval to the logs:
For INFO log level it prints report details only when the application state changes.
For states FINISHED , FAILED or KILLED , cleanupStagingDir is called and the method
finishes by returning a pair of the current state and the final application status.
If returnOnRunning is enabled (it is disabled by default) and the application state turns
RUNNING , the method returns a pair of the current state RUNNING and the final application
status.
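A simplified sketch of such a monitoring loop over Hadoop's YarnClient (not the actual monitorApplication code; logging and returnOnRunning handling are omitted, and the interval is a plain parameter here):

import org.apache.hadoop.yarn.api.records.{ApplicationId, FinalApplicationStatus, YarnApplicationState}
import org.apache.hadoop.yarn.client.api.YarnClient

def monitor(yarnClient: YarnClient, appId: ApplicationId, intervalMs: Long)
    : (YarnApplicationState, FinalApplicationStatus) = {
  val terminal = Set(YarnApplicationState.FINISHED,
                     YarnApplicationState.FAILED,
                     YarnApplicationState.KILLED)
  var report = yarnClient.getApplicationReport(appId)  // ask YARN for a report
  while (!terminal(report.getYarnApplicationState)) {
    Thread.sleep(intervalMs)                           // the report interval
    report = yarnClient.getApplicationReport(appId)
  }
  (report.getYarnApplicationState, report.getFinalApplicationStatus)
}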
The current state is recorded for future checks (in the loop).
cleanupStagingDir Method
cleanupStagingDir clears the staging directory of an application.
It uses spark.yarn.stagingDir setting or falls back to a user’s home directory for the staging
directory. If cleanup is enabled, it deletes the entire staging directory for the application.
reportLauncherState Method
YarnRMClient
YarnRMClient is responsible for registering and unregistering a Spark application (in the form of an ApplicationMaster) with the YARN ResourceManager.
YarnRMClient tracks the application attempt identifiers, the maximum number of application attempts, and the uiHistoryAddress .
Refer to Logging.
register(
driverUrl: String,
driverRef: RpcEndpointRef,
conf: YarnConfiguration,
sparkConf: SparkConf,
uiAddress: String,
uiHistoryAddress: String,
securityMgr: SecurityManager,
localResources: Map[String, LocalResource]): YarnAllocator
register creates an AMRMClient, initializes it (using the input YarnConfiguration) and starts it immediately.
You should see the following INFO message in the logs (in stderr in YARN):
register then registers the ApplicationMaster (using the local hostname, the port 0 and the input uiAddress ).
Note: The input uiAddress is the web UI of the Spark application and is specified using the SparkContext (when the application runs in cluster deploy mode) or using the spark.driver.appUIAddress property.
In the end, register creates a new YarnAllocator (using the input parameters of
register and the internal AMRMClient).
It basically checks that ApplicationMaster is registered and, only when it is, requests the internal AMRMClient to unregister.
getMaxRegAttempts reads the YARN and Spark configuration settings and returns the maximum number of application attempts before ApplicationMaster registration with YARN is considered unsuccessful (and so the Spark application).
The return value is the minimum of the configuration settings of YARN and Spark.
getAttemptId(): ApplicationAttemptId
getAttemptId returns YARN's ApplicationAttemptId (of the Spark application to which the ApplicationMaster's container was assigned).
getAmIpFilterParams Method
Caution FIXME
ApplicationMaster
Note: From the official documentation of Apache Hadoop YARN (with some minor changes of mine): The per-application ApplicationMaster is actually a framework-specific library and is tasked with negotiating cluster resources from the YARN ResourceManager and working with the YARN NodeManager(s) to execute and monitor the tasks.
ExecutorLauncher is a custom ApplicationMaster for client deploy mode only, for the purpose of distinguishing client and cluster deploy modes when using ps or jps .
Note:
$ jps -lm
71253 org.apache.spark.deploy.yarn.ExecutorLauncher --arg 192.168.99.1:50188 --properties-file /tmp/hadoop-jacek/nm-local-dir/usercache/jacek/appcache/.../__spark_conf__/__spark_conf__.prope
Flag to…FIXME
yarnConf: Hadoop's YarnConfiguration. Created using SparkHadoopUtil.newConfiguration.
exitCode: 0 FIXME
rpcEnv: (uninitialized) The sparkYarnAM RPC environment from a Spark application submitted to YARN in client deploy mode, or the sparkDriver RPC environment from the Spark application submitted to YARN in cluster deploy mode.
isClusterMode: true (when --class was specified). Flag…FIXME
maxNumExecutorFailures FIXME
maxNumExecutorFailures Property
Caution FIXME
ApplicationMasterArguments
YarnRMClient
reporterThread Method
Caution FIXME
Caution FIXME
reference (to be sc ).
addAmIpFilter(): Unit
In cluster deploy mode (when ApplicationMaster runs with web UI), it sets the spark.ui.filters system property (with the filter name) and registers every filter parameter as a [key] -> [value] system property.
In client deploy mode (when ApplicationMaster runs on another JVM or even host than web UI), it simply sends an AddWebUIFilter to ApplicationMaster (namely to AMEndpoint RPC Endpoint).
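A hedged sketch of the cluster deploy mode case, assuming Hadoop YARN's AmIpFilter and illustrative proxy parameters (the exact filter parameters come from YARN at runtime):

// class name per Hadoop YARN; the parameter values below are placeholders
val amFilter = "org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter"
val params = Map(
  "PROXY_HOSTS" -> "resourcemanager-host",
  "PROXY_URI_BASES" -> "http://resourcemanager-host:8088/proxy/application_id")

sys.props("spark.ui.filters") = amFilter
params.foreach { case (k, v) => sys.props(s"spark.$amFilter.param.$k") = v }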
finish Method
Caution FIXME
YarnRMClient reference).
node.
Refer to Logging.
run(): Int
(only in cluster deploy mode) run sets cluster deploy mode-specific settings and sets
the application attempt id (from YARN).
run also schedules re-login from the keytab (to renew the security credentials periodically) when a principal and keytab are specified.
In the end, run registers ApplicationMaster (with YARN ResourceManager) for the Spark
application — either calling runDriver (in cluster deploy mode) or runExecutorLauncher
(for client deploy mode).
In case of an exception, you should see the following ERROR message in the logs and run finishes with FAILED final application status.
Tip: Read the note in Creating RpcEnv to learn the meaning of the clientMode input argument. clientMode is enabled for the so-called client-mode ApplicationMaster, which is when a Spark application is submitted to YARN in client deploy mode.
runExecutorLauncher then waits until the driver accepts connections and creates
RpcEndpointRef to communicate.
Internally, runDriver registers web UI security filters and starts a Spark application (on a separate Thread).
runDriver then waits until the Spark application's SparkContext is available and accesses the current RpcEnv (and saves it as the internal rpcEnv).
Note: runDriver uses SparkEnv to access the current RpcEnv that the Spark application's SparkContext manages.
runDriver creates the YarnAM RPC endpoint (using the spark.driver.host and spark.driver.port properties for the driver's address), then registers the ApplicationMaster and requests resources (using the Spark application's RpcEnv, the driver's RPC endpoint reference, webUrl if web UI is enabled and the input securityMgr ).
If the Spark application has not started in spark.yarn.am.waitTime time, runDriver reports an IllegalStateException :
If TimeoutException is reported while waiting for the Spark application to start, you should see the following ERROR message in the logs and runDriver finishes with FAILED final application status and the error code 13 .
ERROR SparkContext did not initialize after waiting for [spark.yarn.am.waitTime] ms. Please check earlier log output for errors. Failing the application.
startUserApplication(): Thread
Internally, when startUserApplication is executed, you should see the following INFO
message in the logs:
startUserApplication takes the user-specified jars and maps them to use the file:
protocol.
startUserApplication then creates a class loader to load the main class of the Spark
application given the precedence of the Spark system jars and the user-specified jars.
startUserApplication loads the main class (using the custom class loader created above
with the user-specified jars) and creates a reference to the main method.
startUserApplication starts a Java Thread (with the name Driver) that invokes the main method (with the user arguments).
When the main method (of the Spark application) finishes successfully, the Driver thread will finish with SUCCEEDED final application status and code status 0 and you should see the following DEBUG message in the logs:
Any exceptions in the Driver thread are reported with a corresponding ERROR message in the logs, FAILED final application status, and an appropriate code status.
// SparkUserAppException
ERROR User application exited with status [exitCode]
// non-SparkUserAppException
ERROR User class threw exception: [cause]
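The reflective launch can be sketched as follows (illustrative only; the real startUserApplication also handles SparkUserAppException and records the final application status):

def startDriverThread(mainClassName: String,
                      userArgs: Array[String],
                      classLoader: ClassLoader): Thread = {
  val mainClass = Class.forName(mainClassName, true, classLoader)
  val mainMethod = mainClass.getMethod("main", classOf[Array[String]])
  val thread = new Thread {
    override def run(): Unit = mainMethod.invoke(null, userArgs)  // run the user's main
  }
  thread.setContextClassLoader(classLoader)
  thread.setName("Driver")
  thread.start()
  thread
}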
registerAM(
_sparkConf: SparkConf,
_rpcEnv: RpcEnv,
driverRef: RpcEndpointRef,
uiAddress: String,
securityMgr: SecurityManager): Unit
Internally, registerAM first takes the application and attempt ids, and creates the URL of
Spark History Server for the Spark application, i.e. [address]/history/[appId]/[attemptId] ,
by substituting Hadoop variables (using the internal YarnConfiguration) in the optional
spark.yarn.historyServer.address setting.
registerAM prints YARN launch context diagnostic information (with command, environment and resources).
Command-Line Parameters — ApplicationMasterArguments class
ApplicationMaster uses ApplicationMasterArguments class to handle command-line
parameters.
ApplicationMasterArguments is created right after main method has been executed for args
command-line parameters.
--arg ARG — an argument to be passed to the Spark application's main class. There can be multiple --arg arguments.
When an unsupported parameter is found the following message is printed out to standard
error output and ApplicationMaster exits with the exit code 1 .
localResources Property
When ApplicationMaster is instantiated, it computes internal localResources collection of
YARN’s LocalResource by name based on the internal spark.yarn.cache.* configuration
settings.
It starts by reading the internal Spark configuration settings (that were earlier set when
Client prepared local resources to distribute):
spark.yarn.cache.filenames
spark.yarn.cache.sizes
spark.yarn.cache.timestamps
spark.yarn.cache.visibilities
spark.yarn.cache.types
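A minimal sketch of how these parallel, comma-separated lists could be read back from a SparkConf and paired up by position (illustrative; the helper name and tuple layout are not Spark's):

import org.apache.spark.SparkConf

def cacheEntries(conf: SparkConf): Seq[(String, Long, Long, String, String)] = {
  def list(key: String): Seq[String] =
    conf.getOption(key).map(_.split(",").toSeq).getOrElse(Seq.empty)
  val names  = list("spark.yarn.cache.filenames")
  val sizes  = list("spark.yarn.cache.sizes").map(_.toLong)
  val stamps = list("spark.yarn.cache.timestamps").map(_.toLong)
  val vis    = list("spark.yarn.cache.visibilities")
  val types  = list("spark.yarn.cache.types")
  // pair the i-th entries of every list together
  names.indices.map(i => (names(i), sizes(i), stamps(i), vis(i), types(i)))
}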
Ultimately, it removes the cache-related settings from the Spark configuration and system
properties.
spark.ui.port to 0
spark.master as yarn
spark.submit.deployMode as cluster
Caution FIXME Why are the system properties required? Who’s expecting them?
isClusterMode is an internal flag that is enabled (i.e. true ) for cluster mode.
Specifically, it says whether the main class of the Spark application (through the --class command-line argument) was specified or not. That is how the developers decided to inform ApplicationMaster about being run in cluster mode when Client creates YARN's ContainerLaunchContext (to launch ApplicationMaster ).
isClusterMode is used to set additional system properties in run and runDriver (the flag is enabled).
Besides, isClusterMode controls the default final status of a Spark application being FinalApplicationStatus.FAILED (when the flag is enabled) or FinalApplicationStatus.UNDEFINED .
isClusterMode also controls whether to set system properties in addAmIpFilter (when the flag is enabled) or send AddWebUIFilter instead (when disabled).
unregister unregisters the ApplicationMaster from the YARN ResourceManager.
It first checks that the ApplicationMaster has not already been unregistered (using the
internal unregistered flag). If so, you should see the following INFO message in the logs:
When ApplicationMaster starts running, it registers a shutdown hook that unregisters the
Spark application from the YARN ResourceManager and cleans up the staging directory.
Internally, it checks the internal finished flag, and if it is disabled, it marks the Spark
application as failed with EXIT_EARLY .
If the internal unregistered flag is disabled, it unregisters the Spark application and cleans
up the staging directory afterwards only when the final status of the ApplicationMaster’s
registration is FinalApplicationStatus.SUCCEEDED or the number of application attempts is
more than allowed.
The shutdown hook runs after the SparkContext is shut down, i.e. the shutdown priority is
one less than SparkContext’s.
ExecutorLauncher
ExecutorLauncher comes with no extra functionality when compared to ApplicationMaster .
It serves as a helper class to run ApplicationMaster under another class name in client
deploy mode.
With the two different class names (pointing at the same class ApplicationMaster ) you can more easily distinguish between ExecutorLauncher (which is really an ApplicationMaster ) in client deploy mode and the ApplicationMaster in cluster deploy mode (e.g. using ps or jps ).
getAttemptId(): ApplicationAttemptId
getAttemptId returns YARN's ApplicationAttemptId (of the Spark application to which the ApplicationMaster's container was assigned).
waitForSparkDriver(): RpcEndpointRef
When executed, you should see the following INFO message in the logs:
Caution: FIXME waitForSparkDriver expects the driver's host and port as the 0-th element in ApplicationMasterArguments.userArgs . Why?
waitForSparkDriver tries to connect to the driver's host and port until the driver accepts the connection.
While waitForSparkDriver tries to connect (while the socket is down), you can see the
following ERROR message and waitForSparkDriver pauses for 100 ms and tries to connect
again (until the waitTime elapses).
Note: sparkDriver RPC environment when the driver lives in the YARN cluster (in cluster deploy mode)
Note: runAMEndpoint is used when ApplicationMaster waits for the driver (in client deploy mode) and runs the driver (in cluster deploy mode).
AMEndpoint — ApplicationMaster RPC Endpoint
onStart Callback
When onStart is called, AMEndpoint communicates with the driver (the driver remote
RPC Endpoint reference) by sending a one-way RegisterClusterManager message with a
reference to itself.
RPC Messages
AddWebUIFilter
AddWebUIFilter(
filterName: String,
filterParams: Map[String, String],
proxyBase: String)
When AddWebUIFilter arrives, you should see the following INFO message in the logs:
It then passes the AddWebUIFilter message on to the driver’s scheduler backend (through
YarnScheduler RPC Endpoint).
RequestExecutors
RequestExecutors(
requestedTotal: Int,
localityAwareTasks: Int,
hostToLocalTaskCount: Map[String, Int])
In case when YarnAllocator is not available yet, you should see the following WARN
message in the logs:
resetAllocatorInterval
When RequestExecutors message arrives, it calls resetAllocatorInterval procedure.
resetAllocatorInterval(): Unit
YarnClusterManager — ExternalClusterManager for YARN
YarnClusterManager is the only currently known ExternalClusterManager in Spark. It creates a TaskScheduler and a SchedulerBackend for YARN.
canCreate Method
YarnClusterManager can handle the yarn master URL only.
createTaskScheduler Method
createTaskScheduler creates a YarnClusterScheduler for cluster deploy mode and a YarnScheduler for client deploy mode.
createSchedulerBackend Method
createSchedulerBackend creates a YarnClusterSchedulerBackend for cluster deploy mode and a YarnClientSchedulerBackend for client deploy mode.
TaskSchedulers for YARN
YarnScheduler
It is a custom TaskSchedulerImpl with ability to compute racks per hosts, i.e. it comes with a
specialized getRackForHost.
Refer to Logging.
YarnClusterScheduler — TaskScheduler for Cluster Deploy Mode
YarnClusterScheduler is the TaskScheduler for Spark on YARN in cluster deploy mode.
While being created, you should see the following INFO message in the logs:
Refer to Logging.
postStartHook Callback
postStartHook calls ApplicationMaster.sparkContextInitialized before the parent’s
postStartHook .
SchedulerBackends for YARN
YarnSchedulerBackend — Foundation for Coarse-Grained Scheduler Backends for YARN
YarnSchedulerBackend is a CoarseGrainedSchedulerBackend that acts as the foundation for
the concrete deploy mode-specific Spark scheduler backends for YARN, i.e.
YarnClientSchedulerBackend and YarnClusterSchedulerBackend for client deploy mode
and cluster deploy mode, respectively.
Environment.
executors are registered (that varies on dynamic allocation being enabled or not).
yarnSchedulerEndpointRef: RPC endpoint reference to the YarnScheduler RPC endpoint. Created when YarnSchedulerBackend is created.
totalExpectedExecutors: (default: 0 ) Total expected number of executors that is used to make sure that sufficient resources are available before task launch requests are accepted. Updated to the final number when Spark on YARN starts (in client or cluster deploy mode).
attemptId: (undefined) YARN's ApplicationAttemptId of a Spark application. Only defined in cluster deploy mode. Set when YarnClusterSchedulerBackend starts (using YARN's ApplicationMaster). Used for applicationAttemptId, which is part of the SchedulerBackend contract.
shouldResetOnAmRegister: Controls whether YarnSchedulerBackend should reset its internal state when another RegisterClusterManager RPC message arrives, which allows resetting internal state after the initial ApplicationManager failed and a new one was registered (which can only happen in client deploy mode). Disabled (i.e. false ) when YarnSchedulerBackend is created.
doRequestTotalExecutors Method
doRequestTotalExecutors sends a RequestExecutors message to the YarnScheduler RPC Endpoint with the input requestedTotal and the internal localityAwareTasks and hostToLocalTaskCount attributes.
Caution FIXME The internal attributes are already set. When and how?
Note: start throws an IllegalArgumentException when the internal appId has not been set yet:
java.lang.IllegalArgumentException: requirement failed: application ID unset
bindToYarn sets the internal appId and attemptId to the value of the input parameters, appId and attemptId , respectively.
applicationAttemptId(): Option[String]
Note: This section is only to take notes about the required components to instantiate the base services.
1. TaskSchedulerImpl
2. SparkContext
sufficientResourcesRegistered(): Boolean
YarnClientSchedulerBackend — SchedulerBackend for YARN in Client Deploy Mode
YarnClientSchedulerBackend is the YarnSchedulerBackend used when a Spark application is submitted to YARN in client deploy mode.
YarnClientSchedulerBackend submits a Spark application when started and waits for the Spark application until it is running.
Refer to Logging.
Tip: Enable DEBUG logging level for the org.apache.hadoop logger to see what happens inside Hadoop YARN:
log4j.logger.org.apache.hadoop=DEBUG
Refer to Logging.
Use with caution though as there will be a flood of messages in the logs every second.
start(): Unit
start creates a Client (to communicate with the YARN ResourceManager) and submits a Spark application to a YARN cluster.
After the application is launched, start starts a MonitorThread state monitor thread. In the meantime it also calls the supertype's start .
start takes the driver's host and port (as hostport ).
start sets the total expected number of executors to the initial number of executors.
Caution: FIXME Why is this part of subtypes since they both set it to the same value?
start submits the Spark application to YARN (through Client) and saves the ApplicationId (with an undefined ApplicationAttemptId) in the parent CoarseGrainedSchedulerBackend.
Caution: FIXME Would be very nice to know why start does so in a NOTE.
start creates and starts monitorThread (to monitor the Spark application and stop the SparkContext when the application finishes).
stop
stop is part of the SchedulerBackend Contract.
It stops the internal helper objects, i.e. monitorThread and client as well as "announces"
the stop to other services through Client.reportLauncherState . In the meantime it also calls
the supertype’s stop .
stop makes sure that the internal client has already been created (i.e. it is not null ), but not necessarily started.
Later, it passes the call on to the supertype's stop and, once the supertype's stop has
finished, it calls YarnSparkHadoopUtil.stopExecutorDelegationTokenRenewer followed by
stopping the internal client.
Eventually, when all went fine, you should see the following INFO message in the logs:
waitForApplication(): Unit
waitForApplication waits until the Spark application is running (using Client.monitorApplication).
If the application has FINISHED , FAILED , or has been KILLED , a SparkException is thrown
with the following message:
Yarn application has already ended! It might have been killed or unable to launch appl
ication master.
You should see the following INFO message in the logs for RUNNING state:
asyncMonitorApplication
asyncMonitorApplication(): MonitorThread
MonitorThread
MonitorThread internal class is to monitor a Spark application submitted to a YARN cluster.
When the call to Client.monitorApplication has finished, it is assumed that the application
has exited. You should see the following ERROR message in the logs:
YarnClusterSchedulerBackend - SchedulerBackend for YARN in Cluster Deploy Mode
YarnClusterSchedulerBackend is a custom YarnSchedulerBackend for Spark on YARN in cluster deploy mode.
This is a scheduler backend that supports multiple application attempts and URLs for
driver’s logs to display as links in the web UI in the Executors tab for the driver.
It uses spark.yarn.app.attemptId under the covers (that the YARN resource manager
sets?).
Refer to Logging.
Creating YarnClusterSchedulerBackend
Creating a YarnClusterSchedulerBackend object requires a TaskSchedulerImpl and
SparkContext objects.
Internally, it first queries ApplicationMaster for attemptId and records the application and
attempt ids.
It then calls the parent’s start and sets the parent’s totalExpectedExecutors to the initial
number of executors.
Internally, it retrieves the container id and through environment variables computes the base
URL.
YarnSchedulerEndpoint RPC Endpoint
It uses the reference to the remote ApplicationMaster RPC Endpoint to send messages to.
Refer to Logging.
RPC Messages
RequestExecutors
RequestExecutors(
requestedTotal: Int,
localityAwareTasks: Int,
hostToLocalTaskCount: Map[String, Int])
extends CoarseGrainedClusterMessage
RequestExecutors is to inform ApplicationMaster about the current requirements for the total
number of executors (as requestedTotal ), including already pending and running executors.
Any issues communicating with the remote ApplicationMaster RPC endpoint are reported
as ERROR messages in the logs:
RemoveExecutor
KillExecutors
AddWebUIFilter
AddWebUIFilter(
filterName: String,
filterParams: Map[String, String],
proxyBase: String)
It firstly sets spark.ui.proxyBase system property to the input proxyBase (if not empty).
If it defines a filter, i.e. the input filterName and filterParams are both not empty, you
should see the following INFO message in the logs:
It then sets spark.ui.filters to be the input filterName in the internal conf SparkConf
attribute.
RegisterClusterManager Message
RegisterClusterManager(am: RpcEndpointRef)
When RegisterClusterManager message arrives, the following INFO message is printed out
to the logs:
The internal reference to the remote ApplicationMaster RPC Endpoint is set (to am ).
RetrieveLastAllocatedExecutorId
When RetrieveLastAllocatedExecutorId is received, YarnSchedulerEndpoint responds with
the current value of currentExecutorIdCounter.
onDisconnected Callback
onDisconnected clears the internal reference to the remote ApplicationMaster RPC Endpoint
You should see the following WARN message in the logs if that happens:
onStop Callback
onStop shuts askAmThreadPool down immediately.
askAmThreadPool is a cached thread pool that creates new threads as needed and reuses previously constructed threads when they are available.
YarnAllocator
YarnAllocator requests containers from the YARN ResourceManager and decides what to do with the containers once YARN grants them (when creating a YarnAllocator ).
YarnAllocator talks to the YARN ResourceManager using YARN's AMRMClient (given when YarnAllocator is created and kept as an internal reference).
YarnAllocator uses the following internal registries and counters (among others):
pendingLossReasonRequests
releasedExecutorLossReasons
executorIdToContainer
numUnexpectedContainerRelease
containerIdToExecutorId
failedExecutorsTimeStamps
executorMemory
memoryOverhead
executorCores
launchContainers
labelExpression
nodeLabelConstructor
containerPlacementStrategy
Refer to Logging.
1. driverUrl
2. driverRef (RpcEndpointRef to the driver's RPC endpoint)
3. YarnConfiguration
4. sparkConf — SparkConf
5. amClient (YARN's AMRMClient)
6. ApplicationAttemptId
7. SecurityManager
All the input parameters for YarnAllocator (but appAttemptId and amClient ) are passed
directly from the input parameters of YarnRMClient .
YarnAllocator sets the internal counters:
numExecutorsRunning to 0
numUnexpectedContainerRelease to 0L
numLocalityAwareTasks to 0
It creates the internal resource as Hadoop YARN's Resource with both executorMemory + memoryOverhead memory and executorCores CPU cores.
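Roughly (a sketch using Hadoop's Resource factory; names are illustrative):

import org.apache.hadoop.yarn.api.records.Resource

// executor memory plus overhead (in MB) and the number of CPU cores per container
def executorResource(executorMemoryMb: Int, memoryOverheadMb: Int, executorCores: Int): Resource =
  Resource.newInstance(executorMemoryMb + memoryOverheadMb, executorCores)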
getNumExecutorsRunning Method
Caution FIXME
updateInternalState Method
Caution FIXME
killExecutor Method
Caution FIXME
requestTotalExecutorsWithPreferredLocalities(
requestedTotal: Int,
localityAwareTasks: Int,
hostToLocalTaskCount: Map[String, Int],
nodeBlacklist: Set[String]): Boolean
If the input requestedTotal is different than the internal targetNumExecutors you should see
the following INFO message in the logs:
requestTotalExecutorsWithPreferredLocalities also passes the node blacklist to the YARN ResourceManager for this application in order to avoid allocating new Containers on the problematic nodes.
updateResourceRequests(): Unit
updateResourceRequests requests new executor containers from (or cancels outstanding requests to) the YARN ResourceManager. It distinguishes between:
1. missing executors, i.e. when the number of executors allocated already or pending does
not match the needs and so there are missing executors.
2. executors to cancel, i.e. when the number of pending executor allocations is positive,
but the number of all the executors is more than Spark needs.
INFO YarnAllocator: Will request [count] executor containers, each with [vCores] cores
and [memory] MB memory including [memoryOverhead] MB overhead
It then splits pending container allocation requests per locality preference of pending tasks
(in the internal hostToLocalTaskCounts registry).
For any new container needed updateResourceRequests adds a container request (using
YARN’s AMRMClient.addContainerRequest).
INFO Canceling requests for [numToCancel] executor container(s) to have a new desired
total [targetNumExecutors] executors.
It checks whether there are pending allocation requests and removes the excess (using
YARN’s AMRMClient.removeContainerRequest). If there are no pending allocation requests,
you should see the WARN message in the logs:
If handleAllocatedContainers did not manage to allocate some containers, you should see
the following DEBUG message in the logs:
runAllocatedContainers checks if the number of executors running is less than the number
of required executors.
If there are executors still missing (and runAllocatedContainers is not in testing mode), runAllocatedContainers schedules execution of an ExecutorRunnable on the ContainerLauncher thread pool.
When runAllocatedContainers catches a non-fatal exception, you should see the following ERROR message in the logs and it immediately releases the container (using the internal AMRMClient).
If YarnAllocator has reached target number of executors, you should see the following
INFO message in the logs:
matchContainerToRequest(
allocatedContainer: Container,
location: String,
containersToUse: ArrayBuffer[Container],
remaining: ArrayBuffer[Container]): Unit
matchContainerToRequest puts the input allocatedContainer in the input containersToUse or remaining collections per available outstanding ContainerRequests that match the priority of the input allocatedContainer , the input location , and the memory and vcore capabilities for Spark executors.
Note The input location can be host, rack, or * (star), i.e. any host.
If there are any outstanding ContainerRequests that meet the requirements, it simply takes
the first one and puts it in the input containersToUse collection. It also removes the
ContainerRequest so it is not submitted again (it uses the internal
AMRMClient[ContainerRequest] ).
processCompletedContainers Method
Note: processCompletedContainers uses the Id, State and Exit status of a completed container.
It looks the host of the container up (in the internal allocatedContainerToHostMap lookup
table). The host may or may not exist in the lookup table.
Caution FIXME The host may or may not exist in the lookup table?
When the host of the completed container has been found, the internal
numExecutorsRunning counter is decremented.
INFO Executor for container [id] exited because of a YARN event (e.g., pre-emption) an
d not because of an error in the running job.
Other exit statuses of the container are considered application failures and reported as a
WARN message in the logs:
WARN Container killed by YARN for exceeding memory limits. [diagnostics] Consider boos
ting spark.yarn.executor.memoryOverhead.
or
WARN Container marked as failed: [id] [host]. Exit status: [containerExitStatus]. Diag
nostics: [containerDiagnostics]
If the executor was recorded in the internal pendingLossReasonRequests lookup table, the
exit reason (as calculated earlier as ExecutorExited ) is sent back for every pending RPC
message recorded.
If no executor was found, the executor and the exit reason are recorded in the internal
releasedExecutorLossReasons lookup table.
In case the container was not in the internal releasedContainers registry, the internal
numUnexpectedContainerRelease counter is increased and a RemoveExecutor RPC
message is sent to the driver (as specified when YarnAllocator was created) to notify about
the failure of the executor.
allocateResources(): Unit
In YARN, you first have to submit requests for YARN resource containers to
Note YARN ResourceManager (using AMRMClient.addContainerRequest) before
claiming them by calling AMRMClient.allocate.
Internally, allocateResources submits requests for new containers and cancels previous
container requests.
allocateResources then claims the containers (using the internal reference to YARN's AMRMClient) with the progress indicator of 0.1f .
You can see the exact moment in the YARN console for the Spark application with the
progress bar at 10%.
If the number of allocated containers is greater than 0 , you should see the following
DEBUG message in the logs (in stderr on YARN):
ResourceManager.
If the number of completed containers is greater than 0 , you should see the following
DEBUG message in the logs (in stderr on YARN):
You should see the following DEBUG message in the logs (in stderr on YARN):
Introduction to Hadoop YARN
ResourceManager is the master daemon that communicates with YARN clients, tracks
resources on the cluster (on NodeManagers), and orchestrates work by assigning tasks
to NodeManagers. It coordinates work of ApplicationMasters and NodeManagers.
YARN currently defines two resources: vcores and memory. vcore is a usage share of a
CPU core.
YARN ResourceManager keeps track of the cluster’s resources while NodeManagers tracks
the local host’s resources.
Proxy Server for viewing application status and logs from outside the cluster.
YARN ResourceManager accepts application submissions, schedules them, and tracks their
status (through ApplicationMasters). A YARN NodeManager registers with the
ResourceManager and provides its local CPUs and memory for resource negotiation.
In a real YARN cluster, there are one ResourceManager (two for High Availability) and
multiple NodeManagers.
YARN ResourceManager
YARN ResourceManager manages the global assignment of compute resources to
applications, e.g. memory, cpu, disk, network, etc.
YARN NodeManager
Each NodeManager tracks its own local resources and communicates its resource
configuration to the ResourceManager, which keeps a running total of the cluster’s
available resources.
YARN ApplicationMaster
YARN ResourceManager manages the global assignment of compute resources to
applications, e.g. memory, cpu, disk, network, etc.
For each running application, a special piece of code called an ApplicationMaster helps
coordinate tasks on the YARN cluster. The ApplicationMaster is the first process run
after the application starts.
One or more tasks that do the actual work (runs in a process) in the container
allocated by YARN.
The application starts and talks to the ResourceManager (running on the master)
for the cluster.
Once all tasks are finished, the ApplicationMaster exits. The last container is de-
allocated from the cluster.
It monitors tasks, restarts failed ones, etc. It can run any type of tasks, be them MapReduce
tasks or Spark tasks.
An ApplicationMaster is like a queen bee that starts creating worker bees (in their own
containers) in the YARN cluster.
Others
A host is the Hadoop term for a computer (also called a node, in YARN terminology).
It can technically also be a single host used for debugging and simple testing.
Master hosts are a small number of hosts reserved to control the rest of the cluster.
Worker hosts are the non-master hosts in the cluster.
A master host is the communication point for a client program. A master host sends
the work to the rest of the cluster, which consists of worker hosts.
The YARN configuration file is an XML file that contains properties. This file is placed in
a well-known location on each host in the cluster and is used to configure the
ResourceManager and NodeManager. By default, this file is named yarn-site.xml .
Once a hold has been granted on a host, the NodeManager launches a process called
a task.
Hadoop YARN
YARN could be considered a cornerstone of Hadoop OS (operating system) for big
distributed data with HDFS as the storage along with YARN as a process scheduler.
YARN is essentially a container system and scheduler designed primarily for use with a
Hadoop-based cluster.
Spark runs on YARN clusters, and can read from and save data to HDFS.
Spark needs distributed file system and HDFS (or Amazon S3, but slower) is a great
choice.
Excellent throughput when Spark and Hadoop are both distributed and co-located on
the same (YARN or Mesos) cluster nodes.
When reading data from HDFS, each InputSplit maps to exactly one Spark partition.
HDFS distributes files across data nodes; when a file is stored on the filesystem, it is split into partitions.
ContainerExecutors
LinuxContainerExecutor and Docker
WindowsContainerExecutor
(video) HUG Meetup Apr 2016: The latest of Apache Hadoop YARN and running your
docker apps on YARN
Setting up YARN Cluster
YARN_CONF_DIR
HADOOP_CONF_DIR
HADOOP_HOME
Kerberos
Microsoft incorporated Kerberos authentication into Windows 2000
Two open source Kerberos implementations exist: the MIT reference implementation
and the Heimdal Kerberos implementation.
YARN supports user authentication via Kerberos (so do the other services: HDFS, HBase,
Hive).
Caution FIXME
Hadoop Security
ConfigurableCredentialManager
Caution FIXME
Caution FIXME
credentialRenewer Method
Caution FIXME
Caution FIXME
ClientDistributedCacheManager
ClientDistributedCacheManager is a mere wrapper to hold the collection of cache-related
resource entries CacheEntry (as distCacheEntries ) to add resources to and later update
Spark configuration with files to distribute.
addResource(
fs: FileSystem,
conf: Configuration,
destPath: Path,
localResources: HashMap[String, LocalResource],
resourceType: LocalResourceType,
link: String,
statCache: Map[URI, FileStatus],
appMasterOnly: Boolean = false): Unit
updateConfiguration sets the following internal Spark configuration settings in the input SparkConf:
spark.yarn.cache.filenames
spark.yarn.cache.sizes
spark.yarn.cache.timestamps
spark.yarn.cache.visibilities
spark.yarn.cache.types
YarnSparkHadoopUtil
YarnSparkHadoopUtil is…FIXME
Refer to Logging.
startCredentialUpdater Method
Caution FIXME
Caution FIXME
addPathToEnvironment Method
Caution FIXME
startExecutorDelegationTokenRenewer
Caution FIXME
stopExecutorDelegationTokenRenewer
Caution FIXME
getApplicationAclsForYarn Method
Caution FIXME
MEMORY_OVERHEAD_FACTOR
MEMORY_OVERHEAD_FACTOR is a constant that equals 10% for memory overhead.
MEMORY_OVERHEAD_MIN
MEMORY_OVERHEAD_MIN is a constant that equals 384L for memory overhead.
getContainerId: ContainerId
getContainerId is a private[spark] method that gets YARN's ContainerId from the CONTAINER_ID environment variable.
Settings
The following settings (aka system properties) are specific to Spark on YARN.
spark.yarn.am.port
spark.yarn.am.port (default: 0 ) is the port that ApplicationMaster uses to create the sparkYarnAM RPC environment.
spark.yarn.am.waitTime
spark.yarn.am.waitTime (default: 100s ) is in milliseconds unless the unit is specified.
spark.yarn.app.id
spark.yarn.credentials.renewalTime
spark.yarn.credentials.renewalTime (default: Long.MaxValue ms) is an internal setting for the time of the next credentials renewal.
See prepareLocalResources.
spark.yarn.credentials.updateTime
spark.yarn.credentials.updateTime (default: Long.MaxValue ms) is an internal setting for the time of the next credentials update.
spark.yarn.rolledLog.includePattern
spark.yarn.rolledLog.includePattern
spark.yarn.rolledLog.excludePattern
spark.yarn.rolledLog.excludePattern
spark.yarn.am.nodeLabelExpression
spark.yarn.am.nodeLabelExpression
spark.yarn.am.attemptFailuresValidityInterval
spark.yarn.am.attemptFailuresValidityInterval
spark.yarn.tags
spark.yarn.tags
spark.yarn.am.extraLibraryPath
spark.yarn.am.extraLibraryPath
spark.yarn.am.extraJavaOptions
spark.yarn.am.extraJavaOptions
spark.yarn.scheduler.initial-allocation.interval
spark.yarn.scheduler.initial-allocation.interval (default: 200ms ) controls the initial
allocation interval.
spark.yarn.scheduler.heartbeat.interval-ms
spark.yarn.scheduler.heartbeat.interval-ms (default: 3s ) is the heartbeat interval to YARN
ResourceManager.
spark.yarn.max.executor.failures
spark.yarn.max.executor.failures is an optional setting that sets the maximum number of executor failures before considering the Spark application failed.
Caution FIXME
spark.yarn.maxAppAttempts
spark.yarn.maxAppAttempts is the maximum number of attempts to register ApplicationMaster before deploying the Spark application to YARN is considered failed.
spark.yarn.user.classpath.first
Caution FIXME
spark.yarn.archive
spark.yarn.archive is the location of the archive containing jar files with Spark classes. It cannot be a local: URI.
spark.yarn.queue
spark.yarn.queue (default: default ) is the name of the YARN resource queue that Client uses to submit a Spark application to.
You can specify the value using spark-submit’s --queue command-line argument.
spark.yarn.jars
spark.yarn.jars is the location of the Spark jars.
--conf spark.yarn.jar=hdfs://master:8020/spark/spark-assembly-2.0.0-hadoop2.7.2.jar
spark.yarn.report.interval
spark.yarn.report.interval (default: 1s ) is the interval between reports of the current Spark application status.
It is used in Client.monitorApplication.
spark.yarn.dist.jars
spark.yarn.dist.jars (default: empty) is a collection of additional jars to distribute.
It is used when Client distributes additional resources as specified using --jars command-
line option for spark-submit.
spark.yarn.dist.files
spark.yarn.dist.files (default: empty) is a collection of additional files to distribute.
spark.yarn.dist.archives
spark.yarn.dist.archives (default: empty) is a collection of additional archives to distribute.
spark.yarn.principal
spark.yarn.principal — See the corresponding --principal command-line option for spark-
submit.
spark.yarn.keytab
spark.yarn.keytab — See the corresponding --keytab command-line option for spark-
submit.
spark.yarn.submit.file.replication
spark.yarn.submit.file.replication is the replication factor (number) for files uploaded by
Spark to HDFS.
spark.yarn.config.gatewayPath
spark.yarn.config.gatewayPath (default: null ) is a path that is present on gateway nodes, and will be replaced with the corresponding path in cluster machines.
spark.yarn.config.replacementPath
spark.yarn.config.replacementPath (default: null ) is the path to use as a replacement for spark.yarn.config.gatewayPath when launching processes in the YARN cluster.
spark.yarn.historyServer.address
spark.yarn.historyServer.address is the optional address of the History Server.
spark.yarn.access.namenodes
spark.yarn.access.namenodes (default: empty) is a list of extra NameNode URLs for which to
request delegation tokens. The NameNode that hosts fs.defaultFS does not need to be
listed here.
spark.yarn.cache.types
spark.yarn.cache.types is an internal setting…
spark.yarn.cache.visibilities
spark.yarn.cache.visibilities is an internal setting…
spark.yarn.cache.timestamps
spark.yarn.cache.timestamps is an internal setting…
spark.yarn.cache.filenames
spark.yarn.cache.filenames is an internal setting…
spark.yarn.cache.sizes
spark.yarn.cache.sizes is an internal setting…
spark.yarn.cache.confArchive
spark.yarn.cache.confArchive is an internal setting…
spark.yarn.secondary.jars
spark.yarn.secondary.jars is…
spark.yarn.executor.nodeLabelExpression
spark.yarn.executor.nodeLabelExpression is a node label expression for executors.
spark.yarn.containerLauncherMaxThreads
spark.yarn.containerLauncherMaxThreads (default: 25 )…FIXME
spark.yarn.executor.failuresValidityInterval
spark.yarn.executor.failuresValidityInterval (default: -1L ) is an interval (in milliseconds)
after which Executor failures will be considered independent and not accumulate towards
the attempt count.
spark.yarn.submit.waitAppCompletion
spark.yarn.submit.waitAppCompletion (default: true ) is a flag to control whether to wait for
the application to finish before exiting the launcher process in cluster mode.
spark.yarn.am.cores
spark.yarn.am.cores (default: 1 ) sets the number of CPU cores for ApplicationMaster’s
JVM.
spark.yarn.driver.memoryOverhead
spark.yarn.driver.memoryOverhead (in MiBs)
spark.yarn.am.memoryOverhead
spark.yarn.am.memoryOverhead (in MiBs)
spark.yarn.am.memory
spark.yarn.am.memory (default: 512m ) sets the memory size of ApplicationMaster’s JVM (in
MiBs)
spark.yarn.stagingDir
spark.yarn.stagingDir is a staging directory used while submitting applications.
spark.yarn.preserve.staging.files
spark.yarn.preserve.staging.files (default: false ) controls whether to preserve the staged files (in the staging directory) at the end of the job instead of deleting them.
spark.yarn.credentials.file
spark.yarn.credentials.file …
spark.yarn.launchContainers
spark.yarn.launchContainers (default: true ) is a flag used for testing only, to control whether executors are actually launched in allocated YARN containers.
Spark Standalone
Standalone Master (often written standalone Master) is the resource manager for the Spark
Standalone cluster (read Standalone Master for in-depth coverage).
Standalone Worker (aka standalone slave) is the worker in the Spark Standalone cluster
(read Standalone Worker for in-depth coverage).
Standalone cluster mode is subject to the constraint that only one executor can be allocated
on each worker per application.
Once a Spark Standalone cluster has been started, you can access it using spark://
master URL (read Master URLs).
You can deploy, i.e. spark-submit , your applications to Spark Standalone in client or
cluster deploy mode (read Deployment modes).
Deployment modes
Caution FIXME
Keeps track of task ids and executor ids, executors per host, hosts per rack
You can give one or many comma-separated masters URLs in spark:// URL.
Caution FIXME
scheduleExecutorsOnWorkers
Caution FIXME
scheduleExecutorsOnWorkers(
app: ApplicationInfo,
usableWorkers: Array[WorkerInfo],
spreadOutApps: Boolean): Array[Int]
SPARK_WORKER_INSTANCES (and
SPARK_WORKER_CORES)
There is really no need to run multiple workers per machine in Spark 1.5 (perhaps in 1.4,
too). You can run multiple executors on the same machine with one worker.
You can set up the number of cores as an command line argument when you start a worker
daemon using --cores .
Since the change SPARK-1706 Allow multiple executors per worker in Standalone mode in
Spark 1.4 it’s currently possible to start multiple executors in a single JVM process of a
worker.
To launch multiple executors on a machine you start multiple standalone workers, each with
its own JVM. It introduces unnecessary overhead due to these JVM processes, provided
that there are enough cores on that worker.
If you are running Spark in standalone mode on memory-rich nodes it can be beneficial to
have multiple worker instances on the same node as a very large heap size has two
disadvantages:
Mesos and YARN can, out of the box, support packing multiple, smaller executors onto the
same physical host, so requesting smaller executors doesn’t mean your application will have
fewer overall resources.
SparkDeploySchedulerBackend
SparkDeploySchedulerBackend is the Scheduler Backend for Spark Standalone, i.e. it is used when you create a SparkContext using the spark:// master URL.
AppClient
AppClient is an interface to allow Spark applications to talk to a Standalone cluster (using a RPC Environment).
AppClient registers AppClient RPC endpoint (using ClientEndpoint class) to a given RPC Environment.
AppClient uses a daemon cached thread pool ( askAndReplyThreadPool ) whose threads are used to send messages to the master.
When AppClient starts, AppClient.start() method is called that merely registers AppClient
RPC Endpoint.
Others
killExecutors
start
stop
It is a ThreadSafeRpcEndpoint that knows about the RPC endpoint of the primary active
standalone Master (there can be a couple of them, but only one can be active and hence
primary).
An AppClient registers the Spark application to a single master (regardless of the number of
the standalone masters given in the master URL).
An AppClient tries connecting to a standalone master 3 times every 20 seconds per master
before giving up. They are not configurable parameters.
RegisteredApplication is a one-way message from the primary master to confirm successful application registration. It comes with the application id and the master's RPC endpoint reference.
The AppClientListener gets notified about the event via listener.connected(appId) with
appId being an application id.
ApplicationRemoved is received from the primary master to inform about having removed the Spark application.
It can come from the standalone Master after a kill request from Web UI, application has
finished properly or the executor where the application was still running on has been killed,
failed, lost or exited.
stop the AppClient after the SparkContext has been stopped (and so should the running
application on the standalone cluster).
Settings
spark.deploy.spreadOut
spark.deploy.spreadOut (default: true ) controls whether standalone Master should spread applications out across nodes or consolidate them onto as few nodes as possible.
Standalone Master — Cluster Manager of Spark Standalone
A standalone Master is pretty much the Master RPC Endpoint that you can access using
RPC port (low-level operation communication) or Web UI.
workers ( workers )
applications ( apps )
endpointToApp
addressToApp
completedApps
nextAppNumber
drivers ( drivers )
completedDrivers
nextDriverNumber
The following INFO shows up when the Master endpoint starts up ( Master#onStart is
called):
Master can be started and stopped using custom management scripts for standalone
Master.
FIXME
hadoopConf
Used when…FIXME
Master WebUI
FIXME MasterWebUI
MasterWebUI is the Web UI server for the standalone master. Master starts the Web UI to listen to HTTP requests (on the web UI port, 8080 by default).
States
Master can be in the following states:
STANDBY
ALIVE
RECOVERING
COMPLETING_RECOVERY
Caution FIXME
RPC Environment
The org.apache.spark.deploy.master.Master class starts sparkMaster RPC environment.
The Master endpoint starts the daemon single-thread scheduler pool master-forward-
message-thread . It is used for worker management, i.e. removing any timed-out workers.
Metrics
Master uses Spark Metrics System (via MasterSource ) to report metrics about internal
status.
Caution: FIXME Review org.apache.spark.metrics.MetricsConfig. How to access the metrics for master? See Master#onStart. Review masterMetricsSystem and applicationMetricsSystem.
REST Server
The standalone Master starts the REST Server service for alternative application submission
that is supposed to work across Spark versions. It is enabled by default (see
spark.master.rest.enabled) and used by spark-submit for the standalone cluster mode, i.e. --deploy-mode is cluster .
The following INFOs show up when the Master Endpoint starts up ( Master#onStart is
called) with REST Server enabled:
Recovery Mode
A standalone Master can run with recovery mode enabled and be able to recover state
among the available swarm of masters. By default, there is no recovery, i.e. no persistence
and no election.
Note: Only a master can schedule tasks so having one always on is important for cases where you want to launch new tasks. Running tasks are unaffected by the state of the master.
The Recovery Mode enables election of the leader master among the masters.
Tip: Check out the exercise Spark Standalone - Using ZooKeeper for High-Availability of Master.
Leader Election
Master endpoint is LeaderElectable , i.e. FIXME
Caution FIXME
RPC Messages
Master communicates with drivers, executors and configures itself using RPC messages.
CompleteRecovery
RevokedLeadership
RegisterApplication
ExecutorStateChanged
DriverStateChanged
Heartbeat
MasterChangeAcknowledged
WorkerSchedulerStateResponse
UnregisterApplication
CheckForWorkerTimeOut
RegisterWorker
RequestSubmitDriver
RequestKillDriver
RequestDriverStatus
RequestMasterState
BoundPortsRequest
RequestExecutors
KillExecutors
RegisterApplication event
A RegisterApplication event is sent by AppClient to the standalone Master. The event
holds information about the application being deployed ( ApplicationDescription ) and the
driver’s endpoint reference.
executor’s memory, command, appUiUrl, and user with optional eventLogDir and
eventLogCodec for Event Logs, and the number of cores per executor.
A driver has a state, i.e. driver.state and when it’s in DriverState.RUNNING state the driver
has been assigned to a worker for execution.
The message holds information about the id and name of the driver.
A driver can be running on a single worker while a worker can have many drivers running.
When a worker receives a LaunchDriver message, it prints out the following INFO:
It then creates a DriverRunner and starts it. It starts a separate JVM process.
Workers' free memory and cores are considered when assigning some to waiting drivers
(applications).
DriverRunner
Warning It seems a dead piece of code. Disregard it for now.
It is a java.lang.Process
Internals of org.apache.spark.deploy.master.Master
The above command suspends (suspend=y) the process until a JPDA debugging client, e.g. your IDE, attaches to it.
When Master starts, it first creates the default SparkConf configuration whose values it
then overrides using environment variables and command-line options.
It starts RPC Environment with necessary endpoints and lives until the RPC environment
terminates.
Worker Management
Master uses master-forward-message-thread to schedule a thread every
spark.worker.timeout to check workers' availability and remove timed-out workers.
When a worker hasn’t responded for spark.worker.timeout , it is assumed dead and the
following WARN message appears in the logs:
SPARK_PUBLIC_DNS (default: hostname) - the custom master hostname for the web UI’s HTTP URL.
spark-defaults.conf - the file from which all properties that start with the spark. prefix are loaded.
Settings
Caution FIXME Where are the `RETAINED_` properties used?
spark.dead.worker.persistence (default: 15 )
StandaloneRecoveryModeFactory
RpcEnv
RpcAddress
SecurityManager
SparkConf
startRpcEnvAndEndpoint Method
startRpcEnvAndEndpoint(
host: String,
port: Int,
webUiPort: Int,
conf: SparkConf): (RpcEnv, Int, Option[Int])
startRpcEnvAndEndpoint …FIXME
main …FIXME
$ ./bin/spark-class org.apache.spark.deploy.master.Master
...FIXME
Standalone Worker
Standalone Worker (aka standalone slave) is a logical node in a Spark Standalone cluster.
Worker is a ThreadSafeRpcEndpoint that uses Worker for the RPC endpoint name when
registered.
You can have one or many standalone workers in a standalone cluster. They can be started
and stopped using management scripts.
receive Method
receive …FIXME
handleRegisterResponse …FIXME
main …FIXME
startRpcEnvAndEndpoint(
host: String,
port: Int,
webUiPort: Int,
cores: Int,
memory: Int,
masterUrls: Array[String],
workDir: String,
workerNumber: Option[Int] = None,
conf: SparkConf = new SparkConf): RpcEnv
startRpcEnvAndEndpoint …FIXME
startRpcEnvAndEndpoint creates a Worker RPC endpoint (for the RPC environment and the other input arguments) and requests the RpcEnv to register the Worker RPC endpoint under the name Worker.
RpcEnv
Number of cores
Amount of memory
SparkConf
SecurityManager
createWorkDir(): Unit
createWorkDir sets workDir to be either the explicitly given working directory or Spark home's work subdirectory.
In the end, createWorkDir creates workDir directory (including any necessary but
nonexistent parent directories).
createWorkDir reports…FIXME
onStart Method
onStart(): Unit
onStart …FIXME
web UI
Executor Summary
Executor Summary page displays information about the executors for the application id
given as the appId request parameter.
When an executor is added to the pool of available executors, it enters LAUNCHING state. It
can then enter either RUNNING or FAILED states.
ExecutorRunner.killProcess
If no application for the appId could be found, Not Found page is displayed.
ApplicationPage
ApplicationPage is a WebUIPage with app prefix.
LocalSparkCluster — Single-JVM Spark Standalone Cluster
LocalSparkCluster is responsible for local-cluster master URL.
LocalSparkCluster can be particularly useful to test distributed operation and fault recovery without launching a real cluster (the master and workers run in a single JVM).
FIXME
masterRpcEnvs
Used when…FIXME
FIXME
workerRpcEnvs
Used when…FIXME
Refer to Logging.
Number of workers
SparkConf
start Method
start(): Array[String]
start …FIXME
stop Method
stop(): Unit
stop …FIXME
Submission Gateways
Caution FIXME
From SparkSubmit.submit :
The latter is the default behaviour as of Spark 1.3, but spark-submit will fail over to the legacy gateway if the master endpoint turns out not to be a REST server.
Management Scripts for Standalone Master
sbin/start-master.sh
sbin/start-master.sh script starts a Spark master on the machine the script is executed on.
./sbin/start-master.sh
org.apache.spark.deploy.master.Master \
--ip japila.local --port 7077 --webui-port 8080
It has support for starting Tachyon using the --with-tachyon command-line option. It assumes the tachyon/bin/tachyon command to be available in Spark’s home directory.
sbin/spark-config.sh
bin/load-spark-env.sh
Command-line Options
You can use the following command-line options:
sbin/stop-master.sh
You can stop a Spark Standalone master using sbin/stop-master.sh script.
./sbin/stop-master.sh
Management Scripts for Standalone Workers
./sbin/start-slave.sh [masterURL]
The order of importance of Spark configuration settings is as follows (from least to the most
important):
Command-line options
Spark properties
SPARK_WORKER_INSTANCES (default: 1 ) - the number of worker instances to run on a node
SPARK_WORKER_CORES - the number of cores to use by a single executor
SPARK_WORKER_MEMORY (default: 1G ) - the amount of memory to use, e.g. 1000M , 2G
sbin/spark-config.sh
bin/load-spark-env.sh
Command-line Options
You can use the following command-line options:
--help
Spark properties
After loading the default SparkConf, if --properties-file or SPARK_WORKER_OPTS define
spark.worker.ui.port , the value of the property is used as the port of the worker’s web UI.
or
$ cat worker.properties
spark.worker.ui.port=33333
sbin/spark-daemon.sh
Ultimately, the script calls sbin/spark-daemon.sh start to kick off
org.apache.spark.deploy.worker.Worker with --webui-port , --port and the master URL.
Internals of org.apache.spark.deploy.worker.Worker
Upon starting, a Spark worker creates the default SparkConf.
It starts sparkWorker RPC Environment and waits until the RpcEnv terminates.
RPC environment
The org.apache.spark.deploy.worker.Worker class starts its own sparkWorker RPC
environment with Worker endpoint.
It has support for starting Tachyon using the --with-tachyon command-line option. It assumes the tachyon/bin/tachyon command to be available in Spark’s home directory.
sbin/spark-config.sh
bin/load-spark-env.sh
conf/spark-env.sh
The script uses the following environment variables (and sets them when unavailable):
SPARK_PREFIX
SPARK_HOME
SPARK_CONF_DIR
SPARK_MASTER_PORT
SPARK_MASTER_IP
The following command will launch 3 worker instances on each node. Each worker instance
will use two cores.
Checking Status
If, however, you want to filter out the JVM processes that really belong to Spark, you should pipe the command’s output to OS-specific tools like grep .
$ jps -lm
999 org.apache.spark.deploy.master.Master --ip japila.local --port 7077 --webui-port 8
080
397
669 org.jetbrains.idea.maven.server.RemoteMavenServer
1198 sun.tools.jps.Jps -lm
spark-daemon.sh status
You can also check out ./sbin/spark-daemon.sh status .
When you start Spark Standalone using scripts under sbin , PIDs are stored in /tmp
directory by default. ./sbin/spark-daemon.sh status can read them and do the "boilerplate"
for you, i.e. status a PID.
$ ls /tmp/spark-*.pid
/tmp/spark-jacek-org.apache.spark.deploy.master.Master-1.pid
Example 2-workers-on-1-node Standalone Cluster (one executor per worker)
You can use the Spark Standalone cluster in the following ways:
Use spark-shell with --master MASTER_URL
Use SparkConf.setMaster(MASTER_URL) in your Spark application
Important: For our learning purposes, MASTER_URL is spark://localhost:7077 .
./sbin/start-master.sh
Notes:
./sbin/start-slave.sh spark://japila.local:7077
4. Check out master’s web UI at http://localhost:8080 to know the current setup - one
worker.
5. Let’s stop the worker to start over with custom configuration. You use ./sbin/stop-slave.sh to stop the worker.
./sbin/stop-slave.sh
6. Check out master’s web UI at http://localhost:8080 to know the current setup - one
worker in DEAD state.
8. Check out master’s web UI at http://localhost:8080 to know the current setup - one
worker ALIVE and another DEAD.
Figure 4. Master’s web UI with one worker ALIVE and one DEAD
9. Configuring cluster using conf/spark-env.sh
conf/spark-env.sh
SPARK_WORKER_CORES=2 (1)
SPARK_WORKER_INSTANCES=2 (2)
SPARK_WORKER_MEMORY=2g
./sbin/start-slave.sh spark://japila.local:7077
$ ./sbin/start-slave.sh spark://japila.local:7077
starting org.apache.spark.deploy.worker.Worker, logging to
../logs/spark-jacek-org.apache.spark.deploy.worker.Worker-1-
japila.local.out
starting org.apache.spark.deploy.worker.Worker, logging to
../logs/spark-jacek-org.apache.spark.deploy.worker.Worker-2-
japila.local.out
11. Check out master’s web UI at http://localhost:8080 to know the current setup - at least
two workers should be ALIVE.
$ jps
Note 6580 Worker
4872 Master
6874 Jps
6539 Worker
./sbin/stop-all.sh
StandaloneSchedulerBackend
Caution FIXME
start(): Unit
Caution FIXME
Spark on Mesos
$ mesos-slave --master=127.0.0.1:5050
I0401 00:15:05.850455 1916461824 main.cpp:223] Build: 2016-03-17 14:20:58 by brew
I0401 00:15:05.850772 1916461824 main.cpp:225] Version: 0.28.0
I0401 00:15:05.852812 1916461824 containerizer.cpp:149] Using isolation: posix/cpu,pos
ix/mem,filesystem/posix
I0401 00:15:05.866186 1916461824 main.cpp:328] Starting Mesos slave
I0401 00:15:05.869470 218980352 slave.cpp:193] Slave started on 1)@10.1.47.199:5051
...
I0401 00:15:05.906355 218980352 slave.cpp:832] Detecting new master
I0401 00:15:06.762917 220590080 slave.cpp:971] Registered with master [email protected]
:5050; given slave ID 9867c491-5370-48cc-8e25-e1aff1d86542-S0
...
Figure 3. Mesos Management Console (Slaves tab) with one slave running
The preferred approach to launch Spark on Mesos and to give the location of Spark
binaries is through spark.executor.uri setting.
--conf spark.executor.uri=/Users/jacek/Downloads/spark-1.5.2-bin-hadoop2.6.tgz
Note: For us, on a bleeding edge of Spark development, it is very convenient to use the spark.mesos.executor.home setting instead.
-c spark.mesos.executor.home=`pwd`
In Frameworks tab you should see a single active framework for spark-shell .
Figure 4. Mesos Management Console (Frameworks tab) with Spark shell active
Tip Consult slave logs under /tmp/mesos/slaves when facing troubles.
CoarseMesosSchedulerBackend
CoarseMesosSchedulerBackend is the scheduler backend for Spark on Mesos.
It requires a Task Scheduler, Spark context, mesos:// master URL, and Security Manager.
It accepts only two failures before blacklisting a Mesos slave (it is hardcoded and not
configurable).
It tracks:
Settings
spark.cores.max (default: Int.MaxValue ) - maximum number of cores to acquire
FIXME
MesosExternalShuffleClient
FIXME
(Fine)MesosSchedulerBackend
When spark.mesos.coarse is false , Spark on Mesos uses MesosSchedulerBackend
reviveOffers
It calls mesosDriver.reviveOffers() .
Caution FIXME
Settings
spark.mesos.coarse (default: true ) controls whether the scheduler backend for Mesos works in coarse-grained ( true ) or fine-grained ( false ) mode.
Caution FIXME Review MesosClusterScheduler.scala
MesosExternalShuffleService
Schedulers in Mesos
Available scheduler modes:
coarse-grained mode (the default)
fine-grained mode
The main difference between these two scheduler modes is the number of tasks per Spark
executor per single Mesos executor. In fine-grained mode, there is a single task in a single
Spark executor that shares a single Mesos executor with the other Spark executors. In
coarse-grained mode, there is a single Spark executor per Mesos executor with many Spark
tasks.
Coarse-grained mode pre-starts all the executor backends, e.g. Executor Backends, so it
has the least overhead compared to fine-grained mode. Since the executors are up before
tasks get launched, it is better for interactive sessions. It also means that the resources are
locked up in a task.
Spark on Mesos supports dynamic allocation in the Mesos coarse-grained scheduler since
Spark 1.5. It can add/remove executors based on load, i.e. kills idle executors and adds
executors when tasks queue up. It needs an external shuffle service on each node.
Mesos Fine-Grained Mode offers a better resource utilization. It has a slower startup for
tasks and hence it is fine for batch and relatively static streaming.
Commands
The following command is how you could execute a Spark application on Mesos:
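The command itself did not survive the PDF extraction. As a rough, hedged sketch, the equivalent setup from inside a Scala application is shown below; the master address and the spark.executor.uri value reuse what was shown earlier in this chapter, while the application name is made up.

import org.apache.spark.{SparkConf, SparkContext}

// A sketch only: Mesos master address and spark.executor.uri reuse the values shown earlier.
val conf = new SparkConf()
  .setMaster("mesos://127.0.0.1:5050")
  .setAppName("spark-on-mesos-demo")  // hypothetical name
  .set("spark.executor.uri", "/Users/jacek/Downloads/spark-1.5.2-bin-hadoop2.6.tgz")
val sc = new SparkContext(conf)
println(sc.parallelize(1 to 100).count)  // runs tasks on Mesos-managed executors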
Other Findings
From Four reasons to pay attention to Apache Mesos:
to run Spark workloads well you need a resource manager that not only can handle the
rapid swings in load inherent in analytics processing, but one that can do so smartly.
Matching of the task to the RIGHT resources is crucial and awareness of the physical
environment is a must. Mesos is designed to manage this problem on behalf of
workloads like Spark.
MesosCoarseGrainedSchedulerBackend — Coarse-Grained Scheduler Backend for Mesos
Caution FIXME
executorLimitOption Property
executorLimitOption is an internal attribute to…FIXME
resourceOffers Method
Caution FIXME
Caution FIXME
Caution FIXME
createCommand Method
Caution FIXME
About Mesos
Apache Mesos is an Apache Software Foundation open source cluster management and
scheduling framework. It abstracts CPU, memory, storage, and other compute resources
away from machines (physical or virtual).
Mesos provides API for resource management and scheduling across multiple nodes (in
datacenter and cloud environments).
An Apache Mesos cluster consists of three major components: masters, agents, and
frameworks.
Concepts
A Mesos master manages agents. It is responsible for tracking, pooling and distributing
agents' resources, managing active applications, and task delegation.
The Mesos master offers resources to frameworks that can accept or reject them based on
specific constraints.
Mesos API
Mesos typically runs with an agent on every virtual machine or bare metal server under
management (https://www.joyent.com/blog/mesos-by-the-pound)
Mesos uses ZooKeeper for master election and discovery. Apache Aurora is a scheduler that runs on Mesos.
Mesos is written in C++, not Java, and includes support for Docker along with other
frameworks. Mesos, then, is the core of the Mesos Data Center Operating System, or
DCOS, as it was coined by Mesosphere.
This Operating System includes other handy components such as Marathon and
Chronos. Marathon provides cluster-wide “init” capabilities for applications in containers
like Docker or cgroups. This allows one to programmatically automate the launching of
large cluster-based applications. Chronos acts as a Mesos API for longer-running batch
type jobs while the core Mesos SDK provides an entry point for other applications like
Hadoop and Spark.
The true goal is a full shared, generic and reusable on demand distributed architecture.
Out of the box it will include Cassandra, Kafka, Spark, and Akka.
to allow Mesos to centrally schedule YARN work via a Mesos based framework,
including a REST API for scaling up or down
Execution Model
Caution FIXME This is the single place for explaining jobs, stages, tasks. Move relevant parts from the other places.
Unified Memory Management
Spark History Server
You can start History Server by executing start-history-server.sh shell script and stop it
using stop-history-server.sh .
Use the --properties-file [propertiesFile] command-line option to specify a properties file with custom Spark properties.
If not specified explicitly, Spark History Server uses the default configuration file, i.e. spark-
defaults.conf.
Refer to Logging.
$ ./sbin/start-history-server.sh
starting org.apache.spark.deploy.history.HistoryServer, logging to .../spark/logs/spar
k-jacek-org.apache.spark.deploy.history.HistoryServer-1-japila.out
$ ./bin/spark-class org.apache.spark.deploy.history.HistoryServer
Tip: Using the more explicit approach with spark-class to start Spark History Server could make it easier to trace execution by seeing the logs printed out to the standard output, and hence the terminal, directly.
When started, it prints out the following INFO message to the logs:
It registers signal handlers (using SignalUtils ) for TERM , HUP , INT to log their execution:
It creates a SecurityManager .
$ ./sbin/stop-history-server.sh
stopping org.apache.spark.deploy.history.HistoryServer
Settings
spark.history.ui.port (default: 18080 )
spark.history.fs.logDirectory (default: file:/tmp/spark-events )
spark.history.retainedApplications (default: 50 )
spark.history.ui.maxApplications (default: (unbounded))
spark.history.kerberos.enabled (default: false )
spark.history.kerberos.principal (default: (empty))
spark.history.kerberos.keytab (default: (empty))
spark.history.provider (default: org.apache.spark.deploy.history.FsHistoryProvider )
HistoryServer — WebUI For Active And Completed Spark Applications
Refer to Logging.
attachSparkUI Method
attachSparkUI(
appId: String,
attemptId: Option[String],
ui: SparkUI,
completed: Boolean): Unit
attachSparkUI …FIXME
initialize(): Unit
initialize …FIXME
main …FIXME
SparkConf
ApplicationHistoryProvider
SecurityManager
Port number
getAppUI …FIXME
withSparkUI Method
withSparkUI …FIXME
loadAppUi …FIXME
doGet Method
Note doGet is part of Java Servlet’s HttpServlet to handle HTTP GET requests.
doGet …FIXME
SQLHistoryListener
SQLHistoryListener is a custom SQLListener for History Server. It attaches SQL tab to
History Server’s web UI only when the first SparkListenerSQLExecutionStart arrives and
shuts onExecutorMetricsUpdate off. It also handles ends of tasks in a slightly different way.
Note: Support for SQL UI in History Server was added in SPARK-11206 Support SQL UI on the history server.
onOtherEvent
onTaskEnd
Caution FIXME
(which is SparkHistoryListenerFactory ).
org.apache.spark.sql.execution.ui.SQLHistoryListenerFactory
onExecutorMetricsUpdate
onExecutorMetricsUpdate does nothing.
FsHistoryProvider — File-System-Based History Provider
FsHistoryProvider is the default ApplicationHistoryProvider for Spark History Server.
Refer to Logging.
rebuildAppStore(
store: KVStore,
eventLog: FileStatus,
lastUpdated: Long): Unit
rebuildAppStore …FIXME
getAppUI Method
getAppUI …FIXME
SparkConf
ApplicationHistoryProvider
ApplicationHistoryProvider is the base of the history providers of Spark applications.
package org.apache.spark.deploy.history
getApplicationInfo
HistoryServerArguments
HistoryServerArguments is the command-line parser for the History Server.
$ ./sbin/start-history-server.sh /tmp/spark-events
This is however deprecated since Spark 1.1.0 and you should see the following WARN
message in the logs:
WARN HistoryServerArguments: Setting log directory through the command line is depreca
ted as of Spark 1.1.0. Please set this through spark.history.fs.logDirectory instead.
The same WARN message shows up for --dir and -d command-line options.
--properties-file [propertiesFile] command-line option specifies the file with the custom
Spark properties.
Note: When not specified explicitly, History Server uses the default configuration file, i.e. spark-defaults.conf.
Refer to Logging.
ApplicationCacheOperations
ApplicationCacheOperations is the contract of…FIXME
package org.apache.spark.deploy.history
trait ApplicationCacheOperations {
// only required methods that have no implementation
// the others follow
def getAppUI(appId: String, attemptId: Option[String]): Option[LoadedAppUI]
def attachSparkUI(
appId: String,
attemptId: Option[String],
ui: SparkUI,
completed: Boolean): Unit
def detachSparkUI(appId: String, attemptId: Option[String], ui: SparkUI): Unit
}
attachSparkUI
detachSparkUI
ApplicationCache
ApplicationCache is…FIXME
ApplicationCache uses Google Guava 14.0.1 library for the internal appLoader.
removalListener
metrics
ApplicationCacheOperations
retainedApplications
Clock
loadApplicationEntry …FIXME
load simply relays to loadApplicationEntry with the appId and attemptId of the input
CacheKey .
get …FIXME
withSparkUI …FIXME
Logging
Spark uses log4j for logging.
Logging Levels
The valid logging levels are log4j’s Levels (from most specific to least):
ERROR
WARN
INFO
DEBUG
conf/log4j.properties
You can set up the default logging for Spark shell in conf/log4j.properties . Use
conf/log4j.properties.template as a starting point.
import org.apache.log4j.{Level, Logger}
import org.apache.hadoop.yarn.util.RackResolver

Logger.getLogger(classOf[RackResolver]).getLevel
Logger.getLogger("org").setLevel(Level.OFF)
Logger.getLogger("akka").setLevel(Level.OFF)
sbt
When running a Spark application from within sbt using run task, you can use the following
build.sbt to configure logging levels:
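The build.sbt snippet itself did not survive extraction. A minimal sketch of what such a configuration could look like follows; the exact settings are an assumption, and what matters is that the forked run picks up log4j.properties from the classpath.

// build.sbt (sketch, sbt 0.13 syntax)
fork in run := true                          // run the application in a forked JVM
javaOptions in run += "-Dlog4j.debug=true"   // ask log4j to print where it loads its configuration from
outputStrategy := Some(StdoutOutput)         // show the forked JVM's output in the sbt console
// log4j.properties is picked up from the classpath, e.g. from src/main/resources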
With the above configuration, the log4j.properties file should be on the CLASSPATH, which can be the src/main/resources directory (that is included in CLASSPATH by default).
When run starts, you should see the following output in sbt:
[spark-activator]> run
[info] Running StreamingApp
log4j: Trying to find [log4j.properties] using context classloader sun.misc.Launcher$A
ppClassLoader@1b6d3586.
log4j: Using URL [file:/Users/jacek/dev/oss/spark-activator/target/scala-2.11/classes/
log4j.properties] for automatic log4j configuration.
log4j: Reading configuration from URL file:/Users/jacek/dev/oss/spark-activator/target
/scala-2.11/classes/log4j.properties
Disabling Logging
Use the following conf/log4j.properties to disable logging completely:
log4j.logger.org=OFF
Performance Tuning
Goal: Improve Spark’s performance where feasible.
a TPC-DS workload, of two sizes: a 20 machine cluster with 850GB of data, and a 60
machine cluster with 2.5TB of data.
network optimization could only reduce job completion time by, at most, 2%
From Making Sense of Spark Performance - Kay Ousterhout (UC Berkeley) at Spark
Summit 2015:
reduceByKey is better
impacts CPU - time to serialize and network - time to send the data over the wire
SparkListener — Intercepting Events from Spark Scheduler
SparkListener extends SparkListenerInterface with all the callback methods being no-
op/do-nothing.
You can develop your own custom SparkListener and register it using
SparkContext.addSparkListener method or spark.extraListeners Spark property.
With SparkListener you can focus on Spark events of your liking and process a subset of
all scheduling events.
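For instance, a minimal ad-hoc listener (a sketch only) that reacts to application-end events and ignores everything else could be registered from spark-shell as follows:

import org.apache.spark.scheduler.{SparkListener, SparkListenerApplicationEnd}

// Registers an anonymous listener that only overrides onApplicationEnd.
sc.addSparkListener(new SparkListener {
  override def onApplicationEnd(applicationEnd: SparkListenerApplicationEnd): Unit =
    println(s"Application ended at ${applicationEnd.time}")
})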
Enable INFO logging level for org.apache.spark.SparkContext logger to see when custom
Spark listeners are registered.
onBlockManagerAdded - SparkListenerBlockManagerAdded - posted when BlockManagerMasterEndpoint registered a BlockManager
onBlockManagerRemoved - SparkListenerBlockManagerRemoved - posted when BlockManagerMasterEndpoint removed a BlockManager (which is when…FIXME)
onBlockUpdated - SparkListenerBlockUpdated - posted when BlockManagerMasterEndpoint receives a UpdateBlockInfo (which is when BlockManager reports a block status update to driver)
onEnvironmentUpdate - SparkListenerEnvironmentUpdate - posted when SparkContext does postEnvironmentUpdate
onExecutorMetricsUpdate - SparkListenerExecutorMetricsUpdate
onJobEnd - SparkListenerJobEnd - posted when DAGScheduler does cleanUpAfterSchedulerStop, handleTaskCompletion, failJobAndIndependentStages, and markMapStageJobAsFinished
onJobStart - SparkListenerJobStart - posted when DAGScheduler handles JobSubmitted and MapStageSubmitted messages
onTaskGettingResult - SparkListenerTaskGettingResult - posted when DAGScheduler handles GettingResultEvent event
onUnpersistRDD - SparkListenerUnpersistRDD - posted when SparkContext unpersists an RDD, i.e. removes RDD blocks from BlockManagerMaster (that can be triggered explicitly or…)
StatsReportListener
SparkFirehoseListener - allows users to receive all SparkListenerEvent events by overriding the single onEvent method only
ExecutorAllocationListener
HeartbeatReceiver
StreamingJobProgressListener
StorageStatusListener, RDDOperationGraphListener, EnvironmentListener, BlockStatusListener and StorageListener - for web UI
SpillListener
ApplicationEventListener
StreamingQueryListenerBus
SQLListener / SQLHistoryListener - support for History Server
StreamingListenerBus
JobProgressListener
LiveListenerBus
LiveListenerBus is used to announce application-wide events in a Spark application.
Internally, it saves the input SparkContext for later use and starts listenerThread. It makes
sure that it only happens when LiveListenerBus has not been started before (i.e. started
is disabled).
post puts the input event onto the internal eventQueue queue and releases the internal eventLock semaphore. If the event placement was not successful (which can happen when the queue is full), onDropEvent is called.
The event publishing is only possible until LiveListenerBus is stopped.
If LiveListenerBus has been stopped, the following ERROR appears in the logs:
onDropEvent is called when no further events can be added to the internal eventQueue queue (because it is full). It simply prints out the following ERROR message to the logs and ensures that it happens only once.
ERROR Dropping SparkListenerEvent because no remaining room in event queue. This likel
y means one of the SparkListeners is too slow and cannot keep up with the rate at whic
h tasks are being started by the scheduler.
Note It uses the internal logDroppedEvent atomic variable to track the state.
stop(): Unit
stop releases the internal eventLock semaphore and waits until listenerThread dies. It can
only happen after all events were posted (and polling eventQueue gives nothing).
It checks that started flag is enabled (i.e. true ) and throws an IllegalStateException otherwise.
The listener thread polls events from the event queue only after the listener bus was started, and processes one event at a time.
Settings
Table 1. Spark Properties
spark.extraListeners (default: (empty)) - the comma-separated list of fully-qualified class names of Spark listeners that should be registered (when SparkContext is initialized)
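A sketch of setting the property programmatically; StatsReportListener is one of Spark's own listeners, while the master URL and application name below are made up for the example.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("extra-listeners-demo")  // hypothetical name
  .set("spark.extraListeners", "org.apache.spark.scheduler.StatsReportListener")
val sc = new SparkContext(conf)        // the listener is registered while SparkContext is initialized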
addToQueue …FIXME
ReplayListenerBus
ReplayListenerBus is a custom SparkListenerBus that can replay JSON-encoded
SparkListenerEvent events.
replay(
logData: InputStream,
sourceName: String,
maybeTruncated: Boolean = false): Unit
replay reads JSON-encoded SparkListenerEvent events from logData (one event per line) and converts them to SparkListenerEvent objects.
Note replay uses jackson from json4s library to parse the AST for JSON.
When there is an exception parsing a JSON event, you may see the following WARN
message in the logs (for the last line) or a JsonParseException .
WARN Got JsonParseException from log file $sourceName at line [lineNumber], the file m
ight not have finished writing cleanly.
Any other non-IO exceptions end up with the following ERROR messages in the logs:
SparkListenerBus — Internal Contract for Spark Event Buses
SparkListenerStageCompleted onStageCompleted
SparkListenerJobStart onJobStart
SparkListenerJobEnd onJobEnd
SparkListenerTaskStart onTaskStart
SparkListenerTaskGettingResult onTaskGettingResult
SparkListenerTaskEnd onTaskEnd
SparkListenerEnvironmentUpdate onEnvironmentUpdate
SparkListenerBlockManagerAdded onBlockManagerAdded
SparkListenerBlockManagerRemoved onBlockManagerRemoved
SparkListenerUnpersistRDD onUnpersistRDD
SparkListenerApplicationStart onApplicationStart
SparkListenerApplicationEnd onApplicationEnd
SparkListenerExecutorMetricsUpdate onExecutorMetricsUpdate
SparkListenerExecutorAdded onExecutorAdded
SparkListenerExecutorRemoved onExecutorRemoved
SparkListenerBlockUpdated onBlockUpdated
ListenerBus is an event bus that posts events (of type E ) to all registered listeners (of type L ).
It manages listeners of type L , i.e. it can add to and remove listeners from an internal
listeners collection.
It can post events of type E to all registered listeners (using postToAll method). It simply
iterates over the internal listeners collection and executes the abstract doPostEvent
method.
In case of an exception while posting an event to a listener, you should see the following ERROR message in the logs along with the exception.
Refer to Logging.
EventLoggingListener — Spark Listener for Persisting Events
When event logging is enabled, EventLoggingListener writes events to a log file under
spark.eventLog.dir directory. All Spark events are logged (except
SparkListenerBlockUpdated and SparkListenerExecutorMetricsUpdate).
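A sketch of enabling event logging from application code; the directory below is the default history server log directory mentioned elsewhere in this book and must exist and be writable, and the application name is made up.

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("local[*]")
  .setAppName("event-logging-demo")                // hypothetical name
  .set("spark.eventLog.enabled", "true")
  .set("spark.eventLog.dir", "/tmp/spark-events")  // must exist before the application starts
val sc = new SparkContext(conf)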
Tip Use Spark History Server to view the event logs in a browser.
Refer to Logging.
start(): Unit
The log file’s working name is created based on appId with or without the compression
codec used and appAttemptId , i.e. local-1461696754069 . It also uses .inprogress
extension.
Spark attempts to delete the working .inprogress log file. In case it could not be deleted, the following WARN message is printed out to the logs:
The buffered output stream is created with metadata with Spark’s version and
SparkListenerLogStart class' name as the first line.
{"Event":"SparkListenerLogStart","Spark Version":"2.0.0-SNAPSHOT"}
At this point, EventLoggingListener is ready for event logging and you should see the
following INFO message in the logs:
Caution FIXME
stop(): Unit
stop closes PrintWriter for the log file and renames the file to be without .inprogress
extension.
If the target log file exists (one without .inprogress extension), it overwrites the file if
spark.eventLog.overwrite is enabled. You should see the following WARN message in the
logs:
If the target log file exists and overwrite is disabled, a java.io.IOException is thrown with the following message:
Settings
spark.eventLog.enabled (default: false ) - enables ( true ) or disables ( false ) persisting Spark events.
spark.eventLog.buffer.kb (default: 100 ) - size of the buffer to use when writing to output streams.
spark.eventLog.compress (default: false ) - enables ( true ) or disables ( false ) event compression.
StatsReportListener — Logging Summary Statistics
org.apache.spark.scheduler.StatsReportListener (see the listener’s scaladoc) is a SparkListener that logs summary statistics when a stage completes.
Refer to Logging.
Caution FIXME
Example
$ ./bin/spark-shell -c spark.extraListeners=org.apache.spark.scheduler.StatsReportList
ener
...
INFO SparkContext: Registered listener org.apache.spark.scheduler.StatsReportListener
...
scala> spark.read.text("README.md").count
...
INFO StatsReportListener: Finished stage: Stage(0, 0); Name: 'count at <console>:24';
Status: succeeded; numTasks: 1; Took: 212 msec
INFO StatsReportListener: task runtime:(count: 1, mean: 198.000000, stdev: 0.000000, m
ax: 198.000000, min: 198.000000)
INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90%
95% 100%
INFO StatsReportListener: 198.0 ms 198.0 ms 198.0 ms 198.0
ms 198.0 ms 198.0 ms 198.0 ms 198.0 ms 198.0 ms
INFO StatsReportListener: shuffle bytes written:(count: 1, mean: 59.000000, stdev: 0.0
00000, max: 59.000000, min: 59.000000)
0.0 B 0.0 B
INFO StatsReportListener: fetch wait time:(count: 1, mean: 0.000000, stdev: 0.000000,
max: 0.000000, min: 0.000000)
INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90%
95% 100%
INFO StatsReportListener: 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms 0.0 ms
0.0 ms 0.0 ms
INFO StatsReportListener: remote bytes read:(count: 1, mean: 0.000000, stdev: 0.000000
, max: 0.000000, min: 0.000000)
INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90%
95% 100%
INFO StatsReportListener: 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B 0.0 B
0.0 B 0.0 B
INFO StatsReportListener: task result size:(count: 1, mean: 1960.000000, stdev: 0.0000
00, max: 1960.000000, min: 1960.000000)
INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90%
95% 100%
INFO StatsReportListener: 1960.0 B 1960.0 B 1960.0 B 1960.0
B 1960.0 B 1960.0 B 1960.0 B 1960.0 B 1960.0 B
INFO StatsReportListener: executor (non-fetch) time pct: (count: 1, mean: 75.757576, s
tdev: 0.000000, max: 75.757576, min: 75.757576)
INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90%
95% 100%
INFO StatsReportListener: 76 % 76 % 76 % 76 % 76 % 76 % 76 %
76 % 76 %
INFO StatsReportListener: fetch wait time pct: (count: 1, mean: 0.000000, stdev: 0.000
000, max: 0.000000, min: 0.000000)
INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90%
95% 100%
INFO StatsReportListener: 0 % 0 % 0 % 0 % 0 % 0 % 0 %
0 % 0 %
INFO StatsReportListener: other time pct: (count: 1, mean: 24.242424, stdev: 0.000000,
max: 24.242424, min: 24.242424)
INFO StatsReportListener: 0% 5% 10% 25% 50% 75% 90%
95% 100%
INFO StatsReportListener: 24 % 24 % 24 % 24 % 24 % 24 % 24 %
24 % 24 %
res0: Long = 99
JsonProtocol
Caution FIXME
taskInfoFromJson Method
Caution FIXME
taskMetricsFromJson Method
Caution FIXME
taskMetricsToJson Method
Caution FIXME
sparkEventFromJson Method
Caution FIXME
Debugging Spark
Using spark-shell and IntelliJ IDEA
Start spark-shell with SPARK_SUBMIT_OPTS environment variable that configures the JVM’s
JDWP.
SPARK_SUBMIT_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005"
./bin/spark-shell
Attach IntelliJ IDEA to the JVM process using Run > Attach to Local Process menu.
Using sbt
Use sbt -jvm-debug 5005 , connect to the remote JVM at the port 5005 using IntelliJ IDEA,
place breakpoints on the desired lines of the source code of Spark.
Building Apache Spark from Sources
Note: Since [SPARK-6363][BUILD] Make Scala 2.11 the default Scala version, the default version of Scala in Apache Spark is 2.11.
The build process for Scala 2.11 takes less than 15 minutes (on a decent machine like my shiny MacBook Pro with 8 cores and 16 GB RAM) and is so simple that it’s hard to resist the urge to do it yourself.
Build Profiles
Caution FIXME
Please note the messages that say the version of Spark (Building Spark Project Parent POM
2.0.0-SNAPSHOT), Scala version (maven-clean-plugin:2.6.1:clean (default-clean) @ spark-
parent_2.11) and the Spark modules built.
The above command gives you the latest version of Apache Spark 2.0.0-SNAPSHOT built
for Scala 2.11.8 (see the configuration of scala-2.11 profile).
Tip You can also know the version of Spark using ./bin/spark-shell --version .
Making Distribution
./make-distribution.sh is the shell script to make a distribution. It uses the same profiles as
Once finished, you will have the distribution in the current directory, i.e. spark-2.0.0-
SNAPSHOT-bin-2.7.2.tgz .
Spark and Hadoop
Parquet
RCfile
Avro
ORC
Caution FIXME What are the differences between the formats and how are they used in Spark?
Introduction to Hadoop
Note: This page is the place to keep information more general about Hadoop and not related to Spark on YARN or Using Input and Output (I/O) (HDFS). I don’t really know what it could be, though. Perhaps nothing at all. Just saying.
The Apache Hadoop software library is a framework that allows for the distributed
processing of large data sets across clusters of computers using simple programming
models. It is designed to scale up from single servers to thousands of machines, each
offering local computation and storage. Rather than rely on hardware to deliver high-
availability, the library itself is designed to detect and handle failures at the application
layer, so delivering a highly-available service on top of a cluster of computers, each of
which may be prone to failures.
HDFS (Hadoop Distributed File System) is a distributed file system designed to run
on commodity hardware. It is a data storage with files split across a cluster.
Currently, it’s more about the ecosystem of solutions that all use Hadoop infrastructure
for their work.
Yahoo has progressively invested in building and scaling Apache Hadoop clusters with
a current footprint of more than 40,000 servers and 600 petabytes of storage spread
across 19 clusters.
Deep learning can be defined as first-class steps in Apache Oozie workflows with
Hadoop for data processing and Spark pipelines for machine learning.
You can find some preliminary information about Spark pipelines for machine learning in
the chapter ML Pipelines.
HDFS provides fast analytics – scanning over large amounts of data very quickly, but it was
not built to handle updates. If data changed, it would need to be appended in bulk after a
certain volume or time interval, preventing real-time visibility into this data.
HBase complements HDFS’ capabilities by providing fast and random reads and writes
and supporting updating data, i.e. serving small queries extremely quickly, and allowing
data to be updated in place.
From How does partitioning work for data from files on HDFS?:
When Spark reads a file from HDFS, it creates a single partition for a single input split.
Input split is set by the Hadoop InputFormat used to read this file. For instance, if you
use textFile() it would be TextInputFormat in Hadoop, which would return you a
single partition for a single block of HDFS (but the split between partitions would be
done on line split, not the exact block split), unless you have a compressed text file. In
case of compressed file you would get a single partition for a single file (as compressed
text files are not splittable).
If you have a 30GB uncompressed text file stored on HDFS, then with the default HDFS
block size setting (128MB) it would be stored in 235 blocks, which means that the RDD
you read from this file would have 235 partitions. When you call repartition(1000) your
RDD would be marked as to be repartitioned, but in fact it would be shuffled to 1000
partitions only when you will execute an action on top of this RDD (lazy execution
concept)
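A short sketch of the behaviour described in the quote; the path is hypothetical, and the partition count depends on the file's size and the HDFS block size.

// One partition per HDFS input split (roughly one per 128MB block for an uncompressed text file).
val lines = sc.textFile("hdfs:///data/big.txt")
println(lines.getNumPartitions)

// repartition only marks the RDD; the shuffle to 1000 partitions happens on the first action.
val repartitioned = lines.repartition(1000)
println(repartitioned.count)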
With HDFS you can store any data (regardless of format and size). It can easily handle
unstructured data like video or other binary files as well as semi- or fully-structured data
like CSV files or databases.
There is the concept of data lake that is a huge data repository to support analytics.
HDFS partitions files into so-called splits and distributes them across multiple nodes in a cluster to achieve fail-over and resiliency.
Further reading
Introducing Kudu: The New Hadoop Storage Engine for Fast Analytics on Fast Data
SparkHadoopUtil
Enable DEBUG logging level for org.apache.spark.deploy.SparkHadoopUtil logger
to see what happens inside.
Add the following line to conf/log4j.properties :
Tip
log4j.logger.org.apache.spark.deploy.SparkHadoopUtil=DEBUG
Refer to Logging.
Caution FIXME
substituteHadoopVariables Method
Caution FIXME
transferCredentials Method
Caution FIXME
newConfiguration Method
Caution FIXME
conf Method
Caution FIXME
stopCredentialUpdater Method
Caution FIXME
user as a thread local variable (and distributed to child threads). It is later used for
authenticating HDFS and YARN calls.
Caution FIXME How to use SPARK_USER to change the current user name?
You should see the current username printed out in the following DEBUG message in the
logs:
Spark and software in-memory file systems
Tachyon is designed to function as a software file system that is compatible with the
HDFS interface prevalent in the big data analytics space. The point of doing this is that
it can be used as a drop in accelerator rather than having to adapt each framework to
use a distributed caching layer explicitly.
Spark and The Others
Note I’m going to keep the noise (enterprisey adornments) to the very minimum.
Apache Twill is an abstraction over Apache Hadoop YARN that allows you to use
YARN’s distributed capabilities with a programming model that is similar to running
threads.
Distributed Deep Learning on Spark
In the comments to the article, some people announced their plans of using it with AWS
GPU cluster.
Spark Packages
Spark Packages is a community index of packages for Apache Spark.
Spark Packages is a community site hosting modules that are not part of Apache Spark. It
offers packages for reading different file formats (than those natively supported by Spark)
or from NoSQL databases like Cassandra, code testing, etc.
When you want to include a Spark package in your application, you should be using --
packages command line option.
Interactive Notebooks
This document aims at presenting and eventually supporting me in selecting the open-source web-based visualisation tool for Apache Spark with Scala 2.11 support.
Requirements
1. Support for Apache Spark 2.0
2. Support for Scala 2.11 (the default Scala version for Spark 2.0)
3. Web-based
6. Active Development and Community (the number of commits per week and month,
github, gitter)
7. Autocompletion
Optional Requirements:
1. Sharing SparkContext
Candidates
Apache Zeppelin
Spark Notebook
Beaker
Jupyter Notebook
Apache Toree
Jupyter Notebook
You can combine code execution, rich text, mathematics, plots and rich media
Jupyter Notebook (formerly known as the IPython Notebook) - an open-source, interactive data science and scientific computing environment supporting over 40 programming languages.
Apache Zeppelin
Apache Zeppelin is a web-based notebook platform that enables interactive data analytics
with interactive data visualizations and notebook sharing. You can make data-driven,
interactive and collaborative documents with SQL, Scala, Python, R in a single notebook
document.
It shares a single SparkContext between languages (so you can pass data between Scala
and Python easily).
This is an excellent tool for prototyping Scala/Spark code with SQL queries to analyze data
(by data visualizations) that could be used by non-Scala developers like data analysts using
SQL and Python.
Note: Zeppelin aims at more analytics and business people (not necessarily Spark/Scala developers, for whom Spark Notebook may appear a better fit).
Clients talk to the Zeppelin Server using HTTP REST or Websocket endpoints.
text (default)
HTML
table
Angular
Features
1. Apache License 2.0 licensed
2. Interactive
3. Web-Based
6. Multiple Language and Data Processing Backends called Interpreters, including the
built-in Apache Spark integration, Apache Flink, Apache Hive, Apache Cassandra,
Apache Tajo, Apache Phoenix, Apache Ignite, Apache Geode
7. Display Systems
Spark Notebook
Spark Notebook is a Scala-centric tool for interactive and reactive data science using
Apache Spark.
This is an excellent tool for prototyping Scala/Spark code with SQL queries to analyze data
(by data visualizations). It seems to have more advanced data visualizations (comparing to
Apache Zeppelin), and seems rather focused on Scala, SQL and Apache Spark.
It can visualize the output of SQL queries directly as tables and charts (which Apache
Zeppelin cannot yet).
Spark Tips and Tricks
The SPARK_PRINT_LAUNCH_COMMAND environment variable controls whether the Spark launch command is printed out to the standard error output, i.e. System.err , or not.
All the Spark shell scripts use the org.apache.spark.launcher.Main class internally that checks SPARK_PRINT_LAUNCH_COMMAND and, when set (to any value), prints out the entire command line to be executed:
$ SPARK_PRINT_LAUNCH_COMMAND=1 ./bin/spark-shell
Spark Command: /Library/Java/JavaVirtualMachines/Current/Contents/Home/bin/java -cp /U
sers/jacek/dev/oss/spark/conf/:/Users/jacek/dev/oss/spark/assembly/target/scala-2.11/s
park-assembly-1.6.0-SNAPSHOT-hadoop2.7.1.jar:/Users/jacek/dev/oss/spark/lib_managed/ja
rs/datanucleus-api-jdo-3.2.6.jar:/Users/jacek/dev/oss/spark/lib_managed/jars/datanucle
us-core-3.2.10.jar:/Users/jacek/dev/oss/spark/lib_managed/jars/datanucleus-rdbms-3.2.9
.jar -Dscala.usejavacp=true -Xms1g -Xmx1g org.apache.spark.deploy.SparkSubmit --master
spark://localhost:7077 --class org.apache.spark.repl.Main --name Spark shell spark-sh
ell
========================================
scala> sc.version
res0: String = 1.6.0-SNAPSHOT
scala> org.apache.spark.SPARK_VERSION
res1: String = 1.6.0-SNAPSHOT
You may see the following WARN messages in the logs when Spark finished the resolving
process:
Access private members in Scala in Spark shell
Open spark-shell and execute :paste -raw that allows you to enter any valid Scala code,
including package .
package org.apache.spark
object spark {
def test = {
import org.apache.spark.scheduler._
println(DAGScheduler.RESUBMIT_TIMEOUT == 200)
}
}
scala> spark.test
true
scala> sc.version
res0: String = 1.6.0-SNAPSHOT
SparkException: Task not serializable
Using Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_66)
Type in expressions to have them evaluated.
Type :help for more information.
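The code that triggers the exception did not survive extraction. Below is a minimal sketch that reproduces it in spark-shell; the class and member names are made up. The closure passed to map captures this, and the enclosing class is not serializable.

class Prefixer {                     // does NOT extend Serializable
  val prefix = ">> "
  def prefixAll(rdd: org.apache.spark.rdd.RDD[String]) =
    rdd.map(s => prefix + s)         // referencing `prefix` captures `this`
}

val rdd = sc.parallelize(Seq("hello", "world"))
new Prefixer().prefixAll(rdd).collect
// org.apache.spark.SparkException: Task not serializable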
Further reading
Job aborted due to stage failure: Task not serializable
Running Spark Applications on Windows
Note A Spark application could be spark-shell or your own custom Spark application.
What makes the huge difference between the operating systems is Hadoop, which is used internally for file system access in Spark.
You may run into a few minor issues when you are on Windows due to the way Hadoop works with Windows' POSIX-incompatible NTFS filesystem.
You do not have to install Apache Hadoop to work with Spark or run Spark
Note
applications.
Tip Read the Apache Hadoop project’s Problems running Hadoop on Windows.
Among the issues is the infamous java.io.IOException when running Spark Shell (below a
stacktrace from Spark 2.0.2 on Windows 10 so the line numbers may be different in your
case).
16/12/26 21:34:11 ERROR Shell: Failed to locate the winutils binary in the hadoop bina
ry path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop b
inaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
at org.apache.hadoop.hive.conf.HiveConf$ConfVars.findHadoopBinary(HiveConf.java:2327
)
at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:365)
at org.apache.hadoop.hive.conf.HiveConf.<clinit>(HiveConf.java:105)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:228)
at org.apache.spark.sql.SparkSession$.hiveClassesArePresent(SparkSession.scala:963)
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:91)
Note: You need to have Administrator rights on your laptop. All the following commands must be executed in a command-line window ( cmd ) run as Administrator, i.e. using the Run as administrator option while executing cmd . Read the official document in Microsoft TechNet — Start a Command Prompt as an Administrator.
Note: You should select the version of Hadoop the Spark distribution was compiled with, e.g. use hadoop-2.7.1 for Spark 2 (here is the direct link to the winutils.exe binary).
set HADOOP_HOME=c:\hadoop
set PATH=%HADOOP_HOME%\bin;%PATH%
Execute the following command in cmd that you started using the option Run as administrator:
winutils.exe chmod -R 777 C:\tmp\hive
Check the permissions (that is one of the commands that are executed under the covers):
winutils.exe ls -F C:\tmp\hive
Open spark-shell and observe the output (perhaps with few WARN messages that you
can simply disregard).
As a verification step, execute the following line to display the content of a DataFrame :
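The exact line from the book did not survive extraction; any small DataFrame works, for example:

spark.range(1).show
// +---+
// | id|
// +---+
// |  0|
// +---+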
Disregard WARN messages when you start spark-shell . They are harmless.
If you see the above output, you’re done. You should now be able to run Spark applications
on your Windows. Congrats!
<configuration>
<property>
<name>hive.exec.scratchdir</name>
<value>/tmp/mydir</value>
<description>Scratch space for Hive jobs</description>
</property>
</configuration>
Start a Spark application, e.g. spark-shell , with HADOOP_CONF_DIR environment variable set
to the directory with hive-site.xml .
HADOOP_CONF_DIR=conf ./bin/spark-shell
One-liners using PairRDDFunctions
Exercise
How would you go about solving a requirement to pair elements of the same key and
creating a new RDD out of the matched values?
val users = Seq((1, "user1"), (1, "user2"), (2, "user1"), (2, "user3"), (3,"user2"), (3
,"user4"), (3,"user1"))
// Input RDD
val us = sc.parallelize(users)
// Desired output
Seq("user1","user2"),("user1","user3"),("user1","user4"),("user2","user4"))
Learning Jobs and Partitions Using take Action
scala> r1.partitions.size
res63: Int = 16
When you execute r1.take(1) only one job gets run since it is enough to compute one task
on one partition.
However, when you execute r1.take(2), two jobs get run: the implementation first runs one job on one partition and, if the elements found do not total the number requested in take, quadruples the number of partitions to work on in the following jobs.
Can you guess how many jobs are run for r1.take(15) ? How many tasks per job?
Answer: 3.
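How r1 was created is not shown above. One dataset that exhibits the behaviour described is a 16-partition RDD with a single element per partition, sketched below; the job counts in the comments are what the discussion above predicts, and you can verify them in the web UI.

// 16 partitions, one element each, so the first job of take() rarely finds enough elements on its own.
val r1 = sc.parallelize(1 to 16, 16)
r1.partitions.size  // 16

r1.take(1)   // 1 job: the first partition already holds 1 element
r1.take(2)   // 2 jobs: the first job finds only 1 element, so a second job scans more partitions
r1.take(15)  // 3 jobs, as discussed above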
Spark Standalone - Using ZooKeeper for High-Availability of Master
Start ZooKeeper.
spark.deploy.recoveryMode=ZOOKEEPER
spark.deploy.zookeeper.url=<zookeeper_host>:2181
spark.deploy.zookeeper.dir=/spark
$ cp ./sbin/start-master{,-2}.sh
You can check how many instances you’re currently running using jps command as
follows:
$ jps -lm
5024 sun.tools.jps.Jps -lm
4994 org.apache.spark.deploy.master.Master --ip japila.local --port 7077 --webui-port
8080 -h localhost -p 17077 --webui-port 18080 --properties-file ha.conf
4808 org.apache.spark.deploy.master.Master --ip japila.local --port 7077 --webui-port
8080 -h localhost -p 7077 --webui-port 8080 --properties-file ha.conf
4778 org.apache.zookeeper.server.quorum.QuorumPeerMain config/zookeeper.properties
./sbin/start-slave.sh spark://localhost:7077,localhost:17077
Find out which standalone Master is active (there can only be one). Kill it. Observe how the
other standalone Master takes over and lets the Spark shell register with itself. Check out
the master’s UI.
Optionally, kill the worker, make sure it goes away instantly in the active master’s logs.
Spark’s Hello World using Spark shell and Scala
WordCount using Spark shell
In the following example you’re going to count the words in README.md file that sits in your
Spark distribution and save the result under README.count directory.
You’re going to use the Spark shell for the example. Execute spark-shell .
wc.saveAsTextFile("README.count") (4)
1. Read the text file - refer to Using Input and Output (I/O).
2. Split each line into words.
3. Map each word into a pair and count them by word (key).
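Only the last line of the original listing survived extraction. A sketch of the complete example matching the numbered steps above follows; the exact splitting logic in the book may differ.

val lines = sc.textFile("README.md")                 // (1) read the text file
val words = lines.flatMap(_.split("\\s+"))           // (2) split each line into words
val wc = words.map(w => (w, 1)).reduceByKey(_ + _)   // (3) map each word into a pair and count by key
wc.saveAsTextFile("README.count")                    // (4) save the result under README.count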
After you have executed the example, see the contents of the README.count directory:
$ ls -lt README.count
total 16
-rw-r--r-- 1 jacek staff 0 9 paź 13:36 _SUCCESS
-rw-r--r-- 1 jacek staff 1963 9 paź 13:36 part-00000
-rw-r--r-- 1 jacek staff 1663 9 paź 13:36 part-00001
The files part-0000x contain the pairs of word and the count.
$ cat README.count/part-00000
(package,1)
(this,1)
(Version"](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hado
op-version),1)
(Because,1)
(Python,2)
(cluster.,1)
(its,1)
([run,1)
...
Further (self-)development
Please read the questions and give answers first before looking at the link given.
Your first complete Spark application (using Scala and sbt)
Overview
You’re going to use sbt as the project build tool. It uses build.sbt for the project’s
description as well as the dependencies, i.e. the version of Apache Spark and others.
With the files in a directory, executing sbt package results in a package that can be
deployed onto a Spark cluster using spark-submit .
build.sbt
scalaVersion := "2.11.7"
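The rest of build.sbt did not survive extraction. A minimal sketch of a complete file is shown below; the project name and the Spark version are assumptions.

name := "sparkme-app"          // hypothetical project name
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"  // use the Spark version you target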
SparkMe Application
The application uses a single command-line parameter (as args(0) ) that is the file to
process. The file is read and the number of lines printed out.
package pl.japila.spark

import org.apache.spark.{SparkConf, SparkContext}

object SparkMeApp {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("SparkMe Application")
    val sc = new SparkContext(conf)
    val fileName = args(0)
    val lines = sc.textFile(fileName)
    val c = lines.count
    println(s"There are $c lines in $fileName")
  }
}
sbt.version=0.13.9
Tip: With the file the build is more predictable as the version of sbt doesn’t depend on the sbt launcher.
Packaging Application
Execute sbt package to package the application.
The application uses only classes that come with Spark so package is enough.
spark-submit the SparkMe application and specify the file to process (as it is the only and required command-line argument).
Note: build.sbt is sbt’s build definition and is only used as an input file for demonstration purposes. Any file is going to work fine.
Spark (notable) use cases
Technology "things":
Spark Streaming on Hadoop YARN cluster processing messages from Apache Kafka
using the new direct API.
Business "things":
Predictive Analytics = Manage risk and capture new business opportunities with real-
time analytics and probabilistic forecasting of customers, products and partners.
data lakes, clickstream analytics, real time analytics, and data warehousing on Hadoop
Using Spark SQL to update data in Hive using ORC files
With transactional tables in Hive together with insert, update, delete, it does the "concatenate" for you automatically in regular intervals. Currently this works only with tables in orc.format (stored as orc).
Hive was originally not designed for updates, because it was purely warehouse-focused; the most recent one can do updates, deletes etc. in a transactional way.
Criteria:
Spark Streaming jobs are receiving a lot of small events (avg 10kb)
Developing Custom SparkListener to monitor DAGScheduler in Scala
Requirements
1. IntelliJ IDEA (or eventually sbt alone if you’re adventurous).
Add the following lines to build.sbt (the main configuration file for the sbt project), including the dependency on Apache Spark:
name := "custom-spark-listener"
organization := "pl.jaceklaskowski.spark"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.1"  // use the Spark version you target
Custom Listener -
pl.jaceklaskowski.spark.CustomSparkListener
Create a Scala class — CustomSparkListener — for your custom SparkListener . It should
be under src/main/scala directory (create one if it does not exist).
The aim of the class is to intercept scheduler events about jobs being started and tasks
completed.
package pl.jaceklaskowski.spark
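The class body did not survive extraction. Continuing under the package declared above, here is a sketch that intercepts job starts and stage completions and prints messages in the format shown in the spark-shell session further below; the method bodies are illustrative, not the book's exact code.

import org.apache.spark.scheduler.{SparkListener, SparkListenerJobStart, SparkListenerStageCompleted}

class CustomSparkListener extends SparkListener {
  override def onJobStart(jobStart: SparkListenerJobStart): Unit =
    println(s"[CustomSparkListener] Job started with ${jobStart.stageInfos.size} stages: $jobStart")

  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit = {
    val info = stageCompleted.stageInfo
    println(s"[CustomSparkListener] Stage ${info.stageId} completed with ${info.numTasks} tasks.")
  }
}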
$ sbt package
[info] Loading global plugins from /Users/jacek/.sbt/0.13/plugins
[info] Loading project definition from /Users/jacek/dev/workshops/spark-workshop/solutions/custom-spark-listener/project
[info] Updating {file:/Users/jacek/dev/workshops/spark-workshop/solutions/custom-spark-listener/project/}custom-spark-listener-build...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
[info] Set current project to custom-spark-listener (in build file:/Users/jacek/dev/workshops/spark-workshop/solutions/custom-spark-listener/)
[info] Updating {file:/Users/jacek/dev/workshops/spark-workshop/solutions/custom-spark-listener/}custom-spark-listener...
[info] Resolving jline#jline;2.12.1 ...
[info] Done updating.
[info] Compiling 1 Scala source to /Users/jacek/dev/workshops/spark-workshop/solutions/custom-spark-listener/target/scala-2.11/classes...
[info] Packaging /Users/jacek/dev/workshops/spark-workshop/solutions/custom-spark-listener/target/scala-2.11/custom-spark-listener_2.11-1.0.jar ...
[info] Done packaging.
[success] Total time: 8 s, completed Oct 27, 2016 11:23:50 AM
You should find the resulting jar file with the custom scheduler listener ready under the target/scala-2.11 directory, e.g. target/scala-2.11/custom-spark-listener_2.11-1.0.jar.
Start spark-shell with additional configurations for the extra custom listener and the jar that
includes the class.
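A typical invocation could look as follows (the spark.extraListeners property registers the listener class, and the jar path comes from the packaging step above):

$ ./bin/spark-shell \
    --conf spark.extraListeners=pl.jaceklaskowski.spark.CustomSparkListener \
    --driver-class-path target/scala-2.11/custom-spark-listener_2.11-1.0.jar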
Create a Dataset and execute an action (like count) to start a job as follows:
scala> spark.read.text("README.md").count
[CustomSparkListener] Job started with 2 stages: SparkListenerJobStart(1,1473946006715,WrappedArray(org.apache.spark.scheduler.StageInfo@71515592, org.apache.spark.scheduler.StageInfo@6852819d),{spark.rdd.scope.noOverride=true, spark.rdd.scope={"id":"14","name":"collect"}, spark.sql.execution.id=2})
[CustomSparkListener] Stage 1 completed with 1 tasks.
[CustomSparkListener] Stage 2 completed with 1 tasks.
res0: Long = 7
The lines with [CustomSparkListener] came from your custom Spark listener.
Congratulations! The exercise’s over.
Questions
1. What are the pros and cons of registering a Spark listener on the command line (e.g. with the spark.extraListeners property) versus programmatically from inside a Spark application (e.g. with SparkContext.addSparkListener)?
Developing RPC Environment
at java.lang.reflect.Method.invoke(Method.java:497)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:784)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1039)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:636)
at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:635)
at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:19)
at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:635)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:567)
at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:563)
at scala.tools.nsc.interpreter.ILoop.reallyInterpret$1(ILoop.scala:802)
at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:836)
at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:694)
at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:404)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcZ$sp(SparkILoop.scala:39)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:38)
at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:38)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:213)
at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:38)
at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:94)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:922)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:911)
at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:911)
at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:911)
at org.apache.spark.repl.Main$.main(Main.scala:49)
at org.apache.spark.repl.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:680)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Developing Custom RDD
Working with Datasets from JDBC Data Sources (and PostgreSQL)
Note Download the jar for PostgreSQL JDBC Driver 42.1.1 directly from the Maven repository.
Execute the command to have spark-shell download the jar into the ~/.ivy2/jars directory (i.e. use the --packages option).
Start ./bin/spark-shell with the --driver-class-path command line option and the driver jar.
It will give you the proper setup for accessing PostgreSQL using the JDBC driver.
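A minimal sketch of those two steps (the exact jar file name under ~/.ivy2/jars is an assumption and may differ in your setup):

# 1. Have spark-shell resolve and download the PostgreSQL JDBC driver (it lands in ~/.ivy2/jars)
$ ./bin/spark-shell --packages org.postgresql:postgresql:42.1.1

# 2. Start spark-shell with the driver jar on the driver's class path
#    (--packages alone is not enough here -- see Troubleshooting below)
$ ./bin/spark-shell --driver-class-path ~/.ivy2/jars/org.postgresql_postgresql-42.1.1.jar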
Note Use user and password options to specify the credentials if needed.
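For example (the database and table names follow the PostgreSQL setup later on this page; the credentials are placeholders):

val opts = Map(
  "url" -> "jdbc:postgresql:sparkdb",
  "dbtable" -> "projects",
  "user" -> "jacek",       // placeholder -- use your credentials
  "password" -> "secret")  // placeholder
val projects = spark.read.format("jdbc").options(opts).load
projects.show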
Troubleshooting
If things can go wrong, they sooner or later go wrong. Here is a list of possible issues and
their solutions.
Ensure that the JDBC driver sits on the CLASSPATH. Use --driver-class-path as described
above ( --packages or --jars do not work).
PostgreSQL Setup
Installation
Create Database
Accessing Database
Creating Table
Dropping Database
Installation
Caution This page serves as a cheatsheet for the author so he does not have to search the Internet to find the installation steps.
Note Consult 17.3. Starting the Database Server in the official documentation.
log_statement = 'all'
Tip Add log_statement = 'all' to /usr/local/var/postgres/postgresql.conf on Mac OS X with PostgreSQL installed using brew.
$ postgres -D /usr/local/var/postgres
Create Database
$ createdb sparkdb
Accessing Database
Use psql sparkdb to access the database.
$ psql sparkdb
psql (9.6.2)
Type "help" for help.
sparkdb=#
Execute SELECT version() to know the version of the database server you have connected
to.
Creating Table
Create a table using the CREATE TABLE command, e.g. as in the sketch below.
Execute select * from projects; to verify the records in the projects table.
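For instance, a minimal projects table and a sample row (the schema and the data below are only an illustration, not the exact records used on this page) could be created in psql as follows:

sparkdb=# CREATE TABLE projects (id SERIAL PRIMARY KEY, name VARCHAR(100), website VARCHAR(100));
sparkdb=# INSERT INTO projects (name, website) VALUES ('Apache Spark', 'http://spark.apache.org');
sparkdb=# select * from projects;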
Dropping Database
$ dropdb sparkdb
Causing Stage to Fail
Recipe
Start a Spark cluster, e.g. 1-node Hadoop YARN.
start-yarn.sh
// 2-stage job -- it _appears_ that a stage can be failed only when there is a shuffle
sc.parallelize(0 to 3e3.toInt, 2).map(n => (n % 2, n)).groupByKey.count
Use at least 2 executors so you can kill one and keep the application up and running on the remaining executor.
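One way to get such a setup is to launch spark-shell on YARN with two executors (the command is an example; adjust it to your environment):

$ ./bin/spark-shell --master yarn --num-executors 2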
Courses
Spark courses
Spark Fundamentals I from Big Data University.
Data Science and Engineering with Apache Spark from University of California and
Databricks (includes 5 edX courses):
Books
O’Reilly
Manning
Packt
Spark Cookbook
Apress
Spark SQL — Batch and Streaming Queries Over Structured Data on Massive Scale
The primary difference between Spark SQL’s and the "bare" Spark Core’s RDD computation
models is the framework for loading, querying and persisting structured and semi-structured
data using structured queries that can be expressed using good ol' SQL, HiveQL and the
custom high-level SQL-like, declarative, type-safe Dataset API called Structured Query
DSL.
Tip You can find more information about Spark SQL in my Mastering Spark SQL gitbook.
Spark Structured Streaming — Streaming Datasets
Structured streaming offers a high-level declarative streaming API built on top of Datasets
(inside Spark SQL’s engine) for continuous incremental execution of structured queries.
Tip You can find more information about Spark Structured Streaming in my separate notebook titled Spark Structured Streaming.
Spark Streaming — Streaming RDDs
Tip You can find more information about Spark Streaming in my separate book in the notebook repository at GitBook.
BlockRDD
BlockRDD is an RDD that is created when Spark Streaming’s ReceiverInputDStream is
compute …FIXME
getPartitions Method
getPartitions: Array[Partition]
getPartitions …FIXME
getPreferredLocations Method
getPreferredLocations …FIXME
BlockRDD takes the following to be created:
SparkContext
Collection of BlockIds
Spark GraphX — Distributed Graph Computations
GraphX models graphs as property graphs where vertices and edges can have properties.
Graph
Graph abstract class represents a collection of vertices and edges .
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD
val vertices: RDD[(VertexId, String)] =
sc.parallelize(Seq(
(0L, "Jacek"),
(1L, "Agata"),
(2L, "Julian")))
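The vertices alone do not make a graph yet; a minimal continuation (the edge attributes below are made up for illustration) could be:

val edges: RDD[Edge[String]] =
  sc.parallelize(Seq(
    Edge(0L, 1L, "knows"),
    Edge(1L, 2L, "knows")))

val graph: Graph[String, String] = Graph(vertices, edges)
graph.vertices.count  // 3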
Transformations
mapVertices
mapEdges
mapTriplets
reverse
subgraph
mask
groupEdges
Joins
outerJoinVertices
Computation
aggregateMessages
Creating Graph Instances (Graph object)
fromEdgeTuples
fromEdges
apply
GraphImpl
GraphImpl is the default implementation of Graph abstract class.
Such a situation, in which we need to find the best matching in a weighted bipartite
graph, poses what is known as the stable marriage problem. It is a classical problem
that has a well-known solution, the Gale–Shapley algorithm.
Graph Algorithms
GraphX comes with a set of built-in graph algorithms.
PageRank
Triangle Count
Connected Components
Identifies independent disconnected subgraphs.
Collaborative Filtering
What kinds of people like what kinds of products.