The scheduler watches for newly created Pods that have no Node assigned. For every Pod that the scheduler discovers, the scheduler becomes responsible for finding the best Node for that Pod to run on. Scheduling in general is an extensive field of computer science that takes a wide range of constraints and limitations into account, and each workload may require a different approach to achieve optimal scheduling results. The kube-scheduler provided by the Kubernetes project was designed to provide high throughput by keeping its logic simple. To help in building a scheduler (the default one or a custom one) and to share elements of the scheduling logic, the scheduling framework was implemented. The framework does not provide all the pieces needed to build a new scheduler from scratch; queues, caches, scheduling algorithms and other building blocks are still needed to assemble a fully functional unit. This document describes how the individual pieces are put together and what their roles are in the overall architecture, so a developer can quickly orient themselves in the code.
The default scheduler instance runs a loop indefinitely which, every time there is a pod to schedule, invokes the scheduling logic and makes sure the pod either gets a node assigned or is requeued for future processing. Each iteration consists of a blocking scheduling cycle and a non-blocking binding cycle. The scheduling cycle runs the scheduling algorithm that selects the most suitable node for placing the pod. The binding cycle makes sure the kube-apiserver is made aware of the selected node at the right time. A pod may be bound immediately, or, in the case of gang scheduling, wait until all its sibling pods have a node assigned.
Each scheduling cycle honors the following steps (a simplified sketch of the loop follows the list):
- Get the next pod for scheduling
- Schedule the pod with the provided algorithm
- If the pod fails to be scheduled due to a `FitError`, run the preemption plugin registered in the `PostFilter` extension point (if any) to nominate a node where the pod could run. If preemption succeeds, make the current pod aware of the nominated node. Handle the error, get the next pod and start over.
- If the scheduling algorithm finds a suitable node, store the pod in the scheduler cache (`AssumePod` operation) and run plugins from the `Reserve` and `Permit` extension points, in that order. If any of the plugins fails, end the current scheduling cycle, increase the relevant metrics and handle the scheduling error through the `Error` handler.
- Upon successfully running all extension points, proceed to the binding cycle. At the same time start processing another pod (if there is one).
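A minimal Go sketch of the shape of this loop, with hypothetical types and helper names (`schedulingCycle`, `bindingCycle`, `handleError`), not the actual kube-scheduler code:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// Illustrative stand-ins; the real types live under pkg/scheduler.
type pod struct{ name string }
type scheduleResult struct{ node string }

type scheduler struct {
	queue chan pod
	wg    sync.WaitGroup
}

// schedulingCycle stands in for the blocking part: running the scheduling
// algorithm, assuming the pod in the cache and running Reserve and Permit.
func (s *scheduler) schedulingCycle(ctx context.Context, p pod) (scheduleResult, error) {
	return scheduleResult{node: "node-a"}, nil
}

// bindingCycle stands in for the non-blocking part: WaitOnPermit, PreBind,
// Bind and PostBind.
func (s *scheduler) bindingCycle(ctx context.Context, p pod, r scheduleResult) {
	defer s.wg.Done()
	fmt.Printf("bound %s to %s\n", p.name, r.node)
}

// handleError stands in for the Error handler that requeues the pod.
func (s *scheduler) handleError(p pod, err error) {}

func (s *scheduler) run(ctx context.Context) {
	for p := range s.queue {
		result, err := s.schedulingCycle(ctx, p)
		if err != nil {
			s.handleError(p, err)
			continue
		}
		// Bind asynchronously so the next pod can enter its scheduling
		// cycle while this one is being bound.
		s.wg.Add(1)
		go s.bindingCycle(ctx, p, result)
	}
	s.wg.Wait()
}

func main() {
	s := &scheduler{queue: make(chan pod, 1)}
	s.queue <- pod{name: "example-pod"}
	close(s.queue)
	s.run(context.Background())
}
```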
The binding cycle consists of the following four steps, run in this order (a sketch follows the list):
- Invoking `WaitOnPermit` (internal API) of plugins from the `Permit` extension point. Some plugins from this extension point may request an operation that requires waiting for a condition (e.g. waiting for additional resources to be available or for all pods in a gang to be assumed). Under the hood, `WaitOnPermit` waits for such a condition to be met within a timeout threshold.
- Invoking plugins from the `PreBind` extension point
- Invoking plugins from the `Bind` extension point
- Invoking plugins from the `PostBind` extension point

In case processing of any of these extension points fails, the `Unreserve` operation of all `Reserve` plugins is invoked (e.g. to free resources allocated for a gang of pods).
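A schematic sketch of this flow, using a simplified, hypothetical stand-in for the real per-profile framework (whose method names and signatures differ between releases):

```go
// Hypothetical sketch of the binding cycle; bindingFramework is a simplified
// stand-in for the real per-profile framework.Framework.
package sketch

import "context"

type bindingFramework interface {
	WaitOnPermit(ctx context.Context, podName string) error
	RunPreBindPlugins(ctx context.Context, podName, nodeName string) error
	RunBindPlugins(ctx context.Context, podName, nodeName string) error
	RunPostBindPlugins(ctx context.Context, podName, nodeName string)
	RunUnreservePlugins(ctx context.Context, podName, nodeName string)
}

// bind runs the four binding-cycle steps in order; if any of the first three
// fails, resources reserved during the scheduling cycle are released via the
// Unreserve operation of the Reserve plugins (the assumed pod would also be
// forgotten, not shown here).
func bind(ctx context.Context, fwk bindingFramework, podName, nodeName string) error {
	steps := []func() error{
		func() error { return fwk.WaitOnPermit(ctx, podName) },
		func() error { return fwk.RunPreBindPlugins(ctx, podName, nodeName) },
		func() error { return fwk.RunBindPlugins(ctx, podName, nodeName) },
	}
	for _, step := range steps {
		if err := step(); err != nil {
			fwk.RunUnreservePlugins(ctx, podName, nodeName)
			return err
		}
	}
	fwk.RunPostBindPlugins(ctx, podName, nodeName)
	return nil
}
```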
The scheduler codebase spans several locations. The most notable ones are:
- cmd/kube-scheduler/app: location of the controller code alongside definition of CLI arguments (honors the standard setup for all Kubernetes controllers)
- pkg/scheduler: the default scheduler codebase root directory
- pkg/scheduler/core: location of the default scheduling algorithm
- pkg/scheduler/framework: scheduling framework alongside plugins
- pkg/scheduler/internal: implementation of the cache, queues and other internal elements
- staging/src/k8s.io/kube-scheduler: location of ComponentConfig API types
- test/e2e/scheduling: scheduling e2e tests
- test/integration/scheduler: scheduling integration tests
- test/integration/scheduler_perf: scheduling performance benchmarks
Code under cmd/kube-scheduler/app
is responsible for collecting scheduler
configuration and initializing logic allowing the kube-scheduler to run
as part of the Kubernetes control plane. The code includes:
- Initializing command line options (along with a default `ComponentConfig`) and their validation
- Initializing metrics (`/metrics`), health check (`/healthz`) and other handlers (authorization, authentication, panic recovery, etc.)
- Reading and defaulting the `KubeSchedulerConfiguration` configuration
- Building a registry with plugins (in-tree, out-of-tree)
- Initializing the scheduler with various options such as profiles, algorithm source, pod backoff, etc.
- Invoking `LogOrWriteConfig`, which logs the final scheduler configuration for debugging purposes
- Right before running, `/configz` is registered, the events broadcaster is started, leader election is initiated, and the server with all the configured handlers and informers is started.
Once initialized, the scheduler can run.
In more detail, there's a `Setup` function accomplishing what is essentially the initialization of the scheduler's core process. First, it validates the options that have been passed through (the flags added in `NewSchedulerCommand()` are set directly on this options struct's fields). If the options passed so far don't raise any errors, it then calls `opts.Config()`, which sets up the final internal settings (including secure serving, leader election and clients) and begins parsing options related to the algorithm source (like loading config files and initializing empty profiles, as well as handling deprecated options like policy config). The next lines call `c.Complete()` to complete the config by filling in any empty values. At this point any out-of-tree plugins are registered by creating a blank registry and adding entries in that registry for each plugin's New function. It should be noted that the Registry is simply a map of plugin names to their factory functions. For the default scheduler, this step does nothing (because our main function in cmd/kube-scheduler/scheduler.go passes nothing to `NewSchedulerCommand()`). This means the default set of plugins is initialized in `scheduler.New()`.
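For illustration, an out-of-tree plugin would typically be wired in through `app.NewSchedulerCommand` in a custom main function. A sketch, where `noopPlugin` and its factory are hypothetical and the factory signature matches older releases (newer ones also receive a `context.Context`):

```go
// main.go of a scheduler binary that bundles an out-of-tree plugin.
package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/kubernetes/cmd/kube-scheduler/app"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// noopPlugin is a hypothetical plugin implementing only the bare
// framework.Plugin interface.
type noopPlugin struct{}

func (p *noopPlugin) Name() string { return "NoopPlugin" }

// newNoopPlugin is the factory added to the out-of-tree registry.
func newNoopPlugin(_ runtime.Object, _ framework.Handle) (framework.Plugin, error) {
	return &noopPlugin{}, nil
}

func main() {
	// WithPlugin adds an entry to the out-of-tree registry, which is merged
	// with the in-tree registry during Setup.
	cmd := app.NewSchedulerCommand(app.WithPlugin("NoopPlugin", newNoopPlugin))
	if err := cmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```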
Given the initialization is performed outside the scheduling framework, different consumers of the framework can initialize the environment differently to cover their needs. For example, a simulator can inject its own objects through informers, or custom plugins may be provided instead of the default ones. Known consumers of the scheduling framework include the default kube-scheduler itself, simulators and custom schedulers.
The code is located under pkg/scheduler
.
This is where the implementation of the default scheduler lives.
Various elements of the scheduler are initialized and put together here:
- Default scheduling options such as node percentage, initial and maximum backoff, profiles
- Scheduler cache and queues
- Scheduling profiles instantiated to tailor a framework for each profile to better suit pod placement (each profile defines a set of plugins to use)
- Handler functions for getting the next pod for scheduling (`NextPod`) and for error handling (`Error`)
The following steps are taken during the process of creating a scheduler instance (a sketch mirroring these steps follows the list):
- Scheduler cache is initialized
- Both in-tree and out-of-tree registries with plugins are merged together
- Metrics are registered
- A configurator builds the scheduler instance (wiring the cache, plugin registry, scheduling algorithm and other elements together)
- Event handlers are registered to allow the scheduler to react on changes in PVs, PVCs, services and other objects relevant for scheduling (eventually, each plugin will define a set of events on which it reacts, see kubernetes/kubernetes#100347 for more details).
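A hypothetical sketch mirroring these steps (all names are illustrative, not the real pkg/scheduler API); note in particular that merging the registries fails on duplicate plugin names:

```go
package sketch

import "fmt"

type cache struct{}
type registry map[string]func() (interface{}, error)

type scheduler struct {
	cache    *cache
	registry registry
}

func registerMetrics()              {}
func addEventHandlers(s *scheduler) {}

// configure stands in for the configurator that wires the cache, the merged
// plugin registry, the scheduling algorithm and other elements together.
func configure(c *cache, r registry) *scheduler {
	return &scheduler{cache: c, registry: r}
}

// newScheduler mirrors the order of operations described above.
func newScheduler(inTree, outOfTree registry) (*scheduler, error) {
	c := &cache{} // 1. initialize the scheduler cache

	// 2. merge the in-tree and out-of-tree registries; duplicates are an error
	merged := registry{}
	for name, factory := range inTree {
		merged[name] = factory
	}
	for name, factory := range outOfTree {
		if _, ok := merged[name]; ok {
			return nil, fmt.Errorf("plugin %q is already registered", name)
		}
		merged[name] = factory
	}

	registerMetrics() // 3. register metrics

	s := configure(c, merged) // 4. configurator builds the scheduler instance

	addEventHandlers(s) // 5. react to changes of pods, nodes, PVs, PVCs, services, ...

	return s, nil
}
```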
The following diagram shows how the individual elements are connected once initialized. Event handlers make sure pods are properly enqueued in the scheduling queues and that the cache is updated with pods and nodes as they change (to provide an up-to-date snapshot). The scheduling algorithm and the binding cycle have the right instances of the framework available (one instance of the framework per profile).
The scheduling framework code is currently located under pkg/scheduler/framework. It contains the various plugins responsible (among other things) for filtering and scoring nodes, which serve as building blocks for any scheduling algorithm.
When a plugin is initialized, it is passed a framework handle which provides interfaces to access and/or manipulate pods, nodes, the clientset, the event recorder and other facilities a plugin needs to implement its functionality.
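As an illustration, a minimal Filter plugin might look like the sketch below. The plugin itself (`NodeNamePrefix`) and its annotation key are hypothetical, and the exact signatures of the framework interfaces may differ between releases:

```go
package nodenameprefix

import (
	"context"
	"strings"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

const Name = "NodeNamePrefix"

// NodeNamePrefix filters out nodes whose name does not start with a prefix
// taken from a (hypothetical) pod annotation.
type NodeNamePrefix struct {
	handle framework.Handle
}

var _ framework.FilterPlugin = &NodeNamePrefix{}

func (pl *NodeNamePrefix) Name() string { return Name }

// Filter marks a node as infeasible when its name does not match the prefix
// requested by the pod.
func (pl *NodeNamePrefix) Filter(ctx context.Context, state *framework.CycleState,
	pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
	prefix, ok := pod.Annotations["example.com/node-prefix"]
	if !ok {
		return nil // no constraint, the node is feasible
	}
	if strings.HasPrefix(nodeInfo.Node().Name, prefix) {
		return nil
	}
	return framework.NewStatus(framework.Unschedulable, "node name does not match requested prefix")
}

// New is the factory the scheduler calls with the plugin's args and the
// framework handle (which gives access to informers, clientset, etc.).
func New(_ runtime.Object, h framework.Handle) (framework.Plugin, error) {
	return &NodeNamePrefix{handle: h}, nil
}
```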
The cache is responsible for capturing the current state of a cluster. It keeps a list of nodes and assumed pods alongside the states of pods and images. The cache provides methods for reconciling pod and node objects (invoked through event handlers), keeping the state of the cluster up to date, and it allows the snapshot of the cluster (used to pin the cluster state while a scheduling algorithm runs) to be updated with the latest state at the beginning of each scheduling cycle.

The cache also provides an assume operation, which temporarily stores a pod in the cache and makes it look, to all consumers of the snapshot, as if the pod were already running on the designated node. The assume operation exists to remove the delay before the pod actually gets updated on the kube-apiserver side, thus increasing the scheduler's throughput. The following operations manipulate the assumed pods (a sketch of the relevant part of the cache interface follows the list):
- `AssumePod`: signals that the scheduling algorithm found a feasible node, so the next pod can be attempted while the current pod enters the binding cycle
- `FinishBinding`: signals that `Bind` finished, so the pod can be removed from the list of assumed pods
- `ForgetPod`: removes the pod from the list of assumed pods; used in case the pod fails to get processed successfully in the binding cycle (e.g. during `Reserve`, `Permit`, `PreBind` or `Bind` evaluation)
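A sketch of the subset of the cache interface relevant to assumed pods; the real interface lives under pkg/scheduler/internal/cache, has many more methods, and its exact signatures may differ between releases:

```go
package sketch

import v1 "k8s.io/api/core/v1"

// Cache is a hedged, partial sketch of the scheduler's internal cache.
type Cache interface {
	// AssumePod adds the pod to the cache as if it were already running on
	// its designated node, before the binding is confirmed by the apiserver.
	AssumePod(pod *v1.Pod) error

	// FinishBinding signals that the Bind step completed, so the cache can
	// expire the assumed pod once the real pod object arrives via informers.
	FinishBinding(pod *v1.Pod) error

	// ForgetPod removes an assumed pod, e.g. when Reserve, Permit, PreBind
	// or Bind fails during the binding cycle.
	ForgetPod(pod *v1.Pod) error
}
```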
The cache keeps track of the following three metrics:
- `scheduler_cache_size_assumed_pods`: number of pods in the assumed pods list
- `scheduler_cache_size_pods`: number of pods in the cache
- `scheduler_cache_size_nodes`: number of nodes in the cache
The snapshot captures the state of a cluster, carrying information about all nodes in the cluster and the objects located on each node: the node objects themselves, the pods assigned to each node, the resources requested by all pods on each node, each node's allocatable resources, pulled images and other information needed to make a scheduling decision. Every time a pod is scheduled, a snapshot of the current state of the cluster is captured. This avoids the case where a pod or node changes while plugins are being processed, which might lead to data inconsistency as some plugins could get a different view of the cluster.
A configurator builds the scheduler instance by wiring plugins, cache, queues, handlers and other elements together. Each profile is initialized with its own framework (with all frameworks sharing informers, event recorders, etc.).
At this point it is still possible to have the configurator create the instance from a policy file. However, this approach is deprecated and will eventually be removed, leaving the kube-scheduler ComponentConfig as the only way to provide the configuration.
The codebase defines a `ScheduleAlgorithm` interface. Any implementation of the interface can be used as a scheduling algorithm. It has two methods (a sketch of the interface follows the list):
- `Schedule`: responsible for scheduling a pod using plugins from the `PreFilter` up to the `NormalizeScore` extension point; it provides a `ScheduleResult` containing the scheduling decision (the most suitable node) with additional accompanying information such as how many nodes were evaluated and how many nodes were found feasible for scheduling.
- `Extenders`: currently exposed only for testing
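A hedged sketch of what this interface looks like (the real definition lives under pkg/scheduler/core and its exact signatures have changed between releases):

```go
package sketch

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// ScheduleResult carries the decision made by Schedule.
type ScheduleResult struct {
	// SuggestedHost is the node selected for the pod.
	SuggestedHost string
	// EvaluatedNodes is the number of nodes inspected during scheduling.
	EvaluatedNodes int
	// FeasibleNodes is the number of nodes found feasible for the pod.
	FeasibleNodes int
}

type ScheduleAlgorithm interface {
	// Schedule runs the PreFilter..NormalizeScore extension points and
	// returns the most suitable node for the pod.
	Schedule(ctx context.Context, fwk framework.Framework, state *framework.CycleState, pod *v1.Pod) (ScheduleResult, error)
	// Extenders is currently exposed only for testing.
	Extenders() []framework.Extender
}
```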
Each cycle of the default algorithm implementation consists of the following steps (a sketch of the weighted scoring appears after the list):
- Taking the current snapshot from the scheduling cache
- Filter out all nodes
not feasible for scheduling a pod
- Run PreFilter plugins first (preprocessing phase, e.g. computing pod [anti-]affinity relations)
- Run Filter plugins in parallel: filter out all nodes which do not satisfy the pod's constraints (e.g. sufficient resources, node affinity, etc.), including running filter extenders
- Run PostFilter plugins if no node can fit the incoming pod
- In case there are at least two feasible nodes for scheduling, run scoring plugins:
- Run PreScore plugins first (preprocessing phase)
- Run Score plugins in parallel: each node is given a score vector (each coordinate corresponding to one plugin)
- Run NormalizeScore plugins: to bring each plugin's scores into the [0, 100] interval
- Compute the weighted score for each node (each score plugin can have a weight assigned indicating how much its score is preferred over others)
- Run score extenders and add their scores to the total score of each node
- Select the node with the highest score and give it back. If there is only a single feasible node, skip the `PreScore`, `Score` and `NormalizeScore` extension points and give back that node right away. If there is no feasible node, report it.
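As a minimal illustration of the weighted scoring step, here is a sketch using hypothetical types (the real logic lives in the framework's score-plugin runner and the algorithm's node selection):

```go
package sketch

// pluginScore is a normalized score in [0, 100] produced by one Score plugin
// for one node, together with that plugin's configured weight.
type pluginScore struct {
	score  int64
	weight int64
}

// nodeTotal computes the weighted sum used to rank a node: each plugin's
// normalized score multiplied by the plugin's weight. Extender scores
// (not shown) are added on top of this total.
func nodeTotal(scores []pluginScore) int64 {
	var total int64
	for _, s := range scores {
		total += s.score * s.weight
	}
	return total
}
```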
Be aware of:
- If a plugin provides score normalization, it needs to return a non-nil value when `ScoreExtensions()` is invoked (see the sketch below)
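A sketch of what that looks like in practice: a hypothetical Score plugin whose `ScoreExtensions()` returns the plugin itself so its `NormalizeScore` gets run (toy scoring logic; exact framework signatures may differ between releases).

```go
package shortname

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// ShortName is a hypothetical plugin that prefers nodes with shorter names.
type ShortName struct{}

var _ framework.ScorePlugin = &ShortName{}

func (pl *ShortName) Name() string { return "ShortName" }

// Score returns a raw, unnormalized score per node.
func (pl *ShortName) Score(ctx context.Context, state *framework.CycleState,
	pod *v1.Pod, nodeName string) (int64, *framework.Status) {
	return int64(1000 - len(nodeName)), nil
}

// ScoreExtensions must return a non-nil value for NormalizeScore to be run.
func (pl *ShortName) ScoreExtensions() framework.ScoreExtensions {
	return pl
}

// NormalizeScore rescales the raw scores into the [0, MaxNodeScore] interval.
func (pl *ShortName) NormalizeScore(ctx context.Context, state *framework.CycleState,
	pod *v1.Pod, scores framework.NodeScoreList) *framework.Status {
	var max int64 = 1
	for _, s := range scores {
		if s.Score > max {
			max = s.Score
		}
	}
	for i := range scores {
		scores[i].Score = scores[i].Score * framework.MaxNodeScore / max
	}
	return nil
}
```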