Scheduling
Contents
Filter scheduler
Filters
AggregateCoreFilter
AggregateDiskFilter
AggregateImagePropertiesIsolation
AggregateInstanceExtraSpecsFilter
AggregateIoOpsFilter
AggregateMultiTenancyIsolation
AggregateNumInstancesFilter
AggregateRamFilter
AggregateTypeAffinityFilter
AllHostsFilter
AvailabilityZoneFilter
ComputeCapabilitiesFilter
ComputeFilter
CoreFilter
NUMATopologyFilter
DifferentHostFilter
DiskFilter
GroupAffinityFilter
GroupAntiAffinityFilter
ImagePropertiesFilter
IsolatedHostsFilter
IoOpsFilter
JsonFilter
MetricsFilter
NumInstancesFilter
PciPassthroughFilter
RamFilter
RetryFilter
SameHostFilter
ServerGroupAffinityFilter
ServerGroupAntiAffinityFilter
SimpleCIDRAffinityFilter
TrustedFilter
TypeAffinityFilter
Weights
Chance scheduler
Utilization aware scheduling
Host aggregates and availability zones
Command-line interface
Configure scheduler to support host aggregates
Example: Specify compute hosts with SSDs
XenServer hypervisor pools to support live migration
Configuration reference
Compute uses the nova-scheduler service to determine how to dispatch compute requests. For example, the nova-scheduler service determines on which host a VM should launch. In the context of filters, the term host means a physical node that has a nova-compute service running on it. You can configure the scheduler through a variety of options.
Compute is configured with the following default scheduler options in the /etc/nova/nova.conf file:
scheduler_driver_task_period = 60
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter, DiskFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
By default, the scheduler_driver is configured as a filter scheduler, as described in the next section. In the default configuration, this scheduler considers hosts that meet all the following criteria:
Have not been attempted for scheduling purposes (RetryFilter).
Are in the requested availability zone (AvailabilityZoneFilter).
Have sufficient RAM available (RamFilter).
Have sufficient disk space available for root and ephemeral storage (DiskFilter).
Can service the request (ComputeFilter).
Satisfy the extra specs associated with the instance type (ComputeCapabilitiesFilter).
Satisfy any architecture, hypervisor type, or virtual machine mode properties specified on the instance's image properties (ImagePropertiesFilter).
Are on a different host than other instances of a group, if requested (ServerGroupAntiAffinityFilter).
Are in a set of group hosts, if requested (ServerGroupAffinityFilter).
The scheduler caches its list of available hosts; use the scheduler_driver_task_period option to specify how often the list is updated.
Note
Do not configure service_down_time to be much smaller than scheduler_driver_task_period; otherwise, hosts appear to be dead while the host list is being cached.
For information about the volume scheduler, see the Block Storage section of OpenStack Administrator Guide (http://docs.openstack.org/admin-
guide/blockstorage-manage-volumes.html).
When evacuating instances from a host, the scheduler service honors the target host defined by the administrator on the evacuate command. If a target is not defined by the administrator, the scheduler determines the target host. For information about instance evacuation, see the Evacuate instances (http://docs.openstack.org/admin-guide/compute-node-down.html#evacuate-instances) section of the OpenStack Administrator Guide.
Filter scheduler¶
The filter scheduler (nova.scheduler.filter_scheduler.FilterScheduler) is the default scheduler for scheduling virtual machine instances. It supports filtering and weighting to make informed decisions on where a new instance should be created.
Filters¶
When the filter scheduler receives a request for a resource, it first applies filters to determine which hosts are eligible for consideration when dispatching a resource. Filters are binary: either a host is accepted by the filter, or it is rejected. Hosts that are accepted by the filters are then processed by a different algorithm to decide which hosts to use for that request, as described in the Weights section.
(Figure: Filtering)
The scheduler_available_filters configuration option in nova.conf provides the Compute service with the list of the filters that are used by the scheduler. The default setting specifies all of the filters that are included with the Compute service:
scheduler_available_filters = nova.scheduler.filters.all_filters
This configuration option can be specified multiple times. For example, if you implemented your own custom filter in Python called myfilter.MyFilter and you wanted to use both the built-in filters and your custom filter, your nova.conf file would contain:
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = myfilter.MyFilter
The scheduler_default_filters configuration option in nova.conf defines the list of filters that are applied by the nova-scheduler service. The default filters are:
RetryFilter
AvailabilityZoneFilter
RamFilter
DiskFilter
ComputeFilter
ComputeCapabilitiesFilter
ImagePropertiesFilter
ServerGroupAntiAffinityFilter
ServerGroupAffinityFilter
The following sections describe the available filters.
AggregateCoreFilter¶
Filters host by CPU core numbers with a per-aggregate cpu_allocation_ratio value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see Host aggregates and availability zones. See also CoreFilter.
AggregateDiskFilter¶
Filters host by disk allocation with a per-aggregate disk_allocation_ratio value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see Host aggregates and availability zones. See also DiskFilter.
AggregateImagePropertiesIsolation¶
Matches properties defined in an image’s metadata against those of aggregates to determine host matches:
If a host belongs to an aggregate and the aggregate defines one or more metadata that matches an image’s properties, that host is a candidate to boot the image’s instance.
If a host does not belong to any aggregate, it can boot instances from all images.
For example, the following aggregate myWinAgg has the Windows operating system as metadata (named ‘windows’):
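As an illustrative sketch (the aggregate and host names come from this example, while the os_distro metadata key is an assumption; any image property key set as aggregate metadata behaves the same way), such an aggregate could be created with:
# the os_distro key is an assumed example; use the image property key you match on
$ nova aggregate-create myWinAgg
$ nova aggregate-add-host myWinAgg sf-devel
$ nova aggregate-set-metadata myWinAgg os_distro=windows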
In this example, because the following Win-2012 image has the windows property, it boots on the sf-devel host (all other filters being equal):
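The matching property could be set on the image like this (again assuming the os_distro key; the image name is used for readability, and your client may require the image ID instead):
# os_distro is an assumed example key; Win-2012 stands in for the image name or ID
$ glance image-update Win-2012 --property os_distro=windows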
You can configure the AggregateImagePropertiesIsolation filter by using the following options in the nova.conf file:
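The two options involved are aggregate_image_properties_isolation_namespace and aggregate_image_properties_isolation_separator; a sketch of how they could be set (the namespace value is illustrative, and the period is the usual separator):
[DEFAULT]
# my_namespace is an example value; only image properties in this namespace are checked
aggregate_image_properties_isolation_namespace = my_namespace
aggregate_image_properties_isolation_separator = .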
AggregateInstanceExtraSpecsFilter¶
Matches properties defined in extra specs for an instance type against admin-defined properties on a host aggregate. Works with specifications that are scoped with aggregate_instance_extra_specs. Multiple values can be given, as a comma-separated list. For backward compatibility, it also works with non-scoped specifications; this action is highly discouraged because it conflicts with the ComputeCapabilitiesFilter when you enable both filters. For information about how to use this filter, see the Host aggregates and availability zones section.
AggregateIoOpsFilter¶
Filters host by I/O operations with a per-aggregate max_io_ops_per_host value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the minimum value will be used. For information about how to use this filter, see Host aggregates and availability zones. See also IoOpsFilter.
AggregateMultiTenancyIsolation¶
Ensures that the tenant (or list of tenants) creates all instances only on specific Host aggregates and availability zones. If a host is in an aggregate that has the filter_tenant_id metadata key, the host creates instances from only that tenant or list of tenants. A host can be in different aggregates. If a host does not belong to an aggregate with the metadata key, the host can create instances from all tenants. This setting does not isolate the aggregate from other tenants. Any other tenant can continue to build instances on the specified aggregate.
AggregateNumInstancesFilter¶
Filters host by number of instances with a per-aggregate max_instances_per_host value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value will be used. For information about how to use this filter, see Host aggregates and availability zones. See also NumInstancesFilter.
AggregateRamFilter¶
Filters host by RAM allocation of instances with a per-aggregate ram_allocation_ratio value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and thus more than one value is found, the minimum value will be used. For information about how to use this filter, see Host aggregates and availability zones. See also RamFilter.
AggregateTypeAffinityFilter¶
This filter passes hosts if no instance_type key is set or the instance_type aggregate metadata value contains the name of the instance_type requested. The value of the instance_type metadata entry is a string that may contain either a single instance_type name or a comma-separated list of instance_type names, such as m1.nano or m1.nano,m1.small. For information about how to use this filter, see Host aggregates and availability zones. See also TypeAffinityFilter.
AllHostsFilter¶
This is a no-op filter. It does not eliminate any of the available hosts.
AvailabilityZoneFilter¶
Filters hosts by availability zone. You must enable this filter for the scheduler to respect availability zones in requests.
ComputeCapabilitiesFilter¶
Matches properties defined in extra specs for an instance type against compute capabilities. If an extra specs key contains a colon (:), anything before the colon is treated as a namespace and anything after the colon is treated as the key to be matched. If a namespace is present and is not capabilities, the filter ignores the namespace. For backward compatibility, the filter also treats the extra specs key as the key to be matched if no namespace is present; this action is highly discouraged because it conflicts with the AggregateInstanceExtraSpecsFilter when you enable both filters.
ComputeFilter¶
Passes all hosts that are operational and enabled.
CoreFilter¶
Only schedules instances on hosts if sufficient CPU cores are available. If this filter is not set, the scheduler might over-provision a host based on cores. For example, the virtual cores running on an instance may exceed the physical cores.
You can configure this filter to enable a fixed amount of vCPU overcommitment by using the cpu_allocation_ratio configuration option in nova.conf. The default setting is:
cpu_allocation_ratio = 16.0
With this setting, if 8 vCPUs are on a node, the scheduler allows instances up to 128 vCPUs to be run on that node.
To disallow vCPU overcommitment, set:
cpu_allocation_ratio = 1.0
Note
The Compute API always returns the actual number of CPU cores available on a compute node regardless of the value of the cpu_allocation_ratio configuration key. As a result, changes to the cpu_allocation_ratio are not reflected via the command-line clients or the dashboard. Changes to this configuration key are only taken into account internally in the scheduler.
NUMATopologyFilter¶
Filters hosts based on the NUMA topology that was specified for the instance through the use of flavor extra_specs in combination with the image properties, as described in detail in the related nova-spec document (http://specs.openstack.org/openstack/nova-specs/specs/juno/implemented/virt-driver-numa-placement.html). The filter will try to match the exact NUMA cells of the instance to those of the host. It will consider the standard over-subscription limits for each host NUMA cell, and provide limits to the compute host accordingly.
Note
If the instance has no topology defined, it will be considered for any host. If the instance has a topology defined, it will be considered only for NUMA-capable hosts.
DifferentHostFilter¶
Schedules the instance on a different host from a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using different_host as the key and a list of instance UUIDs as the value. This filter is the opposite of the SameHostFilter. Using the nova command-line client, use the --hint flag.
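For example (the server name is illustrative, and the image, flavor, and instance UUIDs mirror the API request shown below):
# the image, flavor, and instance UUIDs below are example values
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
  --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the os:scheduler_hints key. For example: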
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"different_host": [
"a0cf03a5-d921-4877-bb5c-86d26cf818e1",
"8c19174f-4220-44f0-824a-cd1eeef10287"
]
}
}
DiskFilter¶
Only schedules instances on hosts if there is sufficient disk space available for root and ephemeral storage.
You can configure this filter to enable a fixed amount of disk overcommitment by using the disk_allocation_ratio configuration option in the nova.conf configuration file. The default setting disables the possibility of the overcommitment and allows launching a VM only if there is a sufficient amount of disk space available on a host:
disk_allocation_ratio = 1.0
DiskFilter always considers the value of the disk_available_least property and not the value of the free_disk_gb property of a hypervisor’s statistics:
$ nova hypervisor-stats
+----------------------+-------+
| Property | Value |
+----------------------+-------+
| count | 1 |
| current_workload | 0 |
| disk_available_least | 29 |
| free_disk_gb | 35 |
| free_ram_mb | 3441 |
| local_gb | 35 |
| local_gb_used | 0 |
| memory_mb | 3953 |
| memory_mb_used | 512 |
| running_vms | 0 |
| vcpus | 2 |
| vcpus_used | 0 |
+----------------------+-------+
As you can see from the command output above, the amount of available disk space can be less than the amount of free disk space. This happens because the disk_available_least property accounts for the virtual size rather than the actual size of images. If you use an image format that is sparse or copy-on-write, so that each virtual instance does not require a 1:1 allocation of virtual disk to physical storage, it may be useful to allow the overcommitment of disk space.
To enable scheduling instances while overcommitting disk resources on the node, adjust the value of the disk_allocation_ratio configuration option to a value greater than 1.0:
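For example (the ratio of 2.0 is illustrative; pick a value that matches how sparse your images actually are):
# 2.0 is an example value, not a recommended default
disk_allocation_ratio = 2.0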
Note
If the value is set to greater than 1, we recommend keeping track of the free disk space, because a value approaching 0 may result in the incorrect functioning of the instances that are using it at that moment.
GroupAffinityFilter¶
The GroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must pass a scheduler hint, using group as the key and an arbitrary name as the value. Using the nova command-line client, use the --hint flag.
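For example (IMAGE_ID and the group name foo are placeholders):
# IMAGE_ID and the group name foo are placeholders
$ nova boot --image IMAGE_ID --flavor 1 --hint group=foo server-1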
Note
This filter should not be enabled at the same time as GroupAntiAffinityFilter, or neither filter will work properly.
GroupAntiAffinityFilter¶
The GroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must pass a scheduler hint, using group as the key and an arbitrary name as the value. Using the nova command-line client, use the --hint flag.
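For example (IMAGE_ID and the group name foo are placeholders):
# IMAGE_ID and the group name foo are placeholders
$ nova boot --image IMAGE_ID --flavor 1 --hint group=foo server-1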
Note
This filter should not be enabled at the same time as GroupAffinityFilter, or neither filter will work properly.
ImagePropertiesFilter¶
Filters hosts based on properties defined on the instance’s image. It passes hosts that can support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, hypervisor version (for Xen hypervisor type only), and virtual machine mode.
For example, an instance might require a host that runs an ARM-based processor, and QEMU as the hypervisor. You can decorate an image with these properties by using:
architecture
describes the machine architecture required by the image. Examples are i686, x86_64, arm, and ppc64.
hypervisor_type
describes the hypervisor required by the image. Examples are xen, qemu, and xenapi.
hypervisor_version_requires
describes the hypervisor version required by the image. The property is supported for Xen hypervisor type only. It can be used to enable support for multiple
hypervisor versions, and to prevent instances with newer Xen tools from being provisioned on an older version of a hypervisor. If available, the property
value is compared to the hypervisor version of the compute host.
To filter the hosts by the hypervisor version, add the hypervisor_version_requires property on the image as metadata and pass an operator and a required hypervisor version as its value (see the example after this list).
vm_mode
describes the hypervisor application binary interface (ABI) required by the image. Examples are xen for Xen 3.0 paravirtual ABI, hvm for native ABI, uml for
User Mode Linux paravirtual ABI, exe for container virt executable ABI.
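For the hypervisor_version_requires property referenced above, a possible command is (the image ID and the version constraint are illustrative):
# img-uuid and the ">=4.3" constraint are example values
$ glance image-update img-uuid --property hypervisor_version_requires=">=4.3"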
IsolatedHostsFilter¶
Allows the admin to define a special (isolated) set of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated images. The flag restrict_isolated_hosts_to_isolated_images can be used to force isolated hosts to only run isolated images.
The admin must specify the isolated set of images and hosts in the nova.conf file using the isolated_hosts and isolated_images configuration options. For example:
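A minimal sketch (the host names and image UUIDs are placeholders):
# host names and image UUIDs below are placeholders
isolated_hosts = server1, server2
isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09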
IoOpsFilter¶
The IoOpsFilter filters hosts by concurrent I/O operations. Hosts with too many concurrent I/O operations will be filtered out. The max_io_ops_per_host option specifies the maximum number of I/O-intensive instances allowed to run on a host. A host will be ignored by the scheduler if more than max_io_ops_per_host instances in build, resize, snapshot, migrate, rescue, or unshelve task states are running on it.
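For example, in nova.conf (the value shown is only illustrative):
# maximum number of I/O-intensive instances per host; 8 is an example value
max_io_ops_per_host = 8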
JsonFilter¶
The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON format. The following operators are supported:
=
<
>
in
<=
>=
not
or
and
The filter supports the following variables:
$free_ram_mb
$free_disk_mb
$total_usable_ram_mb
$vcpus_total
$vcpus_used
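Using the nova command-line client, a request could pass the hint like this (the image, flavor, server name, and RAM threshold are illustrative):
# image, flavor, and threshold values are examples
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint query='[">=","$free_ram_mb",1024]' server-1
With the API, use the os:scheduler_hints key. For example: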
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"query": "[>=,$free_ram_mb,1024]"
}
}
MetricsFilter¶
Filters hosts based on the meters configured in weight_setting. Only hosts with the available meters are passed, so that the metrics weigher will not fail due to these hosts.
NumInstancesFilter¶
Hosts that have more instances running than specified by the max_instances_per_host option are filtered out when this filter is in place.
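For example, in nova.conf (the limit shown is only illustrative):
# maximum number of instances per host; 50 is an example value
max_instances_per_host = 50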
PciPassthroughFilter¶
The filter schedules instances on a host if the host has devices that meet the device requests in the extra_specs attribute for the flavor.
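As a hedged sketch, a flavor could request two devices matching a PCI alias named a1 (the flavor name, alias, and count are illustrative, and assume a matching pci_alias entry is configured on the compute hosts):
# m1.large, a1, and the count of 2 are example values
$ nova flavor-key m1.large set "pci_passthrough:alias"="a1:2"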
RamFilter¶
Only schedules instances on hosts that have sufficient RAM available. If this filter is not set, the scheduler may over-provision a host based on RAM (for example, the RAM allocated by virtual machine instances may exceed the physical RAM).
You can configure this filter to enable a fixed amount of RAM overcommitment by using the ram_allocation_ratio configuration option in nova.conf. The default setting is:
ram_allocation_ratio = 1.5
This setting enables 1.5 GB instances to run on any compute node with 1 GB of free RAM.
RetryFilter¶
Filters out hosts that have already been attempted for scheduling purposes. If the scheduler selects a host to respond to a service request, and the host fails to respond to the request, this filter prevents the scheduler from retrying that host for the service request.
This filter is only useful if the scheduler_max_attempts configuration option is set to a value greater than zero.
SameHostFilter¶
Schedules the instance on the same host as another instance in a set of instances. To take advantage of this filter, the requester must pass a scheduler hint, using same_host as the key and a list of instance UUIDs as the value. This filter is the opposite of the DifferentHostFilter. Using the nova command-line client, use the --hint flag.
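For example (the server name is illustrative, and the image, flavor, and instance UUIDs mirror the API request shown below):
# the image, flavor, and instance UUIDs below are example values
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
  --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
With the API, use the os:scheduler_hints key: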
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"same_host": [
"a0cf03a5-d921-4877-bb5c-86d26cf818e1",
"8c19174f-4220-44f0-824a-cd1eeef10287"
]
}
}
ServerGroupAffinityFilter¶
The ServerGroupAffinityFilter ensures that an instance is scheduled on to a host from a set of group hosts. To take advantage of this filter, the requester must create a server group with an affinity policy, and pass a scheduler hint, using group as the key and the server group UUID as the value. Using the nova command-line tool, use the --hint flag.
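A sketch of the workflow (the group name, IMAGE_ID, and SERVER_GROUP_UUID are placeholders; depending on the client version, the policy may instead be passed positionally, as in nova server-group-create group-1 affinity):
# group name, IMAGE_ID, and SERVER_GROUP_UUID are placeholders
$ nova server-group-create --policy affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1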
ServerGroupAntiAffinityFilter¶
The ServerGroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must create a server group with an anti-affinity policy, and pass a scheduler hint, using group as the key and the server group UUID as the value. Using the nova command-line client, use the --hint flag.
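A sketch of the workflow (the group name, IMAGE_ID, and SERVER_GROUP_UUID are placeholders; depending on the client version, the policy may instead be passed positionally, as in nova server-group-create group-1 anti-affinity):
# group name, IMAGE_ID, and SERVER_GROUP_UUID are placeholders
$ nova server-group-create --policy anti-affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1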
SimpleCIDRAffinityFilter¶
Schedules the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP addresses in CIDR format, by passing two scheduler hints:
build_near_host_ip
The first IP address in the subnet (for example, 192.168.1.1)
cidr
The CIDR that corresponds to the subnet (for example, /24)
Using the nova command-line client, use the --hint flag, for example to specify the IP subnet 192.168.1.1/24.
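A sketch of such a call (the image, flavor, and server name are illustrative; the CIDR suffix is typically passed with a leading slash):
# image, flavor, and server name are example values
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
  --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1
With the API, use the os:scheduler_hints key. For example: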
{
"server": {
"name": "server-1",
"imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
"flavorRef": "1"
},
"os:scheduler_hints": {
"build_near_host_ip": "192.168.1.1",
"cidr": "24"
}
}
TrustedFilter¶
Filters hosts based on their trust. Only passes hosts that meet the trust requirements specified in the instance properties.
TypeAffinityFilter¶
Dynamically limits hosts to one instance type. An instance can only be launched on a host if no instances with a different instance type are running on it, or if the host has no running instances at all.
Weights¶
When resourcing instances, the filter scheduler filters and weights each host in the list of acceptable hosts. Each time the scheduler selects a host, it virtually consumes resources on it, and subsequent selections are adjusted accordingly. This process is useful when the customer asks for a large number of instances, because a weight is computed for each requested instance.
All weights are normalized before being summed up; the host with the largest weight is given the highest priority.
(Figure: Weighting hosts)
If cells are used, cells are weighted by the scheduler in the same manner as hosts.
Hosts and cells are weighted based on options in the /etc/nova/nova.conf file. For the [metrics] section, the required and weight_of_unavailable options interact as follows:
[metrics] required
True - Raises an exception. To avoid the raised exception, you should use the scheduler filter MetricsFilter to filter out hosts with unavailable meters.
False - Treated as a negative factor in the weighting process (uses the weight_of_unavailable option).
[metrics] weight_of_unavailable
If required is set to False, and any one of the meters set by weight_setting is unavailable, the weight_of_unavailable value is returned to the scheduler.
For example:
[DEFAULT]
scheduler_host_subset_size = 1
scheduler_weight_classes = nova.scheduler.weights.all_weighers
ram_weight_multiplier = 1.0
io_ops_weight_multiplier = 2.0
soft_affinity_weight_multiplier = 1.0
soft_anti_affinity_weight_multiplier = 1.0
[metrics]
weight_multiplier = 1.0
weight_setting = name1=1.0, name2=-1.0
required = false
weight_of_unavailable = -10000.0
Cells are weighted based on options in the [cells] section of the /etc/nova/nova.conf file. For example:
[cells]
scheduler_weight_classes = nova.cells.weights.all_weighers
mute_weight_multiplier = -10.0
ram_weight_multiplier = 1.0
offset_weight_multiplier = 1.0
Chance scheduler¶
As an administrator, you work with the filter scheduler. However, the Compute service also uses the Chance Scheduler, nova.scheduler.chance.ChanceScheduler, which randomly selects from lists of filtered hosts.
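One way to select it instead of the filter scheduler, assuming the full class path form of the scheduler_driver option, is:
[DEFAULT]
# sketch: select the chance scheduler by its class path
scheduler_driver = nova.scheduler.chance.ChanceScheduler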
Host aggregates and availability zones¶
Host aggregates are a mechanism for partitioning hosts in an OpenStack cloud, or a region of an OpenStack cloud, based on arbitrary characteristics.
Host aggregates are not explicitly exposed to users. Instead, administrators map flavors to host aggregates. Administrators do this by setting metadata on a host aggregate, and matching flavor extra specifications. The scheduler then endeavors to match user requests for instances of the given flavor to a host aggregate with the same key-value pair in its metadata. Compute nodes can be in more than one host aggregate.
Administrators are able to optionally expose a host aggregate as an availability zone. Availability zones are different from host aggregates in that they are explicitly exposed to the user, and hosts can only be in a single availability zone. Administrators can configure a default availability zone where instances will be scheduled when the user fails to specify one.
Command-line interface¶
The nova command-line client supports the following aggregate-related commands.
nova aggregate-list
Print a list of all aggregates.
nova aggregate-create <name> [availability-zone]
Create a new aggregate named <name>, and optionally in availability zone [availability-zone] if specified. The command returns the ID of the newly created aggregate. Hosts can be made available to multiple host aggregates. Be careful when adding a host to an additional host aggregate when the host is also in an availability zone. Pay attention when using the aggregate-set-metadata and aggregate-update commands to avoid user confusion when they boot instances in different availability zones. An error occurs if you cannot add a particular host to an aggregate zone for which it is not intended.
nova aggregate-delete <id>
Delete an aggregate with id <id>.
nova aggregate-details <id>
Show details of the aggregate with id <id>.
nova aggregate-add-host <id> <host>
Add host with name <host> to aggregate with id <id>.
nova aggregate-remove-host <id> <host>
Remove the host with name <host> from the aggregate with id <id>.
nova aggregate-set-metadata <id> <key=value> [<key=value> ...]
Add or update metadata (key-value pairs) associated with the aggregate with id <id>.
nova aggregate-update <id> <name> [<availability_zone>]
Update the name and availability zone (optional) for the aggregate.
nova host-list
List all hosts by service.
nova host-update --maintenance [enable | disable]
Put the host into, or take it out of, maintenance mode.
Note
Only administrators can access these commands. If you try to use these commands and the user name and tenant that you use to access the Compute
service do not have the admin role or the appropriate privileges, these errors occur:
ERROR: Policy doesn't allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b48
ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1)
Configure scheduler to support host aggregates¶
To configure the scheduler to support host aggregates, the scheduler_default_filters configuration option must contain the AggregateInstanceExtraSpecsFilter in addition to the other filters used by the scheduler. Add the following line to /etc/nova/nova.conf on the host that runs the nova-scheduler service to enable host aggregates filtering, as well as the other filters that are typically enabled:
scheduler_default_filters=AggregateInstanceExtraSpecsFilter,RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
Example: Specify compute hosts with SSDs¶
Use the nova flavor-create command to create the ssd.large flavor with an ID of 6, 8 GB of RAM, an 80 GB root disk, and four vCPUs.
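For example (nova flavor-create takes the flavor name, ID, RAM in MB, disk in GB, and number of vCPUs):
$ nova flavor-create ssd.large 6 8192 80 4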
Once the flavor is created, specify one or more key-value pairs that match the key-value pairs on the host aggregates with scope aggregate_instance_extra_specs. In this case, that is the aggregate_instance_extra_specs:ssd=true key-value pair. Setting a key-value pair on a flavor is done using the nova flavor-key command.
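For example, the key-value pair identified above can be set on the flavor like this:
$ nova flavor-key ssd.large set aggregate_instance_extra_specs:ssd=true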
Once it is set, you should see the extra_specs property of the ssd.large flavor populated with a key of ssd and a corresponding value of true.
Now, when a user requests an instance with the ssd.large flavor, the scheduler only considers hosts with the ssd=true key-value pair. In this example, these are node1 and node2.