All Config Reference Files in Openstack
docs.openstack.org
October 7, 2014
juno
Table of Contents
OpenStack configuration overview ................................................................................ xix
Conventions .......................................................................................................... xix
Document change history ...................................................................................... xx
Configuration file format ....................................................................................... xx
1. Block Storage .............................................................................................................. 1
Introduction to the Block Storage service ................................................................ 1
cinder.conf configuration file ............................................................................. 2
Volume drivers ........................................................................................................ 3
Backup drivers ....................................................................................................... 91
Block Storage sample configuration files ................................................................ 93
Log files used by Block Storage ........................................................................... 133
Fibre Channel Zone Manager .............................................................................. 133
Volume encryption with static key ....................................................................... 135
Additional options ............................................................................................... 139
New, updated and deprecated options in Juno for OpenStack Block Storage ........ 157
2. Compute ................................................................................................................. 163
Overview of nova.conf ........................................................................................ 163
Configure logging ................................................................................................ 165
Configure authentication and authorization ........................................................ 165
Configure resize .................................................................................................. 165
Database configuration ....................................................................................... 166
Configure the Oslo RPC messaging system ........................................................... 166
Configure the Compute API ................................................................................ 169
Configure the EC2 API ......................................................................................... 172
Fibre Channel support in Compute ...................................................................... 172
Hypervisors .......................................................................................................... 172
Scheduling ........................................................................................................... 206
Cells ..................................................................................................................... 222
Conductor ........................................................................................................... 226
Example nova.conf configuration files .............................................................. 227
Compute log files ................................................................................................ 231
Compute sample configuration files ..................................................................... 232
New, updated and deprecated options in Juno for OpenStack Compute ............... 271
3. Dashboard ............................................................................................................... 276
Configure the dashboard ..................................................................................... 276
Customize the dashboard .................................................................................... 280
Additional sample configuration files ................................................................... 281
Dashboard log files ............................................................................................. 292
4. Database Service ..................................................................................................... 294
Configure the database ....................................................................................... 303
Configure the RPC messaging system ................................................................... 307
5. Data processing service ............................................................................................ 311
6. Identity service ........................................................................................................ 319
Caching layer ....................................................................................................... 319
Identity service configuration file ......................................................................... 321
Identity service sample configuration files ............................................................ 338
New, updated and deprecated options in Juno for OpenStack Identity ................. 367
7. Image Service .......................................................................................................... 372
List of Figures
1.1. Ceph architecture ..................................................................................................... 3
1.2. Repository Creation Plan screen ................................................................................ 9
1.3. Local configuration ................................................................................................. 86
1.4. Remote configuration ............................................................................................. 87
2.1. VMware driver architecture .................................................................................. 189
2.2. Filtering ................................................................................................................ 207
2.3. Weighting hosts ................................................................................................... 217
2.4. KVM, Flat, MySQL, and Glance, OpenStack or EC2 API .......................................... 230
2.5. KVM, Flat, MySQL, and Glance, OpenStack or EC2 API .......................................... 231
List of Tables
1.1. Description of Ceph storage configuration options ................................................... 5
1.2. Description of Coraid AoE driver configuration options ............................................. 9
1.3. Description of Dell EqualLogic volume driver configuration options ......................... 11
1.4. Description of GlusterFS storage configuration options ........................................... 20
1.5. Description of HDS HNAS iSCSI and NFS driver configuration options ....................... 21
1.6. Configuration options ............................................................................................. 25
1.7. Description of HDS HUS iSCSI driver configuration options ....................................... 27
1.8. Configuration options ............................................................................................. 29
1.9. Huawei storage driver configuration options .......................................................... 43
1.10. Description of GPFS storage configuration options ................................................ 45
1.11. Volume Create Options for GPFS Volume Drive ..................................................... 46
1.12. List of configuration flags for Storwize storage and SVC driver .............................. 50
1.13. Description of IBM Storwise driver configuration options ....................................... 51
1.14. Description of IBM XIV and DS8000 volume driver configuration options ............... 54
1.15. Description of LVM configuration options ............................................................. 55
1.16. Description of NetApp cDOT iSCSI driver configuration options ............................. 56
1.17. Description of NetApp cDOT NFS driver configuration options ............................... 57
1.18. Description of extra specs options for NetApp Unified Driver with Clustered Data
ONTAP .......................................................................................................................... 60
1.19. Description of NetApp 7-Mode iSCSI driver configuration options .......................... 62
1.20. Description of NetApp 7-Mode NFS driver configuration options ........................... 63
1.21. Description of NetApp E-Series driver configuration options .................................. 65
1.22. Description of Nexenta iSCSI driver configuration options ...................................... 68
1.23. Description of Nexenta NFS driver configuration options ....................................... 69
1.24. Description of NFS storage configuration options .................................................. 70
1.25. Description of ProphetStor Fibre Channel and iSCSi drivers configuration options
...................................................................................................................................... 73
1.26. Description of SolidFire driver configuration options .............................................. 77
1.27. Description of VMware configuration options ....................................................... 78
1.28. Extra spec entry to VMDK disk file type mapping .................................................. 79
1.29. Extra spec entry to clone type mapping ................................................................ 79
1.30. Description of Windows configuration options ...................................................... 83
1.31. Description of Xen storage configuration options .................................................. 87
1.32. Description of Zadara Storage driver configuration options ................................... 88
1.33. Description of ZFS Storage Appliance iSCSI driver configuration options ................. 90
1.34. Description of Ceph backup driver configuration options ....................................... 91
1.35. Description of IBM Tivoli Storage Manager backup driver configuration options
...................................................................................................................................... 92
1.36. Description of Swift backup driver configuration options ....................................... 93
1.37. Log files used by Block Storage services ............................................................... 133
1.38. Description of zoning configuration options ........................................................ 133
1.39. Description of zoning manager configuration options ......................................... 134
1.40. Description of zoning fabrics configuration options ............................................. 134
1.41. Description of authorization token configuration options .................................... 139
1.42. Description of Huawei storage driver configuration options ................................. 140
1.43. Description of NAS configuration options ............................................................ 141
1.44. Description of HP MSA Fiber Channel driver configuration options ....................... 141
1.45. Description of Nimble driver configuration options .............................................. 141
List of Examples
1.1. Default (single-instance) configuration .................................................................... 11
3.1. Before .................................................................................................................. 279
3.2. After .................................................................................................................... 279
Conventions
The OpenStack documentation uses several typesetting conventions.
Notices
Notices take these forms:
Note
A handy tip or reminder.
Important
Something you must be aware of before proceeding.
Warning
Critical information about the risk of data loss or security issues.
Command prompts
$ prompt
Any user, including the root user, can run commands that are prefixed with
the $ prompt.
# prompt
The root user must run commands that are prefixed with the # prompt. You can
also prefix these commands with the sudo command, if available, to run them.
Document change history
Update for Icehouse: Updated all configuration tables, included sample configuration files,
and added chapters for Database Service, Orchestration, and Telemetry.
January 9, 2014: Removes content addressed in installation, merges duplicated content, and
revises legacy references.
Havana release: Moves Block Storage driver configuration information from the Block
Storage Administration Guide to this reference.
Configuration file format
Options can have different types of values. The comments in the sample configuration files
always mention these. The following types are used by OpenStack:
boolean value
Enables or disables a feature; the value is either true or false.
integer value
A whole number, such as a port number.
list value
A comma-separated list of values.
multi valued
A string value that can be specified more than once; the option can appear multiple
times in the configuration file.
string value
Free-form text, optionally enclosed in quotation marks.
Sections
Configuration options are grouped by section. Most configuration files support at least the
following sections:
[DEFAULT]
Contains most configuration options. If the documentation for a configuration option does not specify its section, assume that it appears in this section.
[database]
Configuration options for the database that stores the state of the OpenStack service.
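For example, a service configuration file typically combines both sections as in this minimal sketch (the connection string and credentials are placeholder assumptions, not values from this document):

```ini
[DEFAULT]
# Most options, including logging verbosity, live in the [DEFAULT] section.
verbose = True

[database]
# Connection string for the database that stores the service state
# (host, user, and password here are placeholders).
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
```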
Substitution
The configuration file supports variable substitution. After you set a configuration option,
it can be referenced in later configuration values when you precede it with a $, like $OPTION.
The following example uses the values of rabbit_host and rabbit_port to define the
value of the rabbit_hosts option, in this case as controller:5672.
# The RabbitMQ broker address where a single node is used.
# (string value)
rabbit_host = controller
# The RabbitMQ broker port where a single node is used.
# (integer value)
rabbit_port = 5672
# RabbitMQ HA cluster host:port pairs. (list value)
rabbit_hosts = $rabbit_host:$rabbit_port
To avoid substitution, use $$; it is replaced by a single $. For example, if your LDAP DNS
password is $xkj432, specify it as follows:
ldap_dns_password = $$xkj432
The code uses the Python string.Template.safe_substitute() method to implement variable substitution. For more details on how variable substitution is resolved, see
http://docs.python.org/2/library/string.html#template-strings and PEP 292.
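The resolution behavior described above can be reproduced directly with the standard library, as in this sketch (the option names mirror the rabbit_* example; this is an illustration of string.Template semantics, not the actual oslo.config code path):

```python
from string import Template

# Values as they would appear earlier in the configuration file.
values = {"rabbit_host": "controller", "rabbit_port": "5672"}

# $OPTION references are resolved from previously set options.
hosts = Template("$rabbit_host:$rabbit_port").safe_substitute(values)
print(hosts)  # controller:5672

# $$ escapes to a literal $, so a password such as $xkj432 survives intact.
password = Template("$$xkj432").safe_substitute(values)
print(password)  # $xkj432
```

safe_substitute, unlike substitute, leaves unknown placeholders in place instead of raising KeyError, which is why an unresolvable reference in a configuration value does not crash the service.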
Whitespace
To include whitespace in a configuration value, use a quoted string. For example:
ldap_dns_password='a password with spaces'
1. Block Storage
Table of Contents
Introduction to the Block Storage service ........................................................................ 1
cinder.conf configuration file ..................................................................................... 2
Volume drivers ................................................................................................................ 3
Backup drivers ............................................................................................................... 91
Block Storage sample configuration files ........................................................................ 93
Log files used by Block Storage ................................................................................... 133
Fibre Channel Zone Manager ...................................................................................... 133
Volume encryption with static key ............................................................................... 135
Additional options ....................................................................................................... 139
New, updated and deprecated options in Juno for OpenStack Block Storage ................ 157
The OpenStack Block Storage service works with many different storage drivers that you
can configure by using these instructions.
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper=tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
auth_strategy = keystone
#osapi_volume_listen_port=5900
# Add these when not using the defaults.
rabbit_host = 10.10.10.10
rabbit_port = 5672
rabbit_userid = rabbit
rabbit_password = secure_password
rabbit_virtual_host = /nova
Volume drivers
To use different volume drivers for the cinder-volume service, use the parameters described in these sections.
The volume drivers are included in the Block Storage repository (https://github.com/openstack/cinder). To set a volume driver, use the volume_driver flag. The default is:
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
Figure 1.1. Ceph architecture
RADOS
Ceph is based on RADOS: Reliable Autonomic Distributed Object Store. RADOS distributes
objects across the storage cluster and replicates objects for fault tolerance. RADOS contains
the following major components:
Object Storage Device (OSD) Daemon. The storage daemon for the RADOS service, which
interacts with the OSD (physical or logical storage unit for your data).
You must run this daemon on each server in your cluster. For each OSD, you can have an
associated hard disk drive. For performance purposes, pool your hard disk drives with RAID
arrays, logical volume management (LVM), or B-tree file system (Btrfs) pooling. By default, the following pools are created: data, metadata, and RBD.
Meta-Data Server (MDS). Stores metadata. MDSs build a POSIX file system on top of objects for Ceph clients. However, if you do not use the Ceph file system, you do not need a
metadata server.
Monitor (MON). A lightweight daemon that handles all communications with external
applications and clients. It also provides a consensus for distributed decision making in a
Ceph/RADOS cluster. For instance, when you mount a Ceph share on a client, you point
to the address of a MON server. It checks the state and the consistency of the data. In an
ideal setup, you must run at least three ceph-mon daemons on separate servers.
Ceph developers recommend that you use Btrfs as a file system for storage. XFS is an
excellent alternative to Btrfs and might be a better choice for production environments.
The ext4 file system is also compatible but does not exploit the full power of Ceph.
Note
If using Btrfs, ensure that you use the correct version (see Ceph Dependencies).
For more information about usable file systems, see ceph.com/ceph-storage/file-system/.
RBD and QEMU-RBD. Linux kernel and QEMU block devices that stripe data across multiple objects.
Driver options
The following table contains the configuration options supported by the Ceph RADOS
Block Device driver.
Table 1.1. Description of Ceph storage configuration options
[DEFAULT]
rados_connect_timeout = -1
(IntOpt) Timeout value (in seconds) used when connecting to the Ceph cluster. If the value is less than 0, no timeout is set and the default librados value is used.
rbd_ceph_conf =
(StrOpt) Path to the Ceph configuration file.
rbd_flatten_volume_from_snapshot = False
(BoolOpt) Flatten volumes created from snapshots to remove dependency from volume to snapshot.
rbd_max_clone_depth = 5
(IntOpt) Maximum number of nested volume clones that are taken before a flatten occurs. Set to 0 to disable cloning.
rbd_pool = rbd
(StrOpt) The RADOS pool where rbd volumes are stored.
rbd_secret_uuid = None
(StrOpt) The libvirt uuid of the secret for the rbd_user volumes.
rbd_store_chunk_size = 4
(IntOpt) Volumes will be chunked into objects of this size (in megabytes).
rbd_user = None
(StrOpt) The RADOS client name for accessing rbd volumes; only set when using cephx authentication.
volume_tmp_dir = None
(StrOpt) Directory where temporary image files are stored when the volume driver does not write them directly to the volume.
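Putting these options together, a minimal RBD back end in cinder.conf might look like the following sketch (the pool name, client user, and secret UUID are placeholder assumptions for a typical deployment, not values from this document):

```ini
[DEFAULT]
# Select the Ceph RADOS Block Device driver.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
# RADOS pool that holds Cinder volumes (placeholder name).
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
# Client name and libvirt secret for cephx authentication (placeholders).
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```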
Supported operations
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Copy an image to a volume.
Copy a volume to an image.
Clone a volume.
Get volume statistics.
This document describes how to configure the OpenStack Block Storage service for use with
Coraid storage appliances.
Terminology
These terms are used in this section:
Term
Definition
AoE
ATA-over-Ethernet protocol.
ESM
Provides live monitoring and management of EtherDrive appliances that use the AoE protocol, such as the SRX and VSX.
SAN
Storage Area Network.
SRX
Coraid EtherDrive SRX block storage appliance.
VSX
Coraid EtherDrive VSX storage virtualization appliance.
Requirements
To support the OpenStack Block Storage service, your SAN must include an SRX for physical
storage, a VSX running at least CorOS v2.0.6 for snapshot support, and an ESM running at
least v2.1.1 for storage repository orchestration. Ensure that all storage appliances are installed and connected to your network before you configure OpenStack volumes.
In order for the node to communicate with the SAN, you must install the Coraid AoE Linux
driver on each Compute node on the network that runs an OpenStack instance.
Overview
To configure the OpenStack Block Storage for use with Coraid storage appliances, perform
the following procedures:
1.
2.
3.
Create a storage repository by using the ESM GUI and record the FQRN.
4.
5.
2.
3.
4.
5.
Optionally, specify the Ethernet interfaces that the node can use to communicate with
the SAN.
The AoE driver may use every Ethernet interface available to the node unless limited
with the aoe_iflist parameter. For more information about the aoe_iflist parameter, see the aoe readme file included with the AoE driver.
# modprobe aoe aoe_iflist="eth1 eth2 ..."
2.
3.
Choose Menu > Create Storage Profile. If the option is unavailable, you might not
have appropriate permissions. Make sure you are logged in to the ESM as the SAN administrator.
4.
5.
Select a RAID type (if more than one is available) for the selected profile type.
6.
7.
8.
Select the number of drives to be initialized for each RAID (LUN) from the drop-down
menu (if the selected RAID type requires multiple drives).
9.
Type the number of RAID sets (LUNs) you want to create in the repository by using this
profile.
2.
3.
Note
The reserved space does not include space used for parity or space used
for mirrors. If parity and/or mirrors are required, the actual space allocated to the repository from the SAN is greater than that specified in reserved
space.
Unlimited. Unlimited means that the amount of space allocated to the repository is
unlimited and additional space is allocated to the repository automatically when space
is required and available.
Note
Drives specified in the associated Storage Profile must be available on the
SAN in order to allocate additional resources.
4.
Note
If the Storage Profile associated with the repository has platinum availability, the Resizeable LUN box is automatically checked.
5.
Check the Show Allocation Plan API calls box. Click Next.
6.
The FQRN has the form performance_class-availability_class:profile_name:repository_name.
In this example, the FQRN is Bronze-Platinum:BP1000:OSTest, and is highlighted.
Table 1.2. Description of Coraid AoE driver configuration options
[DEFAULT]
coraid_esm_address =
(StrOpt) IP address of the Coraid ESM.
coraid_group = admin
(StrOpt) Name of the group on the ESM to which the user belongs; the group must have admin privileges.
coraid_password = password
(StrOpt) Password to connect to the Coraid ESM.
coraid_repository_key = coraid_repository
(StrOpt) Volume type key name used to store the ESM repository name.
coraid_user = admin
(StrOpt) User name to connect to the Coraid ESM.
Access to storage devices and storage repositories can be controlled using Access Control
Groups configured in ESM. Configuring cinder.conf to log on to ESM as the SAN administrator (user name admin) grants full access to the devices and repositories configured
in ESM.
Optionally, you can configure an ESM Access Control Group and user, and then use the
cinder.conf file to configure access to the ESM through that group and user. This limits
access from the OpenStack instance to the devices and storage repositories that are defined in
the group.
To manage access to the SAN by using Access Control Groups, you must enable the Use Access Control setting in the ESM System Setup > Security screen.
For more information, see the ESM Online Help.
Restart Cinder.
# service openstack-cinder-api restart
# service openstack-cinder-scheduler restart
# service openstack-cinder-volume restart
2.
Create a volume type.
$ cinder type-create volume_type_name
where volume_type_name is the name you assign the volume type. You will see output
similar to the following:
+--------------------------------------+-------------+
|                  ID                  |     Name    |
+--------------------------------------+-------------+
| 7fa6b5ab-3e20-40f0-b773-dd9e16778722 | JBOD-SAS600 |
+--------------------------------------+-------------+
Record the value in the ID field; you use this value in the next step.
3.
Associate the volume type with the storage repository by setting the repository key
on the type:
$ cinder type-key UUID set coraid_repository_key=FQRN
UUID
The ID of the volume type that you created in the previous step.
coraid_repository_key
The key name used to associate the Cinder volume type with the ESM in the cinder.conf file.
If no key name was defined, the default value is coraid_repository.
FQRN
The FQRN of the storage repository created with the ESM storage profile.
Supported operations
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Clone a volume.
The OpenStack Block Storage service supports multiple instances of Dell EqualLogic
Groups or Dell EqualLogic Group Storage Pools, and multiple pools on a single array.
The Dell EqualLogic volume driver's ability to access the EqualLogic Group is dependent upon the generic block storage driver's SSH settings in the /etc/cinder/cinder.conf file
(see the section called Block Storage sample configuration files [93] for reference).
Table 1.3. Description of Dell EqualLogic volume driver configuration options
[DEFAULT]
eqlx_chap_login = admin
(StrOpt) Existing CHAP account name.
eqlx_chap_password = password
(StrOpt) Password for the specified CHAP account name.
eqlx_cli_max_retries = 5
(IntOpt) Maximum retry count for reconnection.
eqlx_cli_timeout = 30
(IntOpt) Timeout for Group Manager CLI command execution.
eqlx_group_name = group-0
(StrOpt) Group name to use for creating volumes.
eqlx_pool = default
(StrOpt) Pool in which volumes are created.
eqlx_use_chap = False
(BoolOpt) Use CHAP authentication for targets.
The following sample /etc/cinder/cinder.conf configuration lists the relevant settings for a typical Block Storage service using a single Dell EqualLogic Group:
san_ssh_port = 22
ssh_conn_timeout = 30
san_private_key = SAN_KEY_PATH
ssh_min_pool_conn = 1
ssh_max_pool_conn = 5
SAN_UNAME
The user name to log in to the Group manager via SSH.
SAN_PW
The corresponding password of SAN_UNAME.
EQLX_GROUP
The group to be used for a pool where the Block Storage service creates volumes and
snapshots.
EQLX_POOL
The pool where the Block Storage service will create volumes and snapshots. Default pool is default. This option cannot be used for multiple pools utilized by the
Block Storage service on a single Dell EqualLogic Group.
EQLX_UNAME
The CHAP login account for each volume in a pool, if eqlx_use_chap is set to true.
EQLX_PW
The corresponding password of EQLX_UNAME. The default password is randomly generated in hexadecimal, so
you must set this password manually.
SAN_KEY_PATH (optional)
The filename of the private key used for SSH authentication. This provides password-less login to the EqualLogic
Group. Not used when san_password is set. There is no
default value.
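Assembled, a single-group configuration using these placeholders might look like the following sketch (the driver path reflects the Juno-era EqualLogic driver; the IP address and all credential values are placeholders to be replaced with site-specific settings):

```ini
[DEFAULT]
# Dell EqualLogic iSCSI driver (Juno-era module path).
volume_driver = cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
# SSH access to the Group manager (placeholder values).
san_ip = IP_EQLX
san_login = SAN_UNAME
san_password = SAN_PW
san_ssh_port = 22
ssh_conn_timeout = 30
# Group and pool used for volumes and snapshots.
eqlx_group_name = EQLX_GROUP
eqlx_pool = EQLX_POOL
# Optional CHAP authentication for targets.
eqlx_use_chap = true
eqlx_chap_login = EQLX_UNAME
eqlx_chap_password = EQLX_PW
```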
System requirements
EMC SMI-S Provider V4.6.2.8 or higher is required. You can download SMI-S from EMC's support web site (login is required). See the EMC SMI-S Provider release notes for installation instructions.
The EMC VMAX family of storage systems is supported.
Supported operations
VMAX drivers support these operations:
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Copy an image to a volume.
Copy a volume to an image.
Clone a volume.
Extend a volume.
Retype a volume.
Create a volume from a snapshot.
VMAX drivers also support the following features:
FAST automated storage tiering policy.
Dynamic masking view creation.
Striped volume creation.
On openSUSE:
# zypper install python-pywbem
On Fedora:
# yum install pywbem
Set up SMI-S
You can install SMI-S on a non-OpenStack host. Supported platforms include different flavors of Windows, Red Hat, and SUSE Linux. SMI-S can be installed on a physical server or a
VM hosted by an ESX server. Note that the supported hypervisor for a VM running SMI-S
is ESX only. See the EMC SMI-S Provider release notes for more information on supported
platforms and installation instructions.
Note
You must discover storage arrays on the SMI-S server before you can use the
VMAX drivers. Follow instructions in the SMI-S release notes.
SMI-S is usually installed at /opt/emc/ECIM/ECOM/bin on Linux and C:\Program Files\EMC\ECIM\ECOM\bin on Windows. After you install and configure SMI-S, go to that directory and type TestSmiProvider.exe.
Use addsys in TestSmiProvider.exe to add an array. Use dv and examine the output after
the array is added. Make sure that the arrays are recognized by the SMI-S server before using the EMC VMAX drivers.
In this example, two backend configuration groups are enabled: CONF_GROUP_ISCSI and CONF_GROUP_FC. Each configuration group has a section describing unique parameters for connections, drivers, the volume_backend_name, and the name of the EMC-specific configuration file containing additional settings. Note that the file name is in the format /etc/cinder/cinder_emc_config_[confGroup].xml.
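A sketch of what such a cinder.conf fragment might look like. The driver module paths are assumptions based on the Juno code layout; the group and backend names follow the text:

```ini
enabled_backends = CONF_GROUP_ISCSI, CONF_GROUP_FC

[CONF_GROUP_ISCSI]
# iSCSI driver for this backend (module path assumed)
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_ISCSI.xml
volume_backend_name = ISCSI_backend

[CONF_GROUP_FC]
# FC driver for this backend (module path assumed)
volume_driver = cinder.volume.drivers.emc.emc_smis_fc.EMCSMISFCDriver
cinder_emc_config_file = /etc/cinder/cinder_emc_config_CONF_GROUP_FC.xml
volume_backend_name = FC_backend
```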
Once the cinder.conf and EMC-specific configuration files have been created, cinder
commands need to be issued in order to create and associate OpenStack volume types with
the declared volume_backend_names:
$ cinder type-create VMAX_ISCSI
$ cinder type-key VMAX_ISCSI set volume_backend_name=ISCSI_backend
$ cinder type-create VMAX_FC
$ cinder type-key VMAX_FC set volume_backend_name=FC_backend
By issuing these commands, the Block Storage volume type VMAX_ISCSI is associated with
the ISCSI_backend, and the type VMAX_FC is associated with the FC_backend.
Restart the cinder-volume service.
Where:
EcomServerIp and EcomServerPort are the IP address and port number of the
ECOM server which is packaged with SMI-S.
EcomUserName and EcomPassword are credentials for the ECOM server.
PortGroups supplies the names of VMAX port groups that have been pre-configured to expose volumes managed by this backend. Each supplied port group should have a sufficient number and distribution of ports (across directors and switches) to ensure adequate bandwidth and failure protection for the volume connections. PortGroups can contain one or more port groups of either iSCSI or FC ports. When a dynamic masking view is created by the VMAX driver, the port group is chosen randomly from the PortGroup list, to evenly distribute load across the set of groups provided. Make sure that the PortGroups set contains either all FC or all iSCSI port groups (for a given backend), as appropriate for the configured driver (iSCSI or FC).
The Array tag holds the unique VMAX array serial number.
The Pool tag holds the unique pool name within a given array. For backends not using
FAST automated tiering, the pool is a single pool that has been created by the administrator. For backends exposing FAST policy automated tiering, the pool is the bind pool to
be used with the FAST policy.
The FastPolicy tag conveys the name of the FAST Policy to be used. By including this
tag, volumes managed by this backend are treated as under FAST control. Omitting the
FastPolicy tag means FAST is not enabled on the provided storage pool.
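Putting the tags above together, a cinder_emc_config_[confGroup].xml file might look like the following sketch. All values are placeholders, and the root element name is an assumption:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<EMC>
  <EcomServerIp>1.1.1.1</EcomServerIp>
  <EcomServerPort>5988</EcomServerPort>
  <EcomUserName>admin</EcomUserName>
  <EcomPassword>password</EcomPassword>
  <PortGroups>
    <PortGroup>OS-PORTGROUP1-PG</PortGroup>
    <PortGroup>OS-PORTGROUP2-PG</PortGroup>
  </PortGroups>
  <Array>111111111111</Array>
  <Pool>FC_GOLD1</Pool>
  <FastPolicy>GOLD1</FastPolicy>
</EMC>
```

Omit the FastPolicy tag, as described above, for a backend whose pool is not under FAST control.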
1.
Install the python-pywbem package for your distribution. See the section called Install the python-pywbem package [13].
2.
Download SMI-S from PowerLink and install it. Add your VMAX arrays to SMI-S. For information, see the section called Set up SMI-S [13] and the SMI-S release notes.
3.
4.
Configure connectivity. For the FC driver, see the section called FC Zoning with VMAX [15]. For the iSCSI driver, see the section called iSCSI with VMAX [15].
Note
Hosts attaching to VMAX storage managed by the OpenStack environment
cannot also be attached to storage on the same VMAX not being managed by
OpenStack. This is due to limitations on VMAX Initiator Group membership.
FA port groups
VMAX array FA ports to be used in a new masking view are chosen from the list provided in
the EMC configuration file.
System requirements
Flare version 5.32 or later.
You must activate the VNX Snapshot and Thin Provisioning licenses for the array. Ensure that all iSCSI ports on the VNX are accessible from the OpenStack hosts.
Navisphere CLI v7.32 or later.
The EMC VNX series of storage systems is supported.
Supported operations
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Install NaviSecCLI
On Ubuntu x64, download the NaviSecCLI deb package from EMC's OpenStack GitHub web
site.
For all the other variants of Linux, download the NaviSecCLI rpm package from EMC's support web site for VNX2 series or VNX1 series. Login is required.
2.
Start the iSCSI service, discover the iSCSI targets, and verify the initiator name and the discovered nodes:
# /etc/init.d/open-iscsi start
# iscsiadm -m discovery -t st -p 10.10.61.35
# cd /etc/iscsi
# more initiatorname.iscsi
# iscsiadm -m node
Log in to VNX from the node using the target corresponding to the SPA port:
# iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.a0 -p 10.10.61.35 -l
Click Register, select CLARiiON/VNX, and enter the host name myhost1 and the IP address of myhost1. Click Register. Now host 1.1.1.1 also appears under Hosts > Host List.
5.
Log in to VNX from the node using the target corresponding to the SPB port:
# iscsiadm -m node -T iqn.1992-04.com.emc:cx.apm01234567890.b8 -p 10.10.10.11 -l
6.
7.
Log out:
# iscsiadm -m node -u
Note
To find out max_luns_per_storage_group for each VNX model, refer to EMC's support web site (login is required).
Restart the cinder-volume service.
2.
The previous example creates two volume types: TypeA and TypeB. For TypeA, storagetype:provisioning is set to thick. Similarly, for TypeB, storagetype:provisioning is set to thin. If storagetype:provisioning is not specified, it defaults to thick.
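The two types described above would typically be created with commands along these lines (a sketch; the type names follow the text):

```console
$ cinder type-create TypeA
$ cinder type-key TypeA set storagetype:provisioning=thick
$ cinder type-create TypeB
$ cinder type-key TypeB set storagetype:provisioning=thin
```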
GlusterFS driver
GlusterFS is an open-source scalable distributed file system that is able to grow to petabytes
and beyond in size. More information can be found on Gluster's homepage.
This driver enables the use of GlusterFS in a similar fashion as NFS. It supports basic volume
operations, including snapshot/clone.
Note
You must use a Linux kernel of version 3.4 or greater (or version 2.6.32 or
greater in Red Hat Enterprise Linux/CentOS 6.3+) when working with Gluster-based volumes. See Bug 1177103 for more information.
To use Block Storage with GlusterFS, first set the volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
The following table contains the configuration options supported by the GlusterFS driver.
[DEFAULT]
glusterfs_mount_point_base = $state_path/mnt
glusterfs_qcow2_volumes = False
glusterfs_shares_config = /etc/cinder/glusterfs_shares
glusterfs_sparsed_volumes = True
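The glusterfs_shares_config option points at a plain-text file that lists one Gluster share per line. A minimal sketch, in which the host address and volume name are placeholders:

```
# /etc/cinder/glusterfs_shares
# one HOST:/VOLUME entry per line
192.168.1.200:/glustervol
```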
System requirements
Use the HDS ssc command to communicate with an HNAS array. This utility package is available in the physical media distributed with the hardware or it can be copied from the SMU
(/usr/local/bin/ssc).
Platform: Ubuntu 12.04 LTS or newer.
Supported operations
The base NFS driver combined with the HNAS driver extensions support these operations:
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Copy an image to a volume.
Copy a volume to an image.
Clone a volume.
Extend a volume.
Get volume statistics.
Configuration
The HDS driver supports the concept of differentiated services (also referred to as quality of service) by mapping volume types to services provided through HNAS. HNAS supports a variety of storage options and file system capabilities, which are selected through volume typing and the use of multiple back-ends. The HDS driver maps up to four volume types into separate exports/file systems, and can support any number using multiple back-ends.
Configuration is read from an XML-formatted file (one per back-end). Examples are shown for the single and multi back-end cases.
Note
The default volume type needs to be set in the configuration file. If there is no default volume type, only matching volume types will work.
[DEFAULT]
hds_hnas_iscsi_config_file = /opt/hds/hnas/cinder_iscsi_conf.xml
hds_hnas_nfs_config_file = /opt/hds/hnas/cinder_nfs_conf.xml
HNAS setup
Before using the iSCSI and NFS services, use the HNAS Web Interface to create storage pools and file systems, and to assign an EVS. For NFS, NFS exports should be created. For iSCSI, a SCSI Domain needs to be set.
Single back-end
In a single back-end deployment, only one OpenStack Block Storage instance runs on the
OpenStack Block Storage server and controls one HNAS array: this deployment requires
these configuration files:
1. Set the hds_hnas_iscsi_config_file option in the /etc/cinder/cinder.conf file to use the HNAS iSCSI volume driver, or hds_hnas_nfs_config_file to use the HNAS NFS driver. This option points to a configuration file.
For HNAS iSCSI driver:
volume_driver = cinder.volume.drivers.hds.iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /opt/hds/hnas/cinder_iscsi_conf.xml
For HNAS NFS, configure hds_hnas_nfs_config_file at the location specified previously. For example, /opt/hds/hnas/cinder_nfs_conf.xml:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
<mgmt_ip0>172.17.44.16</mgmt_ip0>
<hnas_cmd>ssc</hnas_cmd>
<username>supervisor</username>
<password>supervisor</password>
<chap_enabled>False</chap_enabled>
<svc_0>
<volume_type>default</volume_type>
<hdp>172.17.44.100:/virtual-01</hdp>
</svc_0>
</config>
Up to four service stanzas can be included in the XML file, named svc_0, svc_1, svc_2, and svc_3. Additional services can be enabled using multiple back-ends, as described below.
Multi back-end
In a multi back-end deployment, more than one OpenStack Block Storage instance runs on
the same server. In this example, two HNAS arrays are used, possibly providing different
storage performance:
1.
For HNAS iSCSI, configure /etc/cinder/cinder.conf: the hnas1 and hnas2 configuration blocks are created. Set the hds_hnas_iscsi_config_file option to point to a unique configuration file for each block. Set the volume_driver option for each back-end to cinder.volume.drivers.hds.iscsi.HDSISCSIDriver.
enabled_backends=hnas1,hnas2
[hnas1]
volume_driver = cinder.volume.drivers.hds.iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /opt/hds/hnas/cinder_iscsi1_conf.xml
volume_backend_name=hnas-1
[hnas2]
volume_driver = cinder.volume.drivers.hds.iscsi.HDSISCSIDriver
hds_hnas_iscsi_config_file = /opt/hds/hnas/cinder_iscsi2_conf.xml
volume_backend_name=hnas-2
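Each back-end is then typically exposed through a volume type keyed to its volume_backend_name (a sketch; the type names here are illustrative):

```console
$ cinder type-create hnas-iscsi-1
$ cinder type-key hnas-iscsi-1 set volume_backend_name=hnas-1
$ cinder type-create hnas-iscsi-2
$ cinder type-key hnas-iscsi-2 set volume_backend_name=hnas-2
```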
2.
3.
1.
For NFS, configure /etc/cinder/cinder.conf: the hnas1 and hnas2 configuration blocks are created. Set the hds_hnas_nfs_config_file option to point to a unique configuration file for each block. Set the volume_driver option for each back-end to cinder.volume.drivers.hds.nfs.HDSNFSDriver.
enabled_backends=hnas1,hnas2
[hnas1]
volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /opt/hds/hnas/cinder_nfs1_conf.xml
volume_backend_name=hnas-1
[hnas2]
volume_driver = cinder.volume.drivers.hds.nfs.HDSNFSDriver
hds_hnas_nfs_config_file = /opt/hds/hnas/cinder_nfs2_conf.xml
volume_backend_name=hnas-2
2.
3.
In each configuration file, you must define the default volume type in the service labels.
hdp
iscsi_ip
Typically an OpenStack Block Storage volume instance has only one such service label. For example, any of svc_0, svc_1, svc_2, or svc_3 can be associated with it. But any mix of these service labels can be used in the same instance.
Table 1.6. Configuration options

mgmt_ip0 (Required)
hnas_cmd (Optional; default: ssc)
chap_enabled (Optional; default: True): (iSCSI only) chap_enabled is a boolean tag used to enable the CHAP authentication protocol.
username (Required; default: supervisor)
password (Required; default: supervisor)
svc_0, svc_1, svc_2, svc_3 (Optional; at least one label has to be defined): Service labels: these four predefined names hold four different sets of configuration options. Each can specify an HDP and a unique volume type.
volume_type (Required; default: default)
iscsi_ip (Required)
hdp (Required): For HNAS iSCSI, the virtual filesystem label; for HNAS NFS, the path where the volume or snapshot should be created.
System requirements
Use the HDS hus-cmd command to communicate with an HUS array. You can download
this utility package from the HDS support site (https://hdssupport.hds.com/).
Platform: Ubuntu 12.04 LTS or newer.
Supported operations
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Copy an image to a volume.
Copy a volume to an image.
Clone a volume.
Extend a volume.
Get volume statistics.
Configuration
The HDS driver supports the concept of differentiated services, where a volume type can be associated with the fine-tuned performance characteristics of an HDP, the dynamic pool where volumes are created. (Do not confuse differentiated services with the OpenStack Block Storage volume services.) For instance, an HDP can consist of fast SSDs to provide speed. An HDP can also provide a certain reliability based on characteristics such as its RAID level. The HDS driver maps a volume type to the volume_type option in its configuration file.
Configuration is read from an XML-format file. Examples are shown for the single and multi back-end cases.
Note
It is not recommended to manage an HUS array simultaneously from multiple OpenStack Block Storage instances or servers. It is, however, okay to manage multiple HUS arrays by using multiple OpenStack Block Storage instances (or servers).
[DEFAULT]
hds_cinder_config_file = /opt/hds/hus/cinder_hus_conf.xml
HUS setup
Before using iSCSI services, use the HUS UI to create an iSCSI domain for each EVS providing
iSCSI services.
Single back-end
In a single back-end deployment, only one OpenStack Block Storage instance runs on the
OpenStack Block Storage server and controls one HUS array: this deployment requires these
configuration files:
1. Set the hds_cinder_config_file option in the /etc/cinder/cinder.conf file to use the HDS volume driver. This option points to a configuration file.
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hds_conf.xml
Multi back-end
In a multi back-end deployment, more than one OpenStack Block Storage instance runs on
the same server. In this example, two HUS arrays are used, possibly providing different storage performance:
2.
Configure /opt/hds/hus/cinder_hus1_conf.xml:
<?xml version="1.0" encoding="UTF-8" ?>
<config>
<mgmt_ip0>172.17.44.16</mgmt_ip0>
<mgmt_ip1>172.17.44.17</mgmt_ip1>
<hus_cmd>hus-cmd</hus_cmd>
<username>system</username>
<password>manager</password>
<svc_0>
<volume_type>regular</volume_type>
<iscsi_ip>172.17.39.132</iscsi_ip>
<hdp>9</hdp>
</svc_0>
<snapshot>
<hdp>13</hdp>
</snapshot>
<lun_start>
3000
</lun_start>
<lun_end>
4000
</lun_end>
</config>
3.
Configure /opt/hds/hus/cinder_hus2_conf.xml (only the tail of this example file is shown):
2000
</lun_start>
<lun_end>
3000
</lun_end>
</config>
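Mirroring the HNAS multi back-end pattern shown earlier, the corresponding /etc/cinder/cinder.conf for two HUS back-ends might be sketched as follows. The section names, backend names, and second configuration file name are illustrative assumptions:

```ini
enabled_backends = hus1, hus2

[hus1]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hus1_conf.xml
volume_backend_name = hus-1

[hus2]
volume_driver = cinder.volume.drivers.hds.hds.HUSDriver
hds_cinder_config_file = /opt/hds/hus/cinder_hus2_conf.xml
volume_backend_name = hus-2
```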
Table 1.8. Configuration options

mgmt_ip0 (Required): Management Port 0 IP address.
mgmt_ip1 (Required)
hus_cmd (Optional)
username (Optional)
password (Optional)
svc_0, svc_1, svc_2, svc_3 (Optional; at least one label has to be defined): Service labels: these four predefined names hold four different sets of configuration options; each can specify an iSCSI port address, an HDP, and a unique volume type.
snapshot (Required): A service label which helps specify the configuration for snapshots, such as the HDP.
volume_type (Required): The volume_type tag is used to match the volume type. default matches any volume_type, or a request where volume_type is not specified. Any other volume_type is selected only if exactly matched during create_volume.
iscsi_ip (Required): iSCSI port IP address where the volume attaches for this volume type.
hdp (Required)
lun_start (Optional)
lun_end (Optional; default: 4096)
System requirements
To use the HP 3PAR drivers, install the following software and components on the HP 3PAR
storage system:
HP 3PAR Operating System software version 3.1.3 MU1 or higher
HP 3PAR Web Services API Server must be enabled and running
One Common Provisioning Group (CPG)
Additionally, you must install the hp3parclient version 3.1.0 or newer from the Python Package Index (PyPI) on the system with the enabled Block Storage service volume drivers.
Supported operations
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Copy an image to a volume.
Copy a volume to an image.
Clone a volume.
Extend a volume.
Migrate a volume with back-end assistance.
Volume type support for both HP 3PAR drivers includes the ability
to set the following capabilities in the OpenStack Block Storage API
cinder.api.contrib.types_extra_specs volume type extra specs extension module:
hp3par:cpg
hp3par:snap_cpg
hp3par:provisioning
hp3par:persona
hp3par:vvs
To work with the default filter scheduler, the key values are case sensitive and scoped with
hp3par:. For information about how to set the key-value pairs and associate them with a
volume type, run the following command:
$ cinder help type-key
Note
Volumes that are cloned only support the extra specs keys cpg, snap_cpg, provisioning, and vvs. The others are ignored. In addition, the comments section of the cloned volume in the HP 3PAR StoreServ storage array is not populated.
If volume types are not used or a particular key is not set for a volume type, the following
defaults are used:
hp3par:cpg - Defaults to the hp3par_cpg setting in the cinder.conf file.
hp3par:snap_cpg - Defaults to the hp3par_snap setting in the cinder.conf file. If
hp3par_snap is not set, it defaults to the hp3par_cpg setting.
hp3par:provisioning - Defaults to thin provisioning, the valid values are thin and
full.
hp3par:persona - Defaults to the 2 - Generic-ALUA persona. The valid values are: 1 - Generic, 2 - Generic-ALUA, 6 - Generic-legacy, 7 - HPUX-legacy, 8 - AIX-legacy, 9 - EGENERA, 10 - ONTAP-legacy, 11 - VMware, 12 - OpenVMS, 13 - HPUX, and 15 - WindowsServer.
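For example, a volume type carrying some of these keys might be created as follows (a sketch; the type name and chosen values are illustrative):

```console
$ cinder type-create 3par-gold
$ cinder type-key 3par-gold set hp3par:provisioning=full "hp3par:persona=1 - Generic"
```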
QoS support for both HP 3PAR drivers includes the ability to set the following capabilities in
the OpenStack Block Storage API cinder.api.contrib.qos_specs_manage qos specs
extension module:
minBWS
maxBWS
minIOPS
maxIOPS
latency
priority
The QoS keys above no longer need to be scoped, but they must be created and associated with a volume type. For information about how to set the key-value pairs and associate them with a volume type, run the following commands:
$ cinder help qos-create
$ cinder help qos-key
$ cinder help qos-associate
The following keys require that the HP 3PAR StoreServ storage array have a Priority Optimization license installed.
hp3par:vvs - The virtual volume set name that has been predefined by the Administrator with Quality of Service (QoS) rules associated to it. If you specify extra_specs
hp3par:vvs, the qos_specs minIOPS, maxIOPS, minBWS, and maxBWS settings are ignored.
minBWS - The QoS I/O issue bandwidth minimum goal in MBs. If not set, the I/O issue
bandwidth rate has no minimum goal.
maxBWS - The QoS I/O issue bandwidth rate limit in MBs. If not set, the I/O issue bandwidth rate has no limit.
minIOPS - The QoS I/O issue count minimum goal. If not set, the I/O issue count has no
minimum goal.
maxIOPS - The QoS I/O issue count rate limit. If not set, the I/O issue count rate has no
limit.
latency - The latency goal in milliseconds.
priority - The priority of the QoS rule over other rules. If not set, the priority is normal,
valid values are low, normal and high.
Note
Since the Icehouse release, minIOPS and maxIOPS must be used together to set
I/O limits. Similarly, minBWS and maxBWS must be used together. If only one is
set the other will be set to the same value.
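A sketch of creating such a QoS spec and attaching it to a volume type (the spec name, values, and IDs are illustrative placeholders):

```console
$ cinder qos-create high-io maxIOPS=1000 minIOPS=100 priority=high
$ cinder qos-associate QOS_SPEC_ID VOLUME_TYPE_ID
```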
1.
Install the hp3parclient Python package on the OpenStack Block Storage system.
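For example, using pip (the version pin is an assumption that follows the requirement stated above):

```console
# pip install 'hp3parclient>=3.1,<4.0'
```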
2.
Verify that the HP 3PAR Web Services API server is enabled and running on the HP 3PAR storage system.
a.
b.
c.
3.
or
# setwsapi -https enable
Note
To stop the Web Services API Server, use the stopwsapi command. For other options, run the setwsapi -h command.
4.
If you are not using an existing CPG, create a CPG on the HP 3PAR storage system to be used as the default location for creating volumes.
5.
Note
You can enable only one driver on each cinder instance unless you enable
multiple back-end support. See the Cinder multiple back-end support instructions to enable this feature.
Note
You can configure one or more iSCSI addresses by using the hp3par_iscsi_ips option. When you configure multiple addresses, the driver selects the iSCSI port with the fewest active volumes at attach time. The IP address might include an IP port, using a colon (:) to separate the address from the port. If you do not define an IP port, the default port 3260 is used. Separate IP addresses with a comma (,). The iscsi_ip_address/iscsi_port options might be used as an alternative to hp3par_iscsi_ips for single-port iSCSI configuration.
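For instance (the addresses are placeholders; the second entry shows an explicit port):

```ini
hp3par_iscsi_ips = 10.10.220.253,10.10.220.254:3261
```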
6.
Save the changes to the cinder.conf file and restart the cinder-volume service.
The HP 3PAR Fibre Channel and iSCSI drivers are now enabled on your OpenStack system. If
you experience problems, review the Block Storage service log files for errors.
HP LeftHand/StoreVirtual driver
The HPLeftHandISCSIDriver is based on the Block Storage service (Cinder) plug-in architecture. Volume operations are run by communicating with the HP LeftHand/StoreVirtual system over HTTPS or SSH connections. HTTPS communications use the hplefthandclient, which is installed from the Python Package Index (PyPI).
The HPLeftHandISCSIDriver can be configured to run in one of two possible modes,
legacy mode which uses SSH/CLIQ to communicate with the HP LeftHand/StoreVirtual array, or standard mode which uses a new REST client to communicate with the array. No
new functionality has been, or will be, supported in legacy mode. For performance improvements and new functionality, the driver must be configured for standard mode,
the hplefthandclient must be downloaded, and HP LeftHand/StoreVirtual Operating System software version 11.5 or higher is required on the array. To configure the driver in
standard mode, see the section called HP LeftHand/StoreVirtual REST driver standard
mode [35]. To configure the driver in legacy mode, see the section called HP LeftHand/StoreVirtual CLIQ driver legacy mode [38].
For information about how to manage HP LeftHand/StoreVirtual storage systems, see the
HP LeftHand/StoreVirtual user documentation.
System requirements
To use the HP LeftHand/StoreVirtual driver in standard mode, do the following:
Install LeftHand/StoreVirtual Operating System software version 11.5 or higher on the
HP LeftHand/StoreVirtual storage system.
Create a cluster group.
Install the hplefthandclient version 1.0.2 from the Python Package Index on the system
with the enabled Block Storage service volume drivers.
Supported operations
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Copy an image to a volume.
Copy a volume to an image.
Clone a volume.
Extend a volume.
The following keys require that the HP LeftHand/StoreVirtual storage array be configured for:
hplh:ao
hplh:data_pl
If volume types are not used or a particular key is not set for a volume type, the following defaults are used:
hplh:provisioning
hplh:ao
hplh:data_pl
Install the hplefthandclient Python package on the OpenStack Block Storage system.
# pip install 'hplefthandclient>=1.0.2,<2.0'
2.
If you are not using an existing cluster, create a cluster on the HP LeftHand storage system to be used as the cluster for creating volumes.
3.
You can enable only one driver on each cinder instance unless you enable multiple back-end support. See the Cinder multiple back-end support instructions to enable this feature.
If hplefthand_iscsi_chap_enabled is set to true, the driver will associate randomly-generated CHAP secrets with all hosts on the HP LeftHand/StoreVirtual system. OpenStack Compute nodes use these secrets when creating iSCSI connections.
Important
CHAP secrets are passed from OpenStack Block Storage to Compute in
clear text. This communication should be secured to ensure that CHAP secrets are not discovered.
Note
CHAP secrets are added to existing hosts as well as newly-created ones. If
the CHAP option is enabled, hosts will not be able to access the storage
without the generated secrets.
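Enabling this behavior is a single option in the driver's configuration (a sketch; the section name is an illustrative assumption):

```ini
[lefthand-backend]
hplefthand_iscsi_chap_enabled = true
```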
4.
Save the changes to the cinder.conf file and restart the cinder-volume service.
Supported operations
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Copy an image to a volume.
Copy a volume to an image.
1.
If you are not using an existing cluster, create a cluster on the HP LeftHand storage system to be used as the cluster for creating volumes.
2.
san_clustername=ClusterLefthand

# LeftHand iSCSI driver
volume_driver=cinder.volume.drivers.san.hp.hp_lefthand_iscsi.HPLeftHandISCSIDriver

## OPTIONAL SETTINGS
# LeftHand provisioning; to disable thin provisioning, set to False.
san_thin_provision=True

# Typically, this parameter is set to False for this driver.
# To configure the CLIQ commands to run locally instead of over ssh,
# set this parameter to True.
san_is_local=False
3.
Save the changes to the cinder.conf file and restart the cinder-volume service.
The HP LeftHand/StoreVirtual driver is now enabled in legacy mode on your OpenStack system. If you experience problems, review the Block Storage service log files for errors.
To configure the VSA
1.
2.
Add server associations on the VSA with the associated CHAP and initiator information. The name should correspond to the hostname of the nova-compute node. For Xen, this is the hypervisor host name. To do this, use either CLIQ or the Centralized Management Console.
Supported operations
OceanStor T series unified storage supports these operations:
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Copy an image to a volume.
Copy a volume to an image.
Clone a volume.
OceanStor Dorado5100 supports these operations:
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Copy an image to a volume.
Copy a volume to an image.
OceanStor Dorado2100 G2 supports these operations:
Create, delete, attach, and detach volumes.
Copy an image to a volume.
Copy a volume to an image.
OceanStor HVS supports these operations:
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Copy an image to a volume.
Copy a volume to an image.
Create a volume from a snapshot.
Clone a volume.
<config>
<Storage>
<Product>T</Product>
<Protocol>iSCSI</Protocol>
<ControllerIP0>x.x.x.x</ControllerIP0>
<ControllerIP1>x.x.x.x</ControllerIP1>
<UserName>xxxxxxxx</UserName>
<UserPassword>xxxxxxxx</UserPassword>
</Storage>
<LUN>
<LUNType>Thick</LUNType>
<StripUnitSize>64</StripUnitSize>
<WriteType>1</WriteType>
<MirrorSwitch>1</MirrorSwitch>
<Prefetch Type="3" value="0"/>
<StoragePool Name="xxxxxxxx"/>
<StoragePool Name="xxxxxxxx"/>
</LUN>
<iSCSI>
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
</iSCSI>
<Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
</config>
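A configuration file such as the one above is normally referenced from cinder.conf; a minimal sketch, assuming the file is saved as /etc/cinder/cinder_huawei_conf.xml (verify the option name and driver class path against your installed release):

```ini
[DEFAULT]
# Path to the Huawei driver XML configuration file shown above
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
# Huawei volume driver class (check the exact path for your release)
volume_driver = cinder.volume.drivers.huawei.HuaweiVolumeDriver
```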
<ControllerIP1>x.x.x.x</ControllerIP1>
<UserName>xxxxxxxx</UserName>
<UserPassword>xxxxxxxx</UserPassword>
</Storage>
<LUN>
<LUNType>Thick</LUNType>
<WriteType>1</WriteType>
<MirrorSwitch>1</MirrorSwitch>
</LUN>
<iSCSI>
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
</iSCSI>
<Host OSType="Linux" HostIP="x.x.x.x, x.x.x.x"/>
</config>
Note
You do not need to configure the iSCSI target IP address for the Fibre Channel
driver. In the prior example, delete the iSCSI configuration:
<iSCSI>
<DefaultTargetIP>x.x.x.x</DefaultTargetIP>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
<Initiator Name="xxxxxxxx" TargetIP="x.x.x.x"/>
</iSCSI>
OceanStor HVS storage system supports the QoS function. You must create a QoS policy for
the HVS storage system and create the volume type to enable QoS as follows:
Create volume type: QoS_high
cinder type-create QoS_high
Configure extra_specs for QoS_high:
cinder type-key QoS_high set capabilities:QoS_support="<is> True" drivers:flow_strategy=OpenStack_QoS_high drivers:io_priority=high
Note
OpenStack_QoS_high is a QoS policy created by a user for the HVS storage
system. QoS_high is the self-defined volume type. Set the io_priority option to high, normal, or low.
OceanStor HVS storage system supports the SmartTier function. SmartTier has three tiers.
You can create the volume type to enable SmartTier as follows:
Create volume type: Tier_high
cinder type-create Tier_high
Configure extra_specs for Tier_high:
cinder type-key Tier_high set capabilities:Tier_support="<is> True" drivers:distribute_policy=high drivers:transfer_strategy=high
Note
distribute_policy and transfer_strategy can only be set to high,
normal, or low.
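Once the volume types exist, users select them at volume-creation time. An illustrative sequence, assuming a configured cloud and the type names defined above (display names are examples):

```
# Create a 10 GB volume provisioned under the QoS_high policy
cinder create --volume-type QoS_high --display-name vol_qos 10

# Create a 10 GB volume placed according to the Tier_high SmartTier settings
cinder create --volume-type Tier_high --display-name vol_tier 10
```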
Flag name           Type      Default  Description
Product             Required
Protocol            Required
ControllerIP0       Required
ControllerIP1       Required
HVSURL              Required
UserName            Required
UserPassword        Required           Password of an administrator
LUNType             Optional  Thin
StripUnitSize       Optional  64
WriteType           Optional
MirrorSwitch        Optional
Prefetch Type       Optional
Prefetch Value      Optional
StoragePool         Required
DefaultTargetIP     Optional
Initiator Name      Optional
Initiator TargetIP  Optional
OSType              Optional  Linux
HostIP              Optional
Note
1. You can configure one iSCSI target port for each compute node, or one for all compute nodes. The driver checks whether a target port IP address is configured for the current compute node; if not, it falls back to DefaultTargetIP.
2. You can configure multiple storage pools in one configuration file, which
supports the use of multiple storage pools in a storage system. (HVS allows
configuration of only one storage pool.)
3. For details about LUN configuration information, see the createlun command in the command-line interface (CLI) documentation, or run the help -c createlun command on the storage system CLI.
4. After the driver is loaded, modifications to the driver configuration file take effect in real time; you do not need to restart the cinder-volume service.
attached, network attached, SAN attached, or a combination of these methods. GPFS provides many features beyond common data access, including data replication, policy-based storage management, and space-efficient file snapshot and clone operations.
Note
GPFS software must be installed and running on nodes where Block Storage
and Compute services run in the OpenStack environment. A GPFS file system
must also be created and mounted on these nodes before starting the cinder-volume service. The details of these GPFS specific steps are covered in
GPFS: Concepts, Planning, and Installation Guide and GPFS: Administration and
Programming Reference.
Optionally, the Image Service can be configured to store images on a GPFS file system. When a Block Storage volume is created from an image, if both the image data and the volume data reside in the same GPFS file system, the data from the image file is moved efficiently to the volume file by using a copy-on-write optimization strategy.
The following table contains the configuration options supported by the GPFS driver.
Description
[DEFAULT]
gpfs_images_dir = None
gpfs_images_share_mode = None
gpfs_max_clone_depth = 0
(IntOpt) Specifies an upper limit on the number of indirections required to reach a specific block due to snapshots
or clones. A lengthy chain of copy-on-write snapshots or
clones can have a negative impact on performance, but
improves space utilization. 0 indicates unlimited clone
depth.
Description
gpfs_mount_point_base = None
gpfs_sparse_volumes = True
gpfs_storage_pool = system
(StrOpt) Specifies the storage pool that volumes are assigned to. By default, the system storage pool is used.
Note
The gpfs_images_share_mode flag is only valid if the Image Service is configured to use GPFS with the gpfs_images_dir flag. When
the value of this flag is copy_on_write, the paths specified by the
gpfs_mount_point_base and gpfs_images_dir flags must both reside in
the same GPFS file system and in the same GPFS file set.
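Putting these options together, a minimal cinder.conf sketch for the copy_on_write case might look as follows (the mount paths are examples, and the driver class path should be verified against your release):

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.gpfs.GPFSDriver
# Both paths must reside in the same GPFS file system and file set
gpfs_mount_point_base = /gpfs/openstack/cinder/volumes
gpfs_images_dir = /gpfs/openstack/glance/images
gpfs_images_share_mode = copy_on_write
gpfs_max_clone_depth = 3
gpfs_sparse_volumes = True
gpfs_storage_pool = system
```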
Description
fstype
fslabel
Sets the file system label for the file system specified by
fstype option. This value is only used if fstype is specified.
data_pool_name
replicas
dio
write_affinity_depth
block_group_factor
write_affinity_failure_group
Specifies the range of nodes (in GPFS shared nothing architecture) where replicas of blocks in the volume file are
to be written. See GPFS: Administration and Programming
Reference for more details on this option.
Note
If using iSCSI, ensure that the compute nodes have iSCSI network access to the
Storwize family or SVC system.
Note
iSCSI multipath has been supported since the Grizzly release of OpenStack Compute. Once multipath is configured on the Compute host (outside the scope of this documentation), it is enabled for these volumes.
If using Fibre Channel (FC), each Storwize family or SVC node should have at least one WWPN port configured. If the storwize_svc_multipath_enabled flag is set to True in the
Cinder configuration file, the driver uses all available WWPNs to attach the volume to the
instance (details about the configuration flags appear in the next section). If the flag is not
set, the driver uses the WWPN associated with the volume's preferred node (if available),
otherwise it uses the first available WWPN of the system. The driver obtains the WWPNs directly from the storage system; you do not need to provide these WWPNs directly to the
driver.
Note
If using FC, ensure that the compute nodes have FC connectivity to the Storwize
family or SVC system.
Note
CHAP secrets are added to existing hosts as well as newly-created ones. If the
CHAP option is enabled, hosts will not be able to access the storage without the
generated secrets.
Note
Not all OpenStack Compute drivers support CHAP authentication. Please check
compatibility before using.
Note
CHAP secrets are passed from OpenStack Block Storage to Compute in clear
text. This communication should be secured to ensure that CHAP secrets are
not discovered.
Note
Make sure the compute node running the cinder-volume management driver has SSH network access to the storage system.
To allow the driver to communicate with the Storwize family or SVC system, you must provide the driver with a user on the storage system. The driver has two authentication methods: password-based authentication and SSH key pair authentication. The user should have
an Administrator role. It is suggested to create a new user for the management driver.
Please consult with your storage and security administrator regarding the preferred authentication method and how passwords or SSH keys should be stored in a secure manner.
Note
When creating a new user on the Storwize or SVC system, make sure the user
belongs to the Administrator group or to another group that has an Administrator role.
If using password authentication, assign a password to the user on the Storwize or SVC
system. The driver configuration flags for the user and password are san_login and
san_password, respectively.
If you are using SSH key pair authentication, create SSH private and public keys using the instructions below or by any other method. Associate the public key with the user by uploading it: select the choose file option under SSH public key in the Storwize family or SVC management GUI. Alternatively, you can associate the SSH public key by using the command-line interface; details can be found in the Storwize and SVC documentation. Provide the private key to the driver by using the san_private_key configuration flag.
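The key pair that the following paragraphs describe can be generated with ssh-keygen; a minimal sketch (the filename key is an example):

```shell
# Work in a scratch directory and generate an RSA key pair with an
# empty passphrase, as the driver requires an unencrypted private key.
cd "$(mktemp -d)"
ssh-keygen -q -t rsa -b 2048 -N "" -f ./key
# key     -> private key, referenced by the san_private_key flag
# key.pub -> public key, uploaded to the Storwize family or SVC system
ls key key.pub
```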
The command prompts for a file to save the key pair. For example, if you select 'key' as the
filename, two files are created: key and key.pub. The key file holds the private SSH key
and key.pub holds the public SSH key.
The command also prompts for a pass phrase, which should be empty.
The private key file should be provided to the driver using the san_private_key configuration flag. The public key should be uploaded to the Storwize family or SVC system using
the storage management GUI or command line interface.
Note
Ensure that Cinder has read permissions on the private key file.
Flag name                         Type          Default       Description
san_ip                            Required                    Management IP or host name
san_ssh_port                      Optional      22            Management port
san_login                         Required                    Management login username
san_password                      Required [a]                Management login password
san_private_key                   Required [a]
storwize_svc_volpool_name         Required
storwize_svc_vol_rsize            Optional [b]  2
storwize_svc_vol_warning          Optional      0 (disabled)
storwize_svc_vol_autoexpand       Optional [c]  True
storwize_svc_vol_grainsize        Optional      256
storwize_svc_vol_compression      Optional [d]  False
storwize_svc_vol_easytier         Optional [e]  True
storwize_svc_vol_iogrp            Optional      0
storwize_svc_flashcopy_timeout    Optional [f]  120
storwize_svc_connection_protocol  Optional      iSCSI
storwize_svc_iscsi_chap_enabled   Optional      True
storwize_svc_multipath_enabled    Optional [g]  False
storwize_svc_multihost_enabled    Optional [h]  True
a. Authentication requires either a password (san_password) or an SSH private key (san_private_key). One must be specified. If both are specified, the driver uses only the SSH private key.
b. The driver creates thin-provisioned volumes by default. The storwize_svc_vol_rsize flag defines the initial physical allocation percentage for thin-provisioned volumes, or, if set to -1, the driver creates fully allocated volumes. More details about the available options are available in the Storwize family and SVC documentation.
c. Defines whether thin-provisioned volumes can be auto expanded by the storage system. A value of True means that auto expansion is enabled; a value of False disables auto expansion. Details about this option can be found in the autoexpand flag of the Storwize family and SVC command-line interface mkvdisk command.
d. Defines whether Real-time Compression is used for the volumes created with OpenStack. Details on Real-time Compression can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have compression enabled for this feature to work.
e. Defines whether Easy Tier is used for the volumes created with OpenStack. Details on Easy Tier can be found in the Storwize family and SVC documentation. The Storwize or SVC system must have Easy Tier enabled for this feature to work.
f. The driver wait timeout threshold when creating an OpenStack snapshot. This is actually the maximum amount of time that the driver waits for the Storwize family or SVC system to prepare a new FlashCopy mapping. The driver accepts a maximum wait time of 600 seconds (10 minutes).
g. Multipath for iSCSI connections requires no storage-side configuration and is enabled if the compute host has multipath configured.
h. This option allows the driver to map a vdisk to more than one host at a time. This scenario occurs during migration of a virtual machine with an attached volume; the volume is simultaneously mapped to both the source and destination compute hosts. If your deployment does not require attaching vdisks to multiple hosts, setting this flag to False will provide added safety.
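A minimal cinder.conf sketch tying the flags above together (the address, credentials, and pool name are placeholders, and the driver class path should be verified against your release):

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.ibm.storwize_svc.StorwizeSVCDriver
san_ip = 1.2.3.4
san_login = openstack
san_password = secret            # or san_private_key = /etc/cinder/svc_key
storwize_svc_volpool_name = volpool
storwize_svc_connection_protocol = iSCSI
```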
Description
[DEFAULT]
storwize_svc_allow_tenant_qos = False
storwize_svc_connection_protocol = iSCSI
storwize_svc_flashcopy_timeout = 120
(IntOpt) Maximum number of seconds to wait for FlashCopy to be prepared. Maximum value is 600 seconds (10
minutes)
storwize_svc_iscsi_chap_enabled = True
storwize_svc_multihostmap_enabled = True
storwize_svc_multipath_enabled = False
(BoolOpt) Connect with multipath (FC only; iSCSI multipath is controlled by Nova)
storwize_svc_npiv_compatibility_mode = False
storwize_svc_stretched_cluster_partner = None
(StrOpt) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored. Example: "pool2"
storwize_svc_vol_autoexpand = True
storwize_svc_vol_compression = False
storwize_svc_vol_easytier = True
storwize_svc_vol_grainsize = 256
storwize_svc_vol_iogrp = 0
storwize_svc_vol_rsize = 2
storwize_svc_vol_warning = 0
storwize_svc_volpool_name = volpool
Volume types can be used, for example, to provide users with different:
performance levels (such as allocating entirely on an HDD tier, using Easy Tier for an HDD-SSD mix, or allocating entirely on an SSD tier)
resiliency levels (such as allocating volumes in pools with different RAID levels)
features (such as enabling or disabling Real-time Compression)
Note
To enable this feature, both pools involved in a given volume migration must
have the same values for extent_size. If the pools have different values for
extent_size, the data will still be moved directly between the pools (not
host-side copy), but the operation will be synchronous.
Extend volumes
The IBM Storwize/SVC driver allows for extending a volume's size, but only for volumes
without snapshots.
Volume retype
The IBM Storwize/SVC driver enables you to modify volume types. When you modify volume types, you can also change these extra specs properties:
rsize
warning
autoexpand
grainsize
compression
easytier
iogrp
Note
When you change the rsize, grainsize, or compression properties, volume copies are asynchronously synchronized on the array.
Note
To change the iogrp property, IBM Storwize/SVC firmware version 6.4.0 or later is required.
Description
[DEFAULT]
san_clustername =
san_ip =
san_login = admin
san_password =
xiv_chap = disabled
xiv_ds8k_connection_type = iscsi
xiv_ds8k_proxy = xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy
Note
To use the IBM Storage Driver for OpenStack you must download and install the package available at: http://www.ibm.com/support/fixcentral/swg/selectFixes?parent=Enterprise%2BStorage%2BServers&product=ibm/Storage_Disk/XIV+Storage+System+%282810,+2812%29&release=All&platform=All&function=all
For full documentation refer to IBM's online documentation available at http://pic.dhe.ibm.com/infocenter/strhosts/ic/topic/com.ibm.help.strghosts.doc/nova-homepage.html.
LVM
The default volume back-end uses local volumes managed by LVM.
This driver supports different transport protocols to attach volumes, currently iSCSI and iSER.
Set the following in your cinder.conf, and use the following options to configure for
iSCSI transport:
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
Description
[DEFAULT]
lvm_mirrors = 0
lvm_type = default
volume_group = cinder-volumes
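For example, a complete LVM back-end section using the defaults listed above (for the iSER transport, the LVMISERDriver class would be used instead; verify class names against your release):

```ini
[DEFAULT]
volume_driver = cinder.volume.drivers.lvm.LVMISCSIDriver
# Name of the LVM volume group that backs the volumes
volume_group = cinder-volumes
lvm_type = default
lvm_mirrors = 0
```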
The clustered Data ONTAP configuration does not require additional management software to achieve the desired functionality. It uses NetApp APIs to interact with the clustered Data ONTAP instance.
Configuration options for clustered Data ONTAP family with iSCSI protocol
Configure the volume driver, storage family and storage protocol to the NetApp unified
driver, clustered Data ONTAP, and iSCSI respectively by setting the volume_driver,
netapp_storage_family and netapp_storage_protocol options in
cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of
netapp_storage_protocol with iscsi.
Description
[DEFAULT]
netapp_login = None
netapp_password = None
netapp_server_hostname = None
(StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = 80
netapp_size_multiplier = 1.2
netapp_storage_family = ontap_cluster
(StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP
operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None
netapp_transport_type = http
(StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values
are http or https.
netapp_vserver = None
If the netapp_vserver option is specified, the exports belonging to the Vserver will only be used for provisioning in the future. Block Storage volumes on exports not belonging to the Vserver specified by this option will continue to function normally.
Note
If you specify an account in the netapp_login that only has virtual storage
server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not
work and you may see warnings in the OpenStack Block Storage logs.
Tip
For more information on these options and other deployment and operational
scenarios, visit the OpenStack NetApp community.
Configure the volume driver, storage family, and storage protocol to NetApp unified
driver, clustered Data ONTAP, and NFS respectively by setting the volume_driver,
netapp_storage_family and netapp_storage_protocol options in
cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = nfs
netapp_vserver = openstack-vserver
netapp_server_hostname = myhostname
netapp_server_port = port
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
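The file named by nfs_shares_config lists one NFS export per line; a sketch with placeholder addresses:

```
192.168.0.1:/vol/openstack1
192.168.0.2:/vol/openstack2
```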
Description
[DEFAULT]
expiry_thres_minutes = 720
(IntOpt) Images in the NFS image cache that have not been accessed in the last M minutes, where M is the value of this parameter, will be deleted from the cache to create free space on the NFS share.
netapp_copyoffload_tool_path = None
netapp_login = None
netapp_password = None
netapp_server_hostname = None
(StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = 80
netapp_storage_family = ontap_cluster
(StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP
operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None
netapp_transport_type = http
(StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values
are http or https.
netapp_vserver = None
thres_avl_size_perc_start = 20
thres_avl_size_perc_stop = 60
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. These options can be found in Table 1.24, "Description of NFS storage configuration options" [70].
Note
If you specify an account in the netapp_login that only has virtual storage server (Vserver) administration privileges (rather than cluster-wide administration privileges), some advanced features of the NetApp unified driver will not work and you may see warnings in the OpenStack Block Storage logs.
NetApp NFS Copy Offload client
A feature was added in the Icehouse release of the NetApp unified driver that enables Image Service images to be efficiently copied to a destination Block Storage volume. When
the Block Storage and Image Service are configured to use the NetApp NFS Copy Offload
client, a controller-side copy will be attempted before reverting to downloading the image
from the Image Service. This improves image provisioning times while reducing the consumption of bandwidth and CPU cycles on the host(s) running the Image and Block Storage
services. This is due to the copy operation being performed completely within the storage
cluster.
The NetApp NFS Copy Offload client can be used in either of the following scenarios:
The Image Service is configured to store images in an NFS share that is exported from a
NetApp FlexVol volume and the destination for the new Block Storage volume will be on
an NFS share exported from a different FlexVol volume than the one used by the Image
Service. Both FlexVols must be located within the same cluster.
The source image from the Image Service has already been cached in an NFS image cache
within a Block Storage backend. The cached image resides on a different FlexVol volume
than the destination for the new Block Storage volume. Both FlexVols must be located
within the same cluster.
To use this feature, you must configure the Image Service, as follows:
Set the default_store configuration option to file.
Set the filesystem_store_datadir configuration option to the path to the Image
Service NFS export.
Set the show_image_direct_url configuration option to True.
Set the show_multiple_locations configuration option to True.
Set the filesystem_store_metadata_file configuration option to a metadata
file. The metadata file should contain a JSON object that contains the correct information
about the NFS export used by the Image Service, similar to:
{
"share_location": "nfs://192.168.0.1/myGlanceExport",
"mount_point": "/var/lib/glance/images",
"type": "nfs"
}
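Collected into a glance-api.conf fragment, the Image Service settings above might look like this (the data-directory and metadata-file paths are examples, and the section placement of the filesystem store options may differ by release):

```ini
[DEFAULT]
default_store = file
filesystem_store_datadir = /var/lib/glance/images
show_image_direct_url = True
show_multiple_locations = True
filesystem_store_metadata_file = /etc/glance/filesystem_store_metadata.json
```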
To use this feature, you must configure the Block Storage service, as follows:
Set the netapp_copyoffload_tool_path configuration option to the path to the
NetApp Copy Offload binary.
Set the glance_api_version configuration option to 2.
Important
This feature requires that:
The storage system must have Data ONTAP v8.2 or greater installed.
The vStorage feature must be enabled on each storage virtual machine (SVM,
also known as a Vserver) that is permitted to interact with the copy offload
client.
To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM.
Tip
To download the NetApp copy offload binary to be utilized in conjunction with
the netapp_copyoffload_tool_path configuration option, please visit the
Utility Toolchest page at the NetApp Support portal (login is required).
Tip
For more information on these options and other deployment and operational
scenarios, visit the OpenStack NetApp community.
netapp_raid_type (String)
netapp_disk_type (String)
netapp:qos_policy_group [a] (String): Specify the name of a QoS policy group, which defines measurable Service Level Objectives, that should be applied to the OpenStack Block Storage volume at the time of volume creation. Ensure that the QoS policy group object within Data ONTAP is defined before an OpenStack Block Storage volume is created, and that the QoS policy group is not associated with the destination FlexVol volume.
netapp_mirrored (Boolean): Limit the candidate volume list to only the ones that are mirrored on the storage controller.
netapp_unmirrored [b] (Boolean): Limit the candidate volume list to only the ones that are not mirrored on the storage controller.
netapp_dedup (Boolean): Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller.
netapp_nodedup [b] (Boolean): Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller.
netapp_compression (Boolean): Limit the candidate volume list to only the ones that have compression enabled on the storage controller.
netapp_nocompression [b] (Boolean): Limit the candidate volume list to only the ones that have compression disabled on the storage controller.
netapp_thin_provisioned (Boolean): Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
netapp_thick_provisioned [b] (Boolean): Limit the candidate volume list to only the ones that support thick provisioning on the storage controller.
a. Please note that this extra spec has a colon (:) in its name because it is used by the driver to assign the QoS policy group to the OpenStack Block Storage volume after it has been provisioned.
b. In the Juno release, these negative-assertion extra specs are formally deprecated by the NetApp unified driver. Instead of using the deprecated negative-assertion extra specs (for example, netapp_unmirrored) with a value of true, use the corresponding positive-assertion extra spec (for example, netapp_mirrored) with a value of false.
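Extra specs are attached to volume types with the cinder CLI, in the same way as the QoS examples earlier in this chapter; an illustrative sequence, assuming a configured cloud (the type name is an example):

```
cinder type-create "netapp-mirrored"
cinder type-key "netapp-mirrored" set netapp_mirrored="true"
```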
Configure the volume driver, storage family and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and iSCSI respectively by setting the
volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
Note
To use the iSCSI protocol, you must override the default value of
netapp_storage_protocol with iscsi.
Description
[DEFAULT]
netapp_login = None
netapp_password = None
netapp_server_hostname = None
(StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = 80
netapp_size_multiplier = 1.2
netapp_storage_family = ontap_cluster
(StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP
operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None
netapp_transport_type = http
(StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values
are http or https.
netapp_vfiler = None
netapp_volume_list = None
(StrOpt) This option is only utilized when the storage protocol is configured to use iSCSI. This option is used to restrict provisioning to the specified controller volumes.
Specify the value of this option to be a comma separated
list of NetApp controller volume names to be used for provisioning.
Tip
For more information on these options and other deployment and operational
scenarios, visit the OpenStack NetApp community.
This driver manages OpenStack volumes on NFS exports provided by the Data ONTAP operating in 7-Mode storage system, which can then be accessed using the NFS protocol.
The NFS configuration for Data ONTAP operating in 7-Mode is a direct interface from
OpenStack Block Storage to the Data ONTAP operating in 7-Mode instance and as such
does not require any additional management software to achieve the desired functionality.
It uses NetApp ONTAPI to interact with the Data ONTAP operating in 7-Mode storage system.
Configuration options for the Data ONTAP operating in 7-Mode family with NFS protocol
Configure the volume driver, storage family, and storage protocol to the NetApp unified driver, Data ONTAP operating in 7-Mode, and NFS respectively by setting the
volume_driver, netapp_storage_family and netapp_storage_protocol options in cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
Description
[DEFAULT]
expiry_thres_minutes = 720
netapp_login = None
netapp_password = None
netapp_server_hostname = None
(StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = 80
netapp_storage_family = ontap_cluster
(StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP
operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_protocol = None
netapp_transport_type = http
(StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values
are http or https.
thres_avl_size_perc_start = 20
thres_avl_size_perc_stop = 60
Note
Additional NetApp NFS configuration options are shared with the generic NFS driver. For a description of these, see Table 1.24, "Description of NFS storage configuration options" [70].
Tip
For more information on these options and other deployment and operational
scenarios, visit the OpenStack NetApp community.
Configure the volume driver, storage family, and storage protocol to the NetApp
unified driver, E-Series, and iSCSI respectively by setting the volume_driver,
netapp_storage_family and netapp_storage_protocol options in
cinder.conf as follows:
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = eseries
netapp_storage_protocol = iscsi
netapp_server_hostname = myhostname
netapp_server_port = 80
netapp_login = username
netapp_password = password
netapp_controller_ips = 1.2.3.4,5.6.7.8
netapp_sa_password = arrayPassword
netapp_storage_pools = pool1,pool2
use_multipath_for_image_xfer = True
Note
To use the E-Series driver, you must override the default value of
netapp_storage_family with eseries.
Note
To use the iSCSI protocol, you must override the default value of
netapp_storage_protocol with iscsi.
Description of NetApp E-Series driver configuration options:

[DEFAULT]
netapp_controller_ips = None
    (StrOpt) This option is only utilized when the storage family is configured to eseries. This option is used to restrict provisioning to the specified controllers. Specify the value of this option to be a comma separated list of controller hostnames or IP addresses to be used for provisioning.
netapp_login = None
netapp_password = None
netapp_sa_password = None
netapp_server_hostname = None
    (StrOpt) The hostname (or IP address) for the storage system or proxy server.
netapp_server_port = 80
netapp_storage_family = ontap_cluster
    (StrOpt) The storage family type used on the storage system; valid values are ontap_7mode for using Data ONTAP operating in 7-Mode, ontap_cluster for using clustered Data ONTAP, or eseries for using E-Series.
netapp_storage_pools = None
netapp_transport_type = http
    (StrOpt) The transport protocol used when communicating with the storage system or proxy server. Valid values are http or https.
netapp_webservice_path = /devmgr/v2
    (StrOpt) This option is used to specify the path to the E-Series proxy application on a proxy server. The value is combined with the value of the netapp_transport_type, netapp_server_hostname, and netapp_server_port options to create the URL used by the driver to connect to the proxy application.
Tip
For more information on these options and other deployment and operational
scenarios, visit the OpenStack NetApp community.
1. NetApp iSCSI direct driver for Clustered Data ONTAP in Grizzly (or earlier).
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirectCmodeISCSIDriver
2. NetApp NFS direct driver for Clustered Data ONTAP in Grizzly (or earlier).
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirectCmodeNfsDriver
3. NetApp iSCSI direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier).
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppDirect7modeISCSIDriver
4. NetApp NFS direct driver for Data ONTAP operating in 7-Mode storage controller in Grizzly (or earlier)
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
3. NetApp iSCSI driver for Data ONTAP operating in 7-Mode storage controller.
volume_driver = cinder.volume.drivers.netapp.iscsi.NetAppISCSIDriver
4. NetApp NFS driver for Data ONTAP operating in 7-Mode storage controller.
volume_driver = cinder.volume.drivers.netapp.nfs.NetAppNFSDriver
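As an illustration of moving off a deprecated driver, a configuration that previously used the direct 7-Mode NFS driver maps onto the unified driver roughly as follows (a sketch; the hostname and credentials are placeholders):

```ini
# Before (deprecated direct driver):
#   volume_driver = cinder.volume.drivers.netapp.nfs.NetAppDirect7modeNfsDriver
# After (unified driver; equivalent selection via family/protocol options):
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_7mode
netapp_storage_protocol = nfs
netapp_server_hostname = myhostname
netapp_login = username
netapp_password = password
nfs_shares_config = /etc/cinder/nfs_shares
```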
Note
See the OpenStack NetApp community for support information on deprecated
NetApp drivers in the Havana release.
Nexenta drivers
NexentaStor Appliance is a NAS/SAN software platform designed for building reliable and fast network storage arrays. The Nexenta Storage Appliance uses ZFS as a disk management system. NexentaStor can serve as a storage node for OpenStack and its virtual servers through the iSCSI and NFS protocols.
With the NFS option, every Compute volume is represented by a directory designated to be
its own file system in the ZFS file system. These file systems are exported using NFS.
With either option some minimal setup is required to tell OpenStack which NexentaStor
servers are being used, whether they are supporting iSCSI and/or NFS and how to access
each of the servers.
Typically the only operation required on the NexentaStor servers is to create the containing directory for the iSCSI or NFS exports. For NFS this containing directory must be explicitly exported via NFS. There is no software that must be installed on the NexentaStor servers;
they are controlled using existing management plane interfaces.
The following table contains the options supported by the Nexenta iSCSI driver.
[DEFAULT]
nexenta_blocksize =
nexenta_host =
nexenta_iscsi_target_portal_port = 3260
nexenta_password = nexenta
nexenta_rest_port = 2000
nexenta_rest_protocol = auto
nexenta_rrmgr_compression = 0
nexenta_rrmgr_connections = 2
nexenta_rrmgr_tcp_buf_size = 4096
nexenta_sparse = False
nexenta_sparsed_volumes = True
nexenta_target_group_prefix = cinder/
nexenta_target_prefix = iqn.1986-03.com.sun:02:cinder-
nexenta_user = admin
nexenta_volume = cinder
To use Compute with the Nexenta iSCSI driver, first set the volume_driver:
volume_driver=cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
Then, set the nexenta_host parameter and other parameters from the table, if needed.
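Putting those pieces together, a minimal cinder.conf fragment for the Nexenta iSCSI back-end might look like this (a sketch; the address is a placeholder, and the remaining values are the defaults from the table):

```ini
volume_driver = cinder.volume.drivers.nexenta.iscsi.NexentaISCSIDriver
nexenta_host = 192.168.1.200
nexenta_user = admin
nexenta_password = nexenta
nexenta_volume = cinder
```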
The following table contains the options supported by the Nexenta NFS driver.
[DEFAULT]
nexenta_mount_point_base = $state_path/mnt
nexenta_nms_cache_volroot = True
nexenta_shares_config = /etc/cinder/nfs_shares
nexenta_volume_compression = on
Add your list of Nexenta NFS servers to the file you specified with the
nexenta_shares_config option. For example, if the value of this option was set to /
etc/cinder/nfs_shares, then:
# cat /etc/cinder/nfs_shares
192.168.1.200:/storage http://admin:[email protected]:2000
192.168.1.201:/storage http://admin:[email protected]:2000
192.168.1.202:/storage http://admin:[email protected]:2000
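To select the Nexenta NFS back-end itself, volume_driver must also point at the Nexenta NFS driver class (a sketch, assuming the in-tree Juno class path):

```ini
volume_driver = cinder.volume.drivers.nexenta.nfs.NexentaNfsDriver
nexenta_shares_config = /etc/cinder/nfs_shares
```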
NFS driver
The Network File System (NFS) is a distributed file system protocol originally developed by
Sun Microsystems in 1984. An NFS server exports one or more of its file systems, known as shares. An NFS client can mount these exported shares on its own file system. You can perform file actions on this mounted remote file system as if the file system were local.
The following table contains the options supported by the NFS driver.
[DEFAULT]
nfs_mount_options = None
(StrOpt) Mount options passed to the nfs client. See the nfs man page for details.
nfs_mount_point_base = $state_path/mnt
nfs_oversub_ratio = 1.0
nfs_shares_config = /etc/cinder/nfs_shares
nfs_sparsed_volumes = True
nfs_used_ratio = 0.95
(FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
Note
As of the Icehouse release, the NFS driver (and other drivers based on it) attempts to mount shares using version 4.1 of the NFS protocol (including pNFS). If the mount attempt is unsuccessful due to a lack of client or server support, a subsequent mount attempt that requests the default behavior of the mount.nfs command is performed. On most distributions, the default behavior is to attempt mounting first with NFS v4.0, then silently fall back to NFS v3.0 if necessary. If the nfs_mount_options configuration option contains a request for a specific version of NFS to be used, or if specific options are specified in the shares configuration file specified by the nfs_shares_config configuration option, the mount is attempted as requested with no subsequent attempts.
1. Access to one or more NFS servers. Creating an NFS server is outside the scope of this document. This example assumes access to the following NFS servers and mount points:
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
This example demonstrates the use of this driver with multiple NFS servers. Multiple servers are not required. One is usually enough.
2. Add your list of NFS servers to the file you specified with the nfs_shares_config option. For example, if the value of this option was set to /etc/cinder/shares.txt, then:
# cat /etc/cinder/shares.txt
192.168.1.200:/storage
192.168.1.201:/storage
192.168.1.202:/storage
3. Configure the nfs_mount_point_base option. This is a directory where cinder-volume mounts all NFS shares stored in shares.txt. For this example, /var/lib/cinder/nfs is used. You can, of course, use the default value of $state_path/mnt.
4. Start the cinder-volume service. /var/lib/cinder/nfs should now contain a directory for each NFS share specified in shares.txt. The name of each directory is a hashed name:
# ls /var/lib/cinder/nfs/
...
46c5db75dc3a3a50a10bfd1a456a9f3f
...
5. Create a volume. This volume can also be attached and deleted just like other volumes. However, snapshotting is not supported.
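Taken together, the steps above correspond to a cinder.conf fragment like the following (a sketch using the example paths from this procedure, and assuming the in-tree NFS driver class path):

```ini
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/shares.txt
nfs_mount_point_base = /var/lib/cinder/nfs
```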
To handle all requests, more than one cinder-volume service may be needed, as well as potentially more than one NFS server.
Because data is stored in a file and not actually on a block storage device, you might not
see the same IO performance as you would with a traditional block storage driver. Please
test accordingly.
Despite possible IO performance loss, having volume data stored in a file might be beneficial. For example, backing up volumes can be as easy as copying the volume files.
Note
Regular IO flushing and syncing still applies.
Supported operations
Create, delete, attach, and detach volumes.
Create, list, and delete volume snapshots.
Create a volume from a snapshot.
Copy an image to a volume.
Copy a volume to an image.
Clone a volume.
Extend a volume.
Note
For other management commands, see the command help: flvcli -h.
3. Save the changes to the /etc/cinder/cinder.conf file and restart the cinder-volume service.
The ProphetStor Fibre Channel or iSCSI drivers are now enabled on your OpenStack system.
If you experience problems, review the Block Storage service log files for errors.
The following table contains the options supported by the ProphetStor storage driver.
[DEFAULT]
dpl_pool =
dpl_port = 8357
iscsi_port = 3260
san_ip =
san_login = admin
san_password =
san_thin_provision = True
Supported operations
Create, delete, attach, detach, clone and extend volumes.
Create a volume from snapshot.
Create and delete volume snapshots.
Note
These instructions assume that the cinder-api and cinder-scheduler services are installed and configured in your OpenStack cluster.
IP_PURE_MGMT
The IP address of the Pure Storage array's management interface or a domain name that resolves to that IP address.
PURE_API_TOKEN
Copy the IQN string and run the following command to create a Purity host for an IQN:
$ purehost create --iqnlist IQN HOST
Note
Do not specify multiple IQNs with the --iqnlist option. Each
FlashArray host must be configured to a single OpenStack IQN.
Sheepdog driver
Sheepdog is an open-source distributed storage system that provides a virtual storage pool utilizing the internal disks of commodity servers.
Sheepdog scales to several hundred nodes, and has powerful virtual disk management features such as snapshotting, cloning, rollback, and thin provisioning.
More information can be found on the Sheepdog Project website.
This driver enables use of Sheepdog through Qemu/KVM.
Set the following volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.sheepdog.SheepdogDriver
SolidFire
The SolidFire Cluster is a high performance all-SSD iSCSI storage device that provides massive scale-out capability and extreme fault tolerance. A key feature of the SolidFire cluster is the ability to set and modify, during operation, specific QoS levels on a volume-by-volume basis. The SolidFire cluster offers this along with de-duplication, compression, and an architecture that takes full advantage of SSDs.
To configure the use of a SolidFire cluster with Block Storage, modify your cinder.conf
file as follows:
volume_driver = cinder.volume.drivers.solidfire.SolidFireDriver
san_ip = 172.17.1.182
# the address of your MVIP
san_login = sfadmin
# your cluster admin login
san_password = sfpassword
# your cluster admin password
sf_account_prefix = ''
# prefix for tenant account creation on solidfire cluster (see warning below)
Warning
The SolidFire driver creates a unique account prefixed with $cinder-volume-service-hostname-$tenant-id on the SolidFire cluster for each tenant that accesses the cluster through the Volume API. Unfortunately, this account formation results in issues for High Availability (HA) installations and installations where the cinder-volume service can move to a new node. HA installations can return an Account Not Found error because the call to the SolidFire cluster is not always going to be sent from the same node. In installations
where the cinder-volume service moves to a new node, the same issue can
occur when you perform operations on existing volumes, such as clone, extend,
delete, and so on.
Note
Set the sf_account_prefix option to an empty string ('') in the
cinder.conf file. This setting results in unique accounts being created on
the SolidFire cluster, but the accounts are prefixed with the tenant-id or any
unique identifier that you choose and are independent of the host where the
cinder-volume service resides.
The following table contains the options supported by the SolidFire driver.
[DEFAULT]
sf_account_prefix = None
sf_allow_tenant_qos = False
sf_api_port = 443
(IntOpt) SolidFire API port. Useful if the device api is behind a proxy on a different port.
sf_emulate_512 = True
Warning
The VMware ESX VMDK driver is deprecated as of the Icehouse release and
might be removed in Juno or a subsequent release. The VMware vCenter
VMDK driver continues to be fully supported.
Functional context
The VMware VMDK driver connects to vCenter, through which it can dynamically access all
the data stores visible from the ESX hosts in the managed cluster.
When you create a volume, the VMDK driver creates a VMDK file on demand. The VMDK
file creation completes only when the volume is subsequently attached to an instance, because the set of data stores visible to the instance determines where to place the volume.
The running vSphere VM is automatically reconfigured to attach the VMDK file as an extra
disk. Once attached, you can log in to the running vSphere VM to rescan and discover this
extra disk.
Configuration
The recommended volume driver for OpenStack Block Storage is the VMware vCenter
VMDK driver. When you configure the driver, you must match it with the appropriate
OpenStack Compute driver from VMware and both drivers must point to the same server.
In the nova.conf file, use this option to define the Compute driver:
compute_driver=vmwareapi.VMwareVCDriver
In the cinder.conf file, use this option to define the volume driver:
volume_driver=cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
The following table lists various options that the drivers support for the OpenStack Block
Storage configuration (cinder.conf):
[DEFAULT]
vmware_api_retry_count = 10
vmware_host_ip = None
vmware_host_password = None
vmware_host_username = None
vmware_host_version = None
vmware_image_transfer_timeout_secs = 7200
(IntOpt) Timeout in seconds for VMDK volume transfer between Cinder and Glance.
vmware_max_objects_retrieval = 100
vmware_task_poll_interval = 0.5
vmware_tmp_dir = /tmp
vmware_volume_folder = cinder-volumes
vmware_wsdl_location = None
Disk file type      Extra spec key      Extra spec value
thin                vmware:vmdk_type    thin
lazyZeroedThick     vmware:vmdk_type    thick
eagerZeroedThick    vmware:vmdk_type    eagerZeroedThick
If you do not specify a vmdk_type extra spec entry, the default disk file type is thin.
The following example shows how to create a lazyZeroedThick VMDK volume by using
the appropriate vmdk_type:
$ cinder type-create thick_volume
$ cinder type-key thick_volume set vmware:vmdk_type=thick
$ cinder create --volume-type thick_volume --display-name volume1 1
Clone type
With the VMware VMDK drivers, you can create a volume from another source volume or a
snapshot point. The VMware vCenter VMDK driver supports the full and linked/fast
clone types. Use the vmware:clone_type extra spec key to specify the clone type. The
following table captures the mapping for clone types:
Clone type      Extra spec key       Extra spec value
full            vmware:clone_type    full
linked/fast     vmware:clone_type    linked
Note
The VMware ESX VMDK driver ignores the extra spec entry and always creates
a full clone.
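By analogy with the thick_volume example above, a linked clone volume type could be created as follows (a sketch; the type name fast_clone_volume is arbitrary):

```console
$ cinder type-create fast_clone_volume
$ cinder type-key fast_clone_volume set vmware:clone_type=linked
$ cinder create --volume-type fast_clone_volume --display-name volume2 1
```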
Note
Any data stores that you configure for the Block Storage service must also be configured for the Compute service.
In vCenter, tag the data stores to be used for the back end.
OpenStack also supports policies that are created by using vendor-specific capabilities;
for example vSAN-specific storage policies.
Note
The tag value serves as the policy. For details, see the section called Storage policy-based configuration in vCenter [82].
2. Set the extra spec key vmware:storage_profile in the desired Block Storage volume types to the policy name that you created in the previous step.
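For example, if the storage policy created in vCenter is named gold (a placeholder name), the extra spec could be set as follows (a sketch):

```console
$ cinder type-create gold_volume
$ cinder type-key gold_volume set vmware:storage_profile=gold
```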
Note
The following considerations apply to configuring SPBM for the Block Storage
service:
For any volume that is created without an associated policy (that is, without an associated volume type that specifies the vmware:storage_profile extra spec), there is no policy-based placement for that volume.
Supported operations
The VMware vCenter and ESX VMDK drivers support these operations:
Create, delete, attach, and detach volumes.
Note
When a volume is attached to an instance, a reconfigure operation is performed on the instance to add the volume's VMDK to it. The user must manually rescan and mount the device from within the guest operating system.
Create, list, and delete volume snapshots.
Note
Allowed only if volume is not attached to an instance.
Create a volume from a snapshot.
Copy an image to a volume.
Note
Only images in vmdk disk format with bare container format are supported. The vmware_disktype property of the image can be preallocated,
sparse, streamOptimized or thin.
Copy a volume to an image.
Note
Allowed only if the volume is not attached to an instance.
Clone a volume.
Note
Supported only if the source volume is not attached to an instance.
Backup a volume.
Restore backup to new or existing volume.
Note
Supported only if the existing volume doesn't contain snapshots.
Change the type of a volume.
Note
This operation is supported only if the volume state is available.
Note
Although the VMware ESX VMDK driver supports these operations, it has not
been extensively tested.
Prerequisites
Determine the data stores to be used by the SPBM policy.
Determine the tag that identifies the data stores in the OpenStack component configuration.
Create separate policies or sets of data stores for separate OpenStack components.
b. Apply the tag to the data stores to be used by the SPBM policy.
Note
For details about creating tags in vSphere, see the vSphere documentation.
3. In vCenter, create a tag-based storage policy that uses one or more tags to identify a set of data stores.
Note
You use this tag name and category when you configure the *.conf file
for the OpenStack component. For details about creating tags in vSphere,
see the vSphere documentation.
Windows
There is a volume back-end for Windows. Set the following in your cinder.conf, and use
the options below to configure it.
volume_driver=cinder.volume.drivers.windows.WindowsDriver
Description
[DEFAULT]
windows_iscsi_lun_path = C:\iSCSIVirtualDisks
The Xen Storage Manager volume driver allows use of more sophisticated storage back-ends for operations like cloning/snapshots, and so on. Some of the storage plug-ins that are already supported in Citrix XenServer and Xen Cloud Platform (XCP) are:
1. NFS VHD: Storage repository (SR) plug-in that stores disks as Virtual Hard Disk (VHD) files
on a remote Network File System (NFS).
2. Local VHD on LVM: SR plug-in that represents disks as VHD disks on Logical Volumes
(LVM) within a locally-attached Volume Group.
3. HBA LUN-per-VDI driver: SR plug-in that represents Logical Units (LUs) as Virtual Disk Images (VDIs) sourced by host bus adapters (HBAs). For example, hardware-based iSCSI or
FC support.
4. NetApp: SR driver for mapping of LUNs to VDIs on a NetApp server, providing use of fast snapshot and clone features on the filer.
5. LVHD over FC: SR plug-in that represents disks as VHDs on Logical Volumes within a Volume Group created on an HBA LUN. For example, hardware-based iSCSI or FC support.
6. iSCSI: Base ISCSI SR driver, provides a LUN-per-VDI. Does not support creation of VDIs but
accesses existing LUNs on a target.
7. LVHD over iSCSI: SR plug-in that represents disks as Logical Volumes within a Volume
Group created on an iSCSI LUN.
8. EqualLogic: SR driver for mapping of LUNs to VDIs on an EqualLogic array group, providing use of fast snapshot and clone features on the array.
Operation
The admin uses the nova-manage command detailed below to add flavors and back-ends.
One or more cinder-volume service instances are deployed for each availability zone.
When an instance is started, it creates storage repositories (SRs) to connect to the backends available within that zone. All cinder-volume instances within a zone can see all
the available back-ends. These instances are completely symmetric and hence should be
able to service any create_volume request within the zone.
Configuration
Set the following configuration options for the nova volume service: (nova-compute
also requires the volume_driver configuration option.)
--volume_driver "nova.volume.xensm.XenSMDriver"
--use_local_volumes False
You must create the back-end configurations that the volume driver uses before you
start the volume service.
$ nova-manage sm flavor_create <label> <description>
$ nova-manage sm flavor_delete <label>
$ nova-manage sm backend_add <flavor label> <SR type> [config connection
parameters]
Note
SR type and configuration connection parameters are in keeping with the XenAPI Command Line Interface.
$ nova-manage sm backend_delete <back-end-id>
Example: For the NFS storage manager plug-in, run these commands:
$ nova-manage sm flavor_create gold "Not all that glitters"
$ nova-manage sm flavor_delete gold
$ nova-manage sm backend_add gold nfs name_label=myback-end server=myserver serverpath=/local/scratch/myname
$ nova-manage sm backend_remove 1
XenAPINFS
XenAPINFS is a Block Storage (Cinder) driver that uses an NFS share through the XenAPI
Storage Manager to store virtual disk images and expose those virtual disks as volumes.
This driver does not access the NFS share directly. It accesses the share only through XenAPI
Storage Manager. Consider this driver as a reference implementation for use of the XenAPI
Storage Manager in OpenStack (present in XenServer and XCP).
Requirements
A XenServer/XCP installation that acts as Storage Controller. This hypervisor is known as
the storage controller.
Use XenServer/XCP as your hypervisor for Compute nodes.
An NFS share that is configured for XenServer/XCP. For specific requirements and export options, see the administration guide for your specific XenServer version. The NFS share must be accessible by all XenServer components within your cloud.
To create volumes from XenServer type images (vhd tgz files), XenServer Nova plug-ins
are also required on the storage controller.
Note
You can use a XenServer as a storage controller and compute node at the same
time. This minimal configuration consists of a XenServer/XCP box and an NFS
share.
Configuration patterns
Local configuration (Recommended): The driver runs in a virtual machine on top of the
storage controller. With this configuration, you can create volumes from qemu-img-supported formats.
Figure 1.3. Local configuration
Remote configuration: The driver is not a guest VM of the storage controller. With this
configuration, you can only use XenServer vhd-type images to create volumes.
Figure 1.4. Remote configuration
Configuration options
Assuming the following setup:
XenServer box at 10.2.2.1
XenServer password is r00tme
NFS server is nfs.example.com
NFS export is at /volumes
To use XenAPINFS as your cinder driver, set these configuration options in the
cinder.conf file:
volume_driver = cinder.volume.drivers.xenapi.sm.XenAPINFSDriver
xenapi_connection_url = http://10.2.2.1
xenapi_connection_username = root
xenapi_connection_password = r00tme
xenapi_nfs_server = nfs.example.com
xenapi_nfs_serverpath = /volumes
The following table shows the configuration options that the XenAPINFS driver supports:
[DEFAULT]
xenapi_connection_password = None
xenapi_connection_url = None
xenapi_connection_username = root
xenapi_nfs_server = None
xenapi_nfs_serverpath = None
xenapi_sr_base_path = /var/run/sr-mount
Zadara
There is a volume back-end for Zadara. Set the following in your cinder.conf, and use
the following options to configure it.
volume_driver=cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver
[DEFAULT]
zadara_password = None
zadara_user = None
zadara_vol_encrypt = False
zadara_vol_name_template = OS_%s
zadara_vol_thin = True
zadara_vpsa_allow_nonexistent_delete = True
zadara_vpsa_auto_detach_on_delete = True
zadara_vpsa_ip = None
zadara_vpsa_poolname = None
zadara_vpsa_port = None
zadara_vpsa_use_ssl = False
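A minimal cinder.conf fragment for the Zadara back-end might look like this (a sketch; the address, credentials, and pool name are placeholders):

```ini
volume_driver = cinder.volume.drivers.zadara.ZadaraVPSAISCSIDriver
zadara_vpsa_ip = 192.168.5.5
zadara_vpsa_port = 80
zadara_user = vpsauser
zadara_password = mysecret
zadara_vpsa_poolname = pool-0001
```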
Configuration
Note
You can also run this workflow to automate the above tasks.
3. Ensure that the ZFSSA iSCSI service is online. If the ZFSSA iSCSI service is not online, enable the service by using the BUI, CLI or REST API in the appliance.
zfssa:> configuration services iscsi
zfssa:configuration services iscsi> enable
zfssa:configuration services iscsi> show
Properties:
<status>= online
...
...
Note
Do not use management interfaces for zfssa_target_interfaces.
Supported operations
Create and delete volumes
Extend volume
Create and delete snapshots
Create volume from snapshot
Delete volume snapshots
Attach and detach volumes
Get volume stats
Clone volumes
Driver options
The Oracle ZFSSA iSCSI Driver supports these options:
[DEFAULT]
zfssa_initiator =
zfssa_initiator_group =
zfssa_initiator_password =
zfssa_initiator_user =
zfssa_lun_compression =
zfssa_lun_logbias =
zfssa_lun_sparse = False
zfssa_lun_volblocksize = 8k
    (StrOpt) Block size: 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k.
zfssa_pool = None
zfssa_project = None
zfssa_rest_timeout = None
zfssa_target_group = tgt-grp
zfssa_target_interfaces = None
zfssa_target_password =
zfssa_target_portal = None
zfssa_target_user =
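Combining the options above, a cinder.conf fragment for the ZFSSA iSCSI back-end might look like this (a sketch; the driver class path, addresses, credentials, and pool/project names are assumptions to be adapted to your appliance):

```ini
volume_driver = cinder.volume.drivers.zfssa.zfssaiscsi.ZFSSAISCSIDriver
san_ip = 10.1.1.1
san_login = cinder_admin
san_password = password
zfssa_pool = mypool
zfssa_project = myproject
zfssa_target_portal = 10.1.1.2:3260
zfssa_target_interfaces = e1000g0
```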
Backup drivers
This section describes how to configure the cinder-backup service and its drivers.
The backup drivers are included with the Block Storage repository (https://github.com/openstack/cinder). To set a backup driver, use the backup_driver flag. By default there is no backup driver enabled.
Note
Block Storage enables you to:
Restore to a new volume, which is the default and recommended action.
Restore to the original volume from which the backup was taken. The restore
action takes a full copy because this is the safest action.
To enable the Ceph backup driver, include the following option in the cinder.conf file:
backup_driver = cinder.backup.drivers.ceph
The following configuration options are available for the Ceph backup driver.
Description
[DEFAULT]
backup_ceph_chunk_size = 134217728
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_pool = backups
backup_ceph_stripe_count = 0
backup_ceph_stripe_unit = 0
backup_ceph_user = cinder
restore_discard_excess_bytes = True
(BoolOpt) If True, always discard excess bytes when restoring volumes i.e. pad with zeroes.
This example shows the default options for the Ceph backup driver.
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user = cinder
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
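The TSM backup driver is enabled the same way as the Ceph driver, by pointing backup_driver at the TSM module (a sketch, assuming the in-tree module path):

```ini
backup_driver = cinder.backup.drivers.tsm
```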
The following configuration options are available for the TSM backup driver.
Description
[DEFAULT]
backup_tsm_compression = True
backup_tsm_password = password
backup_tsm_volume_prefix = backup
This example shows the default options for the TSM backup driver.
backup_tsm_volume_prefix = backup
backup_tsm_password = password
backup_tsm_compression = True
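Likewise, a sketch for enabling the Swift back-end backup driver (assuming the in-tree module path):

```ini
backup_driver = cinder.backup.drivers.swift
```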
The following configuration options are available for the Swift back-end backup driver.
[DEFAULT]
backup_swift_auth = per_user
backup_swift_container = volumebackups
backup_swift_key = None
backup_swift_object_size = 52428800
backup_swift_retry_attempts = 3
backup_swift_retry_backoff = 2
backup_swift_url = None
backup_swift_user = None
swift_catalog_info = object-store:swift:publicURL
(StrOpt) Info to match when looking for swift in the service catalog. Format is separated values of the form <service_type>:<service_name>:<endpoint_type>. Only used if backup_swift_url is unset.
This example shows the default options for the Swift back-end backup driver.
backup_swift_url = http://localhost:8080/v1/AUTH_
backup_swift_auth = per_user
backup_swift_user = <None>
backup_swift_key = <None>
backup_swift_container = volumebackups
backup_swift_object_size = 52428800
backup_swift_retry_attempts = 3
backup_swift_retry_backoff = 2
backup_compression_algorithm = zlib
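Because cinder.conf uses INI syntax, option values like the ones above can be inspected with Python's standard configparser; a sketch, assuming a file containing the example values shown (interpolation is disabled because '%' appears in the logging format options of a full cinder.conf):

```python
import configparser

SAMPLE = """\
[DEFAULT]
backup_swift_container = volumebackups
backup_swift_object_size = 52428800
backup_swift_retry_attempts = 3
"""

# interpolation=None avoids tripping on '%' in logging format values
cfg = configparser.ConfigParser(interpolation=None)
cfg.read_string(SAMPLE)  # cfg.read("/etc/cinder/cinder.conf") in practice
print(cfg["DEFAULT"]["backup_swift_container"])           # volumebackups
print(cfg["DEFAULT"].getint("backup_swift_object_size"))  # 52428800
```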
cinder.conf
Use the cinder.conf file to configure the majority of the Block Storage service options.
[DEFAULT]
#
# Options defined in oslo.messaging
#
# Use durable queues in amqp. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues=false
# Auto-delete queues in amqp. (boolean value)
#amqp_auto_delete=false
# Size of RPC connection pool. (integer value)
#rpc_conn_pool_size=30
# Qpid broker hostname. (string value)
#qpid_hostname=localhost
# Qpid broker port. (integer value)
#qpid_port=5672
# Qpid HA cluster host:port pairs. (list value)
#qpid_hosts=$qpid_hostname:$qpid_port
# Username for Qpid connection. (string value)
#qpid_username=
# Password for Qpid connection. (string value)
#qpid_password=
# Space separated list of SASL mechanisms to use for auth.
# (string value)
#qpid_sasl_mechanisms=
# Seconds between connection keepalive heartbeats. (integer
# value)
#qpid_heartbeat=60
# Transport to use, either 'tcp' or 'ssl'. (string value)
#qpid_protocol=tcp
# Whether to disable the Nagle algorithm. (boolean value)
#qpid_tcp_nodelay=true
# The number of prefetched messages held by receiver. (integer
# value)
#qpid_receiver_capacity=1
# The qpid topology version to use. Version 1 is what was
# originally used by impl_qpid. Version 2 includes some
# backwards-incompatible changes that allow broker federation
# to work. Users should update to version 2 when they are
# able to take everything down, as it requires a clean break.
# (integer value)
#qpid_topology_version=1
# SSL version to use (valid only if SSL enabled). Valid values
# are TLSv1, SSLv23 and SSLv3. SSLv2 may be available on some
# distributions. (string value)
#kombu_ssl_version=
# SSL key file (valid only if SSL enabled). (string value)
#kombu_ssl_keyfile=
# SSL cert file (valid only if SSL enabled). (string value)
#kombu_ssl_certfile=
# SSL certification authority file (valid only if SSL
# enabled). (string value)
#kombu_ssl_ca_certs=
# How long to wait before reconnecting in response to an AMQP
# consumer cancel notification. (floating point value)
#kombu_reconnect_delay=1.0
# The RabbitMQ broker address where a single node is used.
# (string value)
#rabbit_host=localhost
# The RabbitMQ broker port where a single node is used.
# (integer value)
#rabbit_port=5672
# RabbitMQ HA cluster host:port pairs. (list value)
#rabbit_hosts=$rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
#rabbit_use_ssl=false
# The RabbitMQ userid. (string value)
#rabbit_userid=guest
# The RabbitMQ password. (string value)
#rabbit_password=guest
# the RabbitMQ login method (string value)
#rabbit_login_method=AMQPLAIN
# The RabbitMQ virtual host. (string value)
#rabbit_virtual_host=/
# How frequently to retry connecting with RabbitMQ. (integer
# value)
#rabbit_retry_interval=1
# How long to backoff for between retries when connecting to
# RabbitMQ. (integer value)
#rabbit_retry_backoff=2
# Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
#rabbit_max_retries=0
# Use HA queues in RabbitMQ (x-ha-policy: all). If you change
# this option, you must wipe the RabbitMQ database. (boolean
# value)
#rabbit_ha_queues=false
# If passed, use a fake RabbitMQ provider. (boolean value)
#fake_rabbit=false
# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve
# to this address. (string value)
#rpc_zmq_bind_address=*
# MatchMaker driver. (string value)
#rpc_zmq_matchmaker=oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
# ZeroMQ receiver listening port. (integer value)
#rpc_zmq_port=9501
# Number of ZeroMQ contexts, defaults to 1. (integer value)
#rpc_zmq_contexts=1
# Maximum number of ingress messages to locally buffer per
# topic. Default is unlimited. (integer value)
#rpc_zmq_topic_backlog=<None>
#
# Options defined in cinder.exception
#
# make exception message format errors fatal (boolean value)
#fatal_exception_format_errors=false
#
# Options defined in cinder.policy
#
# JSON file representing policy (string value)
#policy_file=policy.json
# Rule checked when requested rule is not found (string value)
#policy_default_rule=default
#
# Options defined in cinder.quota
#
# number of volumes allowed per project (integer value)
#quota_volumes=10
# number of volume snapshots allowed per project (integer
# value)
#quota_snapshots=10
# number of volume gigabytes (snapshots are also included)
# allowed per project (integer value)
#quota_gigabytes=1000
# number of seconds until a reservation expires (integer
# value)
#reservation_expire=86400
# count of reservations until usage is refreshed (integer
# value)
#until_refresh=0
# number of seconds between subsequent usage refreshes
# (integer value)
#max_age=0
# default driver to use for quota checks (string value)
#quota_driver=cinder.quota.DbQuotaDriver
# whether to use default quota class for default quota
# (boolean value)
#use_default_quota_class=true
#
# Options defined in cinder.service
#
# seconds between nodes reporting state to datastore (integer
# value)
#report_interval=10
# seconds between running periodic tasks (integer value)
#periodic_interval=60
# range of seconds to randomly delay when starting the
# periodic task scheduler to reduce stampeding. (Disable by
# setting to 0) (integer value)
#periodic_fuzzy_delay=60
#
# Options defined in cinder.test
#
# File name of clean sqlite db (string value)
#sqlite_clean_db=clean.sqlite
#
# Options defined in cinder.wsgi
#
# Maximum line size of message headers to be accepted.
# max_header_line may need to be increased when using large
# tokens (typically those generated by the Keystone v3 API
# with big service catalogs). (integer value)
#max_header_line=16384
# Sets the value of TCP_KEEPIDLE in seconds for each server
# socket. Not supported on OS X. (integer value)
#tcp_keepidle=600
# CA certificate file to use to verify connecting clients
# (string value)
#ssl_ca_file=<None>
# Certificate file to use when starting the server securely
# (string value)
#ssl_cert_file=<None>
# Private key file to use when starting the server securely
# (string value)
#ssl_key_file=<None>
#
# Options defined in cinder.api.common
#
# the maximum number of items returned in a single response
# from a collection resource (integer value)
#osapi_max_limit=1000
# Base URL that will be presented to users in links to the
# OpenStack Volume API (string value)
# Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
#osapi_volume_base_URL=<None>
#
# Options defined in cinder.api.middleware.auth
#
# Treat X-Forwarded-For as the canonical remote address. Only
# enable this if you have a sanitizing proxy. (boolean value)
#use_forwarded_for=false
#
# Options defined in cinder.api.middleware.sizelimit
#
# Max size for body of a request (integer value)
#osapi_max_request_body_size=114688
#
# Options defined in cinder.backup.driver
#
# Backup metadata version to be used when backing up volume
# metadata. If this number is bumped, make sure the service
# doing the restore supports the new version. (integer value)
#backup_metadata_version=1
#
# Options defined in cinder.backup.drivers.ceph
#
# Ceph configuration file to use. (string value)
#backup_ceph_conf=/etc/ceph/ceph.conf
# The Ceph user to connect with. Default here is to use the
# same user as for Cinder volumes. If not using cephx this
# should be set to None. (string value)
#backup_ceph_user=cinder
# The chunk size, in bytes, that a backup is broken into
# before transfer to the Ceph object store. (integer value)
#backup_ceph_chunk_size=134217728
# The Ceph pool where volume backups are stored. (string
# value)
#backup_ceph_pool=backups
# RBD stripe unit to use when creating a backup image.
# (integer value)
#backup_ceph_stripe_unit=0
# RBD stripe count to use when creating a backup image.
# (integer value)
#backup_ceph_stripe_count=0
# If True, always discard excess bytes when restoring volumes
# i.e. pad with zeroes. (boolean value)
#restore_discard_excess_bytes=true
#
# Options defined in cinder.backup.drivers.swift
#
# The URL of the Swift endpoint (string value)
#backup_swift_url=http://localhost:8080/v1/AUTH_
# Swift authentication mechanism (string value)
#backup_swift_auth=per_user
# Swift user name (string value)
#backup_swift_user=<None>
# Swift key for authentication (string value)
#backup_swift_key=<None>
# The default Swift container to use (string value)
#backup_swift_container=volumebackups
# The size in bytes of Swift backup objects (integer value)
#backup_swift_object_size=52428800
# The number of retries to make for Swift operations (integer
# value)
#backup_swift_retry_attempts=3
# The backoff time in seconds between Swift retries (integer
# value)
#backup_swift_retry_backoff=2
# Compression algorithm (None to disable) (string value)
#backup_compression_algorithm=zlib
#
# Options defined in cinder.backup.drivers.tsm
#
# Volume prefix for the backup id when backing up to TSM
# (string value)
#backup_tsm_volume_prefix=backup
# TSM password for the running username (string value)
#backup_tsm_password=password
# Enable or Disable compression for backups (boolean value)
#backup_tsm_compression=true
#
# Options defined in cinder.backup.manager
#
# Driver to use for backups. (string value)
# Deprecated group/name - [DEFAULT]/backup_service
#backup_driver=cinder.backup.drivers.swift
#
# Options defined in cinder.common.config
#
# File name for the paste.deploy config for cinder-api (string
# value)
#api_paste_config=api-paste.ini
# Top-level directory for maintaining cinder's state (string
# value)
# Deprecated group/name - [DEFAULT]/pybasedir
#state_path=/var/lib/cinder
# ip address of this host (string value)
#my_ip=10.0.0.1
# default glance hostname or ip (string value)
#glance_host=$my_ip
# default glance port (integer value)
#glance_port=9292
# A list of the glance api servers available to cinder
# ([hostname|ip]:port) (list value)
#glance_api_servers=$glance_host:$glance_port
# Version of the glance api to use (integer value)
#glance_api_version=1
# Number of retries when downloading an image from glance
# (integer value)
#glance_num_retries=0
# Allow to perform insecure SSL (https) requests to glance
# (boolean value)
#glance_api_insecure=false
# Whether to attempt to negotiate SSL layer compression when
# using SSL (https) requests. Set to False to disable SSL
# layer compression. In some cases disabling this may improve
# data throughput, eg when high network bandwidth is available
# and you are using already compressed image formats such as
# qcow2. (boolean value)
#glance_api_ssl_compression=false
# http/https timeout value for glance operations. If no value
# (None) is supplied here, the glanceclient default value is
# used. (integer value)
#glance_request_timeout=<None>
# the topic scheduler nodes listen on (string value)
#scheduler_topic=cinder-scheduler
# the topic volume nodes listen on (string value)
#volume_topic=cinder-volume
# the topic volume backup nodes listen on (string value)
#backup_topic=cinder-backup
# Deploy v1 of the Cinder API. (boolean value)
#enable_v1_api=true
# The full class name of the volume API class (string
# value)
#volume_api_class=cinder.volume.api.API
# The full class name of the volume backup API class (string
# value)
#backup_api_class=cinder.backup.api.API
# The strategy to use for auth. Supports noauth, keystone, and
# deprecated. (string value)
#auth_strategy=noauth
# A list of backend names to use. These backend names should
# be backed by a unique [CONFIG] group with its options (list
# value)
#enabled_backends=<None>
# Whether snapshots count against GigaByte quota (boolean
# value)
#no_snapshot_gb_quota=false
# The full class name of the volume transfer API class (string
# value)
#transfer_api_class=cinder.transfer.api.API
#
# Options defined in cinder.compute
#
# The full class name of the compute API class to use (string
# value)
#compute_api_class=cinder.compute.nova.API
#
# Options defined in cinder.compute.nova
#
# Info to match when looking for nova in the service catalog.
# Format is colon-separated values of the form:
# <service_type>:<service_name>:<endpoint_type> (string value)
#nova_catalog_info=compute:nova:publicURL
# Same as nova_catalog_info, but for admin endpoint. (string
# value)
#nova_catalog_admin_info=compute:nova:adminURL
# Override service catalog lookup with template for nova
# endpoint e.g. http://localhost:8774/v2/%(project_id)s
# (string value)
#nova_endpoint_template=<None>
# Same as nova_endpoint_template, but for admin endpoint.
# (string value)
#nova_endpoint_admin_template=<None>
# region name of this node (string value)
#os_region_name=<None>
# Location of ca certificates file to use for nova client
# requests. (string value)
#nova_ca_certificates_file=<None>
#
# Options defined in cinder.db.api
#
# The backend to use for db (string value)
#db_backend=sqlalchemy
# Services to be added to the available pool on create
# (boolean value)
#enable_new_services=true
# Template string to be used to generate volume names (string
# value)
#volume_name_template=volume-%s
# Template string to be used to generate snapshot names
# (string value)
#snapshot_name_template=snapshot-%s
# Template string to be used to generate backup names (string
# value)
#backup_name_template=backup-%s
#
# Options defined in cinder.db.base
#
# driver to use for database access (string value)
#db_driver=cinder.db
#
# Options defined in cinder.image.glance
#
# A list of url schemes that can be downloaded directly via
# the direct_url. Currently supported schemes: [file]. (list
# value)
#allowed_direct_url_schemes=
#
# Options defined in cinder.image.image_utils
#
# Directory used for temporary storage during image conversion
# (string value)
#image_conversion_dir=$state_path/conversion
#
# Options defined in cinder.openstack.common.eventlet_backdoor
#
# Enable eventlet backdoor. Acceptable values are 0, <port>,
# and <start>:<end>, where 0 results in listening on a random
# tcp port number; <port> results in listening on the
# specified port number (and not enabling backdoor if that
# port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range
# of port numbers. The chosen port is displayed in the
# service's log file. (string value)
#backdoor_port=<None>
#
# Options defined in cinder.openstack.common.lockutils
#
# Whether to disable inter-process locks (boolean value)
#disable_process_locking=false
# Directory to use for lock files. Default to a temp directory
# (string value)
#lock_path=<None>
#
# Options defined in cinder.openstack.common.log
#
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
#debug=false
# Print more verbose output (set logging level to INFO instead
# of default WARNING level). (boolean value)
#verbose=false
# Log output to standard error (boolean value)
#use_stderr=true
# Format string to use for log messages with context (string
# value)
#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages without context
# (string value)
#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Data to append to log format when level is DEBUG (string
# value)
#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format
# (string value)
#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
# List of logger=LEVEL pairs (list value)
#default_log_levels=amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN
# Publish error events (boolean value)
#publish_errors=false
# Make deprecations fatal (boolean value)
#fatal_deprecations=false
# If an instance is passed with the log message, format it
# like this (string value)
#instance_format="[instance: %(uuid)s] "
# If an instance UUID is passed with the log message, format
# it like this (string value)
#instance_uuid_format="[instance: %(uuid)s] "
# The name of logging configuration file. It does not disable
# existing loggers, but just appends specified logging
# configuration to any other existing logging options. Please
# see the Python logging module documentation for details on
# logging configuration files. (string value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append=<None>
# DEPRECATED. A logging.Formatter log message format string
# which may use any of the available logging.LogRecord
# attributes. This option is deprecated. Please use
# logging_context_format_string and
# logging_default_format_string instead. (string value)
#log_format=<None>
# Format string for %%(asctime)s in log records. Default:
# %(default)s (string value)
#log_date_format=%Y-%m-%d %H:%M:%S
# (Optional) Name of log file to output to. If no default is
# set, logging will go to stdout. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file=<None>
# (Optional) The base directory used for relative --log-file
# paths (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir=<None>
#
# Options defined in cinder.openstack.common.periodic_task
#
# Some periodic tasks can be run in a separate process. Should
# we run them here? (boolean value)
#run_external_periodic_tasks=true
#
# Options defined in cinder.scheduler.driver
#
# The scheduler host manager class to use (string value)
#scheduler_host_manager=cinder.scheduler.host_manager.HostManager
# Maximum number of attempts to schedule a volume (integer
# value)
#scheduler_max_attempts=3
#
# Options defined in cinder.scheduler.host_manager
#
# Which filter class names to use for filtering hosts when not
# specified in the request. (list value)
#scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
# Which weigher class names to use for weighing hosts. (list
# value)
#scheduler_default_weighers=CapacityWeigher
#
# Options defined in cinder.scheduler.manager
#
# Default scheduler driver to use (string value)
#scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
#
# Options defined in cinder.scheduler.scheduler_options
#
# Absolute path to scheduler configuration JSON file. (string
# value)
#scheduler_json_config_location=
#
# Options defined in cinder.scheduler.simple
#
# This configuration option has been deprecated along with the
# SimpleScheduler. The new scheduler is able to gather capacity
# information for each host, thus setting the maximum number
# of volume gigabytes per host is no longer needed. It is safe
# to remove this option from cinder.conf. (integer value)
#max_gigabytes=10000
#
# Options defined in cinder.scheduler.weights.capacity
#
# Multiplier used for weighing volume capacity. Negative
# numbers mean to stack vs spread. (floating point value)
#capacity_weight_multiplier=1.0
# Multiplier used for weighing volume capacity. Negative
# numbers mean to stack vs spread. (floating point value)
#allocated_capacity_weight_multiplier=-1.0
#
# Options defined in cinder.transfer.api
#
# The number of characters in the salt. (integer value)
#volume_transfer_salt_length=8
# The number of characters in the autogenerated auth key.
# (integer value)
#volume_transfer_key_length=16
#
# Options defined in cinder.volume.api
#
# Create volume from snapshot at the host where snapshot
# resides (boolean value)
#snapshot_same_host=true
# Ensure that the new volumes are the same AZ as snapshot or
# source volume (boolean value)
#cloned_volume_same_az=true
#
# Options defined in cinder.volume.driver
#
#
# Options defined in cinder.volume.drivers.block_device
#
# List of all available devices (list value)
#available_devices=
#
# Options defined in cinder.volume.drivers.coraid
#
# IP address of Coraid ESM (string value)
#coraid_esm_address=
# User name to connect to Coraid ESM (string value)
#coraid_user=admin
# Name of group on Coraid ESM to which coraid_user belongs
# (must have admin privilege) (string value)
#coraid_group=admin
# Password to connect to Coraid ESM (string value)
#coraid_password=password
#
# Options defined in cinder.volume.drivers.emc.emc_smis_common
#
# use this file for cinder emc plugin config data (string
# value)
#cinder_emc_config_file=/etc/cinder/cinder_emc_config.xml
#
# Options defined in cinder.volume.drivers.emc.emc_vnx_cli
#
# Naviseccli Path (string value)
#naviseccli_path=
# ISCSI pool name (string value)
#storage_vnx_pool_name=<None>
# Default Time Out For CLI operations in minutes (integer
# value)
#default_timeout=20
# Default max number of LUNs in a storage group (integer
# value)
#max_luns_per_storage_group=256
#
# Options defined in cinder.volume.drivers.eqlx
#
# Group name to use for creating volumes (string value)
#eqlx_group_name=group-0
# Timeout for the Group Manager cli command execution (integer
# value)
#eqlx_cli_timeout=30
# Maximum retry count for reconnection (integer value)
#eqlx_cli_max_retries=5
# Use CHAP authentication for targets? (boolean value)
#eqlx_use_chap=false
# Existing CHAP account name (string value)
#eqlx_chap_login=admin
# Password for specified CHAP account name (string value)
#eqlx_chap_password=password
# Pool in which volumes will be created (string value)
#eqlx_pool=default
#
# Options defined in cinder.volume.drivers.glusterfs
#
# File with the list of available gluster shares (string
# value)
#glusterfs_shares_config=/etc/cinder/glusterfs_shares
# Create volumes as sparsed files which take no space. If set
# to False, volume is created as regular file. In such case
# volume creation takes a lot of time. (boolean value)
#glusterfs_sparsed_volumes=true
# Create volumes as QCOW2 files rather than raw files.
# (boolean value)
#glusterfs_qcow2_volumes=false
# Base dir containing mount points for gluster shares. (string
# value)
#glusterfs_mount_point_base=$state_path/mnt
#
# Options defined in cinder.volume.drivers.hds.hds
#
# configuration file for HDS cinder plugin for HUS (string
# value)
#hds_cinder_config_file=/opt/hds/hus/cinder_hus_conf.xml
#
# Options defined in cinder.volume.drivers.huawei
#
# config data for cinder huawei plugin (string value)
#cinder_huawei_conf_file=/etc/cinder/cinder_huawei_conf.xml
#
# Options defined in cinder.volume.drivers.ibm.gpfs
#
# Specifies the path of the GPFS directory where Block Storage
# volume and snapshot files are stored. (string value)
#gpfs_mount_point_base=<None>
# Specifies the path of the Image service repository in GPFS.
# Leave undefined if not storing images in GPFS. (string
# value)
#gpfs_images_dir=<None>
#
# Options defined in cinder.volume.drivers.ibm.storwize_svc
#
# Storage system storage pool for volumes (string value)
#storwize_svc_volpool_name=volpool
# Storage system space-efficiency parameter for volumes
# (percentage) (integer value)
#storwize_svc_vol_rsize=2
# Storage system threshold for volume capacity warnings
# (percentage) (integer value)
#storwize_svc_vol_warning=0
# Storage system autoexpand parameter for volumes (True/False)
# (boolean value)
#storwize_svc_vol_autoexpand=true
# Storage system grain size parameter for volumes
# (32/64/128/256) (integer value)
#storwize_svc_vol_grainsize=256
# Storage system compression option for volumes (boolean
# value)
#storwize_svc_vol_compression=false
# Enable Easy Tier for volumes (boolean value)
#storwize_svc_vol_easytier=true
# The I/O group in which to allocate volumes (integer value)
#storwize_svc_vol_iogrp=0
# Maximum number of seconds to wait for FlashCopy to be
# prepared. Maximum value is 600 seconds (10 minutes) (integer
# value)
#storwize_svc_flashcopy_timeout=120
#
# Options defined in cinder.volume.drivers.ibm.xiv_ds8k
#
# Proxy driver that connects to the IBM Storage Array (string
# value)
#xiv_ds8k_proxy=xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy
# Connection type to the IBM Storage Array
# (fibre_channel|iscsi) (string value)
#xiv_ds8k_connection_type=iscsi
# CHAP authentication mode, effective only for iscsi
# (disabled|enabled) (string value)
#xiv_chap=disabled
#
# Options defined in cinder.volume.drivers.lvm
#
# Name for the VG that will contain exported volumes (string
# value)
#volume_group=cinder-volumes
# If set, create lvms with multiple mirrors. Note that this
# requires lvm_mirrors + 2 pvs with available space (integer
# value)
#lvm_mirrors=0
# Type of LVM volumes to deploy; (default or thin) (string
# value)
#lvm_type=default
#
# Options defined in cinder.volume.drivers.netapp.options
#
#netapp_vfiler=<None>
# Administrative user account name used to access the storage
# system or proxy server. (string value)
#netapp_login=<None>
# Password for the administrative user account specified in
# the netapp_login option. (string value)
#netapp_password=<None>
# This option specifies the virtual storage server (Vserver)
# name on the storage cluster on which provisioning of block
# storage volumes should occur. If using the NFS storage
# protocol, this parameter is mandatory for storage service
# catalog support (utilized by Cinder volume type extra_specs
# support). If this option is specified, the exports belonging
# to the Vserver will only be used for provisioning in the
# future. Block storage volumes on exports not belonging to
# the Vserver specified by this option will continue to
# function normally. (string value)
#netapp_vserver=<None>
# The hostname (or IP address) for the storage system or proxy
# server. (string value)
#netapp_server_hostname=<None>
# The TCP port to use for communication with the storage
# system or proxy server. Traditionally, port 80 is used for
# HTTP and port 443 is used for HTTPS; however, this value
# should be changed if an alternate port has been configured
# on the storage system or proxy server. (integer value)
#netapp_server_port=80
# This option is used to specify the path to the E-Series
# proxy application on a proxy server. The value is combined
# with the value of the netapp_transport_type,
# netapp_server_hostname, and netapp_server_port options to
# create the URL used by the driver to connect to the proxy
# application. (string value)
#netapp_webservice_path=/devmgr/v2
# This option is only utilized when the storage family is
# configured to eseries. This option is used to restrict
# provisioning to the specified controllers. Specify the value
# of this option to be a comma separated list of controller
# hostnames or IP addresses to be used for provisioning.
# (string value)
#netapp_controller_ips=<None>
# Password for the NetApp E-Series storage array. (string
# value)
#netapp_sa_password=<None>
# This option is used to restrict provisioning to the
# specified storage pools. Only dynamic disk pools are
# currently supported. Specify the value of this option to be
# a comma separated list of disk pool names to be used for
# provisioning. (string value)
#netapp_storage_pools=<None>
#
# Options defined in cinder.volume.drivers.nexenta.options
#
# IP address of Nexenta SA (string value)
#nexenta_host=
# HTTP port to connect to Nexenta REST API server (integer
# value)
#nexenta_rest_port=2000
# Use http or https for REST connection (default auto) (string
# value)
#nexenta_rest_protocol=auto
# User name to connect to Nexenta SA (string value)
#nexenta_user=admin
# Password to connect to Nexenta SA (string value)
#nexenta_password=nexenta
# Nexenta target portal port (integer value)
#nexenta_iscsi_target_portal_port=3260
# pool on SA that will hold all volumes (string value)
#nexenta_volume=cinder
# IQN prefix for iSCSI targets (string value)
#nexenta_target_prefix=iqn.1986-03.com.sun:02:cinder
# prefix for iSCSI target groups on SA (string value)
#nexenta_target_group_prefix=cinder/
# File with the list of available nfs shares (string value)
#nexenta_shares_config=/etc/cinder/nfs_shares
# Base dir containing mount points for nfs shares (string
# value)
#nexenta_mount_point_base=$state_path/mnt
# Create volumes as sparsed files which take no space. If set
# to False, volume is created as regular file. In such case
# volume creation takes a lot of time. (boolean value)
#nexenta_sparsed_volumes=true
# Default compression value for new ZFS folders. (string
# value)
#nexenta_volume_compression=on
# If set to True, cache the NexentaStor appliance volroot
# option value. (boolean value)
#nexenta_nms_cache_volroot=true
# Enable stream compression, level 1..9. 1 - gives best speed;
# 9 - gives best compression. (integer value)
#nexenta_rrmgr_compression=0
# TCP Buffer size in KiloBytes. (integer value)
#nexenta_rrmgr_tcp_buf_size=4096
# Number of TCP connections. (integer value)
#nexenta_rrmgr_connections=2
#
# Options defined in cinder.volume.drivers.nfs
#
# IP address or Hostname of NAS system. (string value)
#nas_ip=
# User name to connect to NAS system. (string value)
#nas_login=admin
# Password to connect to NAS system. (string value)
#nas_password=
# SSH port to use to connect to NAS system. (integer value)
#nas_ssh_port=22
# Filename of private key to use for SSH authentication.
# (string value)
#nas_private_key=
# File with the list of available nfs shares (string value)
#nfs_shares_config=/etc/cinder/nfs_shares
# Create volumes as sparsed files which take no space. If set
# to False, volume is created as a regular file. In that case
# volume creation takes a lot of time. (boolean value)
#nfs_sparsed_volumes=true
# Percent of ACTUAL usage of the underlying volume before no
# new volumes can be allocated to the volume destination.
# (floating point value)
#nfs_used_ratio=0.95
# This will compare the allocated to available space on the
# volume destination. If the ratio exceeds this number, the
# destination will no longer be valid. (floating point value)
#nfs_oversub_ratio=1.0
# Base dir containing mount points for nfs shares. (string
# value)
#nfs_mount_point_base=$state_path/mnt
# Mount options passed to the nfs client. See section of the
# nfs man page for details. (string value)
#nfs_mount_options=<None>
#
# Options defined in cinder.volume.drivers.rbd
#
# the RADOS pool in which rbd volumes are stored (string
# value)
#rbd_pool=rbd
# the RADOS client name for accessing rbd volumes - only set
# when using cephx authentication (string value)
#rbd_user=<None>
# path to the ceph configuration file to use (string value)
#rbd_ceph_conf=
# flatten volumes created from snapshots to remove dependency
# (boolean value)
#rbd_flatten_volume_from_snapshot=false
# the libvirt uuid of the secret for the rbd_user volumes
# (string value)
#rbd_secret_uuid=<None>
# where to store temporary image files if the volume driver
# does not write them directly to the volume (string value)
#volume_tmp_dir=<None>
# maximum number of nested clones that can be taken of a
# volume before enforcing a flatten prior to next clone. A
# value of zero disables cloning (integer value)
#rbd_max_clone_depth=5
#
# Options defined in cinder.volume.drivers.san.hp.hp_3par_common
#
# 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1
# (string value)
#hp3par_api_url=
# 3PAR Super user username (string value)
#hp3par_username=
# 3PAR Super user password (string value)
#hp3par_password=
# The CPG to use for volume creation (string value)
#hp3par_cpg=OpenStack
# The CPG to use for Snapshots for volumes. If empty
# hp3par_cpg will be used (string value)
#hp3par_cpg_snap=
# The time in hours to retain a snapshot. You can't delete it
# before this expires. (string value)
#hp3par_snapshot_retention=
#
# Options defined in cinder.volume.drivers.san.hp.hp_lefthand_rest_proxy
#
# HP LeftHand WSAPI Server Url like https://<LeftHand
# ip>:8081/lhos (string value)
#hplefthand_api_url=<None>
# HP LeftHand Super user username (string value)
#hplefthand_username=<None>
# HP LeftHand Super user password (string value)
#hplefthand_password=<None>
# HP LeftHand cluster name (string value)
#hplefthand_clustername=<None>
# Configure CHAP authentication for iSCSI connections
# (Default: Disabled) (boolean value)
#hplefthand_iscsi_chap_enabled=false
# Enable HTTP debugging to LeftHand (boolean value)
#hplefthand_debug=false
#
# Options defined in cinder.volume.drivers.san.hp.hp_msa_common
#
# The VDisk to use for volume creation. (string value)
#msa_vdisk=OpenStack
#
# Options defined in cinder.volume.drivers.san.san
#
# Use thin provisioning for SAN volumes? (boolean value)
#san_thin_provision=true
# IP address of SAN controller (string value)
#san_ip=
# Username for SAN controller (string value)
#san_login=admin
# Password for SAN controller (string value)
#san_password=
# Filename of private key to use for SSH authentication
# (string value)
#san_private_key=
# Cluster name to use for creating volumes (string value)
#san_clustername=
#
# Options defined in cinder.volume.drivers.san.solaris
#
# The ZFS path under which to create zvols for volumes.
# (string value)
#san_zfs_volume_base=rpool/
#
# Options defined in cinder.volume.drivers.scality
#
# Path or URL to Scality SOFS configuration file (string
# value)
#scality_sofs_config=<None>
# Base dir where Scality SOFS shall be mounted (string value)
#scality_sofs_mount_point=$state_path/scality
# Path from Scality SOFS root to volume dir (string value)
#scality_sofs_volume_dir=cinder/volumes
#
# Options defined in cinder.volume.drivers.solidfire
#
# Set 512 byte emulation on volume creation; (boolean value)
#sf_emulate_512=true
#
# Options defined in cinder.volume.drivers.vmware.vmdk
#
# IP address for connecting to VMware ESX/VC server. (string
# value)
#vmware_host_ip=<None>
# Username for authenticating with VMware ESX/VC server.
# (string value)
#vmware_host_username=<None>
# Password for authenticating with VMware ESX/VC server.
# (string value)
#vmware_host_password=<None>
# Optional VIM service WSDL Location e.g
# http://<server>/vimService.wsdl. Optional over-ride to
# default location for bug work-arounds. (string value)
#vmware_wsdl_location=<None>
# Number of times VMware ESX/VC server API must be retried
# upon connection related issues. (integer value)
#vmware_api_retry_count=10
# The interval (in seconds) for polling remote tasks invoked
# on VMware ESX/VC server. (integer value)
#vmware_task_poll_interval=5
# Name for the folder in the VC datacenter that will contain
# cinder volumes. (string value)
#vmware_volume_folder=cinder-volumes
# Timeout in seconds for VMDK volume transfer between Cinder
# and Glance. (integer value)
#vmware_image_transfer_timeout_secs=7200
# Max number of objects to be retrieved per batch. Query
# results will be obtained in batches from the server and not
# in one shot. Server may still limit the count to something
# less than the configured value. (integer value)
#vmware_max_objects_retrieval=100
# Optional string specifying the VMware VC server version. The
# driver attempts to retrieve the version from VMware VC
# server. Set this configuration only if you want to override
# the VC server version. (string value)
#vmware_host_version=<None>
#
# Options defined in cinder.volume.drivers.windows.windows
#
# Path to store VHD backed volumes (string value)
#windows_iscsi_lun_path=C:\iSCSIVirtualDisks
#
# Options defined in cinder.volume.drivers.xenapi.sm
#
# NFS server to be used by XenAPINFSDriver (string value)
#xenapi_nfs_server=<None>
# Path of exported NFS, used by XenAPINFSDriver (string value)
#xenapi_nfs_serverpath=<None>
# URL for XenAPI connection (string value)
#xenapi_connection_url=<None>
# Username for XenAPI connection (string value)
#xenapi_connection_username=root
# Password for XenAPI connection (string value)
#xenapi_connection_password=<None>
# Base path to the storage repository (string value)
#xenapi_sr_base_path=/var/run/sr-mount
#
# Options defined in cinder.volume.drivers.zadara
#
# Management IP of Zadara VPSA (string value)
#zadara_vpsa_ip=<None>
# Zadara VPSA port number (string value)
#zadara_vpsa_port=<None>
# Use SSL connection (boolean value)
#zadara_vpsa_use_ssl=false
# User name for the VPSA (string value)
#zadara_user=<None>
# Password for the VPSA (string value)
#zadara_password=<None>
# Name of VPSA storage pool for volumes (string value)
#zadara_vpsa_poolname=<None>
# Default thin provisioning policy for volumes (boolean value)
#zadara_vol_thin=true
# Default encryption policy for volumes (boolean value)
#zadara_vol_encrypt=false
# Default template for VPSA volume names (string value)
#zadara_vol_name_template=OS_%s
# Automatically detach from servers on volume delete (boolean
# value)
#zadara_vpsa_auto_detach_on_delete=true
# Don't halt on deletion of non-existing volumes (boolean
# value)
#zadara_vpsa_allow_nonexistent_delete=true
#
# Options defined in cinder.volume.manager
#
# Driver to use for volume creation (string value)
#volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
# Timeout for creating the volume to migrate to when
# performing volume migration (seconds) (integer value)
#migration_create_volume_timeout_secs=300
# Offload pending volume delete during volume service startup
# (boolean value)
#volume_service_inithost_offload=false
# FC Zoning mode configured (string value)
#zoning_mode=none
# User defined capabilities, a JSON formatted string
# specifying key/value pairs. (string value)
#extra_capabilities={}
[BRCD_FABRIC_EXAMPLE]
#
# Options defined in cinder.zonemanager.drivers.brocade.brcd_fabric_opts
#
# Management IP of fabric (string value)
#fc_fabric_address=
# Fabric user ID (string value)
#fc_fabric_user=
# Password for user (string value)
#fc_fabric_password=
# Connecting port (integer value)
#fc_fabric_port=22
# overridden zoning policy (string value)
#zoning_policy=initiator-target
# overridden zoning activation state (boolean value)
#zone_activate=true
# overridden zone name prefix (string value)
#zone_name_prefix=<None>
# Principal switch WWN of the fabric (string value)
#principal_switch_wwn=<None>
[database]
#
#
# Options defined in cinder.openstack.common.db.sqlalchemy.session
#
# The SQLAlchemy connection string used to connect to the
# database (string value)
# Deprecated group/name - [DEFAULT]/sql_connection
#connection=sqlite:///$state_path/$sqlite_db
# timeout before idle sql connections are reaped (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
#idle_timeout=3600
# Minimum number of SQL connections to keep open in a pool
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
#min_pool_size=1
# Maximum number of SQL connections to keep open in a pool
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
#max_pool_size=5
# maximum db connection retries during startup. (setting -1
# implies an infinite retry count) (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
#max_retries=10
# interval between retries of opening a sql connection
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
#retry_interval=10
# If set, use this value for max_overflow with sqlalchemy
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
#max_overflow=<None>
# Verbosity of SQL debugging information. 0=None,
# 100=Everything (integer value)
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug=0
# Add python stack traces to SQL as comment strings (boolean
# value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace=false
[fc-zone-manager]
#
# Options defined in cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver
#
# Southbound connector for zoning operation (string value)
#brcd_sb_connector=cinder.zonemanager.drivers.brocade.brcd_fc_zone_client_cli.BrcdFCZoneClientCLI
#
# Options defined in cinder.zonemanager.fc_zone_manager
#
# FC Zone Driver responsible for zone management (string
# value)
#zone_driver=cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
# Zoning policy configured by user (string value)
#zoning_policy=initiator-target
# Comma separated list of fibre channel fabric names. This
# list of names is used to retrieve other SAN credentials for
# connecting to each SAN fabric (string value)
#fc_fabric_names=<None>
# FC San Lookup Service (string value)
#fc_san_lookup_service=cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
[keymgr]
#
# Options defined in cinder.keymgr
#
# The full class name of the key manager API class (string
# value)
#api_class=cinder.keymgr.conf_key_mgr.ConfKeyManager
#
# Options defined in cinder.keymgr.conf_key_mgr
#
# Fixed key returned by key manager, specified in hex (string
# value)
#fixed_key=<None>
[keystone_authtoken]
#
# Options defined in keystoneclient.middleware.auth_token
#
# Prefix to prepend at the beginning of the path. Deprecated,
# use identity_uri. (string value)
#auth_admin_prefix=
# Host providing the admin Identity API endpoint. Deprecated,
# use identity_uri. (string value)
#auth_host=127.0.0.1
# Port of the admin Identity API endpoint. Deprecated, use
# identity_uri. (integer value)
#auth_port=35357
# Protocol of the admin Identity API endpoint (http or https).
# Deprecated, use identity_uri. (string value)
#auth_protocol=https
# Complete public Identity API endpoint (string value)
#auth_uri=<None>
# Complete admin Identity API endpoint. This should specify
# the unversioned root endpoint e.g. https://localhost:35357/
# (string value)
#identity_uri=<None>
# API version of the admin Identity API endpoint (string
# value)
#auth_version=<None>
# Do not handle authorization requests within the middleware,
# but delegate the authorization decision to downstream WSGI
# components (boolean value)
#delay_auth_decision=false
# Request timeout value for communicating with Identity API
# server. (boolean value)
#http_connect_timeout=<None>
# How many times are we trying to reconnect when communicating
# with Identity API Server. (integer value)
#http_request_max_retries=3
# This option is deprecated and may be removed in a future
# release. Single shared secret with the Keystone
# configuration used for bootstrapping a Keystone
# installation, or otherwise bypassing the normal
# authentication process. This option should not be used, use
# `admin_user` and `admin_password` instead. (string value)
#admin_token=<None>
# Keystone account username (string value)
#admin_user=<None>
# Keystone account password (string value)
#admin_password=<None>
# Keystone service account tenant name to validate user tokens
# (string value)
#admin_tenant_name=admin
# (Optional) Indicate whether to set the X-Service-Catalog
# header. If False, middleware will not ask for service
# catalog on token validation and will not set the
# X-Service-Catalog header. (boolean value)
#include_service_catalog=true
# Used to control the use and type of token binding. Can be
# set to: "disabled" to not check token binding. "permissive"
# (default) to validate binding information if the bind type
# is of a form known to the server and ignore it if not.
# "strict" like "permissive" but if the bind type is unknown
# the token will be rejected. "required" any form of token
# binding is needed to be allowed. Finally the name of a
# binding method that must be present in tokens. (string
# value)
#enforce_token_bind=permissive
# If true, the revocation list will be checked for cached
# tokens. This requires that PKI tokens are configured on the
# Keystone server. (boolean value)
#check_revocations_for_cached=false
# Hash algorithms to use for hashing PKI tokens. This may be a
# single algorithm or multiple. The algorithms are those
# supported by Python standard hashlib.new(). The hashes will
# be tried in the order given, so put the preferred one first
# for performance. The result of the first hash will be stored
# in the cache. This will typically be set to multiple values
# only while migrating from a less secure algorithm to a more
# secure one. Once all the old tokens are expired this option
# should be set to a single value for better performance.
# (list value)
#hash_algorithms=md5
[matchmaker_redis]
#
# Options defined in oslo.messaging
#
# Host to locate redis. (string value)
#host=127.0.0.1
# Use this port to connect to redis host. (integer value)
#port=6379
# Password for Redis server (optional). (string value)
#password=<None>
[matchmaker_ring]
#
# Options defined in oslo.messaging
#
# Matchmaker ring file (JSON). (string value)
# Deprecated group/name - [DEFAULT]/matchmaker_ringfile
#ringfile=/etc/oslo/matchmaker_ring.json
[ssl]
#
# Options defined in cinder.openstack.common.sslutils
#
# CA certificate file to use to verify connecting clients
# (string value)
#ca_file=<None>
# Certificate file to use when starting the server securely
# (string value)
#cert_file=<None>
# Private key file to use when starting the server securely
# (string value)
#key_file=<None>
api-paste.ini
Use the api-paste.ini file to configure the Block Storage API service.
#############
# OpenStack #
#############
[composite:osapi_volume]
use = call:cinder.api:root_app_factory
/: apiversions
/v1: openstack_volume_api_v1
/v2: openstack_volume_api_v2
[composite:openstack_volume_api_v1]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = request_id faultwrap sizelimit noauth apiv1
keystone = request_id faultwrap sizelimit authtoken keystonecontext apiv1
keystone_nolimit = request_id faultwrap sizelimit authtoken keystonecontext apiv1
[composite:openstack_volume_api_v2]
use = call:cinder.api.middleware.auth:pipeline_factory
noauth = request_id faultwrap sizelimit noauth apiv2
keystone = request_id faultwrap sizelimit authtoken keystonecontext apiv2
keystone_nolimit = request_id faultwrap sizelimit authtoken keystonecontext apiv2
[filter:request_id]
paste.filter_factory = cinder.openstack.common.middleware.request_id:RequestIdMiddleware.factory
[filter:faultwrap]
paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory
[filter:noauth]
paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory
[filter:sizelimit]
paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory
[app:apiv1]
paste.app_factory = cinder.api.v1.router:APIRouter.factory
[app:apiv2]
paste.app_factory = cinder.api.v2.router:APIRouter.factory
[pipeline:apiversions]
pipeline = faultwrap osvolumeversionapp
[app:osvolumeversionapp]
paste.app_factory = cinder.api.versions:Versions.factory
##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
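The composite sections above map one middleware pipeline to each auth strategy; at startup, Paste hands the chosen space-separated filter list to the factory, which wraps the app (the last element) in each filter in turn. A toy sketch of that selection step (not cinder's actual `pipeline_factory`):

```python
def select_pipeline(auth_strategy, pipelines):
    """Pick the filter chain for the given auth strategy, as a list of names."""
    return pipelines[auth_strategy].split()

# Mirrors the [composite:openstack_volume_api_v2] section above.
pipelines_v2 = {
    "noauth": "request_id faultwrap sizelimit noauth apiv2",
    "keystone": "request_id faultwrap sizelimit authtoken keystonecontext apiv2",
}

chain = select_pipeline("keystone", pipelines_v2)
print(chain)  # last element is the app; the rest are filters, outermost first
```

With `auth_strategy = keystone`, requests pass through `authtoken` and `keystonecontext` before reaching `apiv2`; with `noauth`, the `noauth` filter fakes an authenticated context instead.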
policy.json
The policy.json file defines additional access controls that apply to the Block Storage
service.
{
"context_is_admin": [["role:admin"]],
"admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]],
"default": [["rule:admin_or_owner"]],
"admin_api": [["is_admin:True"]],
"volume:create": [],
"volume:get_all": [],
"volume:get_volume_metadata": [],
"volume:get_volume_admin_metadata": [["rule:admin_api"]],
"volume:delete_volume_admin_metadata": [["rule:admin_api"]],
"volume:update_volume_admin_metadata": [["rule:admin_api"]],
"volume:get_snapshot": [],
"volume:get_all_snapshots": [],
"volume:extend": [],
"volume:update_readonly_flag": [],
"volume:retype": [],
"volume_extension:types_manage": [["rule:admin_api"]],
"volume_extension:types_extra_specs": [["rule:admin_api"]],
"volume_extension:volume_type_encryption": [["rule:admin_api"]],
"volume_extension:volume_encryption_metadata": [["rule:admin_or_owner"]],
"volume_extension:extended_snapshot_attributes": [],
"volume_extension:volume_image_metadata": [],
"volume_extension:quotas:show": [],
"volume_extension:quotas:update": [["rule:admin_api"]],
"volume_extension:quota_classes": [],
    "volume_extension:volume_admin_actions:reset_status": [["rule:admin_api"]],
    "volume_extension:snapshot_admin_actions:reset_status": [["rule:admin_api"]],
    "volume_extension:volume_admin_actions:force_delete": [["rule:admin_api"]],
    "volume_extension:snapshot_admin_actions:force_delete": [["rule:admin_api"]],
    "volume_extension:volume_admin_actions:migrate_volume": [["rule:admin_api"]],
    "volume_extension:volume_admin_actions:migrate_volume_completion": [["rule:admin_api"]],
"volume_extension:volume_host_attribute": [["rule:admin_api"]],
"volume_extension:volume_tenant_attribute": [["rule:admin_or_owner"]],
"volume_extension:volume_mig_status_attribute": [["rule:admin_api"]],
"volume_extension:hosts": [["rule:admin_api"]],
"volume_extension:services": [["rule:admin_api"]],
"volume:services": [["rule:admin_api"]],
"volume:create_transfer": [],
"volume:accept_transfer": [],
"volume:delete_transfer": [],
"volume:get_all_transfers": [],
"backup:create" : [],
"backup:delete": [],
"backup:get": [],
"backup:get_all": [],
"backup:restore": [],
"backup:backup-import": [["rule:admin_api"]],
"backup:backup-export": [["rule:admin_api"]],
"snapshot_extension:snapshot_actions:update_snapshot_status": []
}
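In this rule format, the outer list is an OR of inner AND-lists: access is granted if any inner list of checks passes in full. A minimal sketch of how the `admin_or_owner` rule above evaluates (a toy illustration, not the actual policy engine):

```python
def admin_or_owner(context, target):
    """Toy evaluation of:
        "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]]
    Grants access if the caller is an admin OR owns the target's project."""
    is_admin = bool(context.get("is_admin"))
    owns_project = (
        context.get("project_id") is not None
        and context.get("project_id") == target.get("project_id")
    )
    return is_admin or owns_project

print(admin_or_owner({"project_id": "p1"}, {"project_id": "p1"}))  # True
```

Rules tagged `[["rule:admin_api"]]` above therefore reduce to the single `is_admin:True` check, while the untagged (empty-list) rules fall through to `default`, i.e. `admin_or_owner`.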
rootwrap.conf
The rootwrap.conf file defines configuration values used by the rootwrap script when
the Block Storage service must escalate its privileges to those of the root user.
# Configuration for cinder-rootwrap
# This file should be owned by (and only-writeable by) the root user
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin
Log file            Service (CentOS/Fedora/openSUSE/RHEL/SLES)   Service (Ubuntu/Debian)
api.log             openstack-cinder-api                         cinder-api
cinder-manage.log   cinder-manage                                cinder-manage
scheduler.log       openstack-cinder-scheduler                   cinder-scheduler
volume.log          openstack-cinder-volume                      cinder-volume
Description

[DEFAULT]
zoning_mode = none
    (StrOpt) FC Zoning mode configured
[fc-zone-manager]
fc_fabric_names = None
    (StrOpt) Comma separated list of fibre channel fabric names. This list of names is used to retrieve other SAN credentials for connecting to each SAN fabric
zoning_policy = initiator-target
    (StrOpt) Zoning policy configured by user
To use different Fibre Channel Zone Drivers, use the parameters described in this section.
Note
When multi backend configuration is used, provide the zoning_mode
configuration option as part of the volume driver configuration where
volume_driver option is specified.
Note
The default value of zoning_mode is None; change it to fabric to allow fabric zoning.
Note
zoning_policy can be configured as initiator-target or initiator.
Description

[fc-zone-manager]
brcd_sb_connector = cinder.zonemanager.drivers.brocade.brcd_fc_zone_client_cli.BrcdFCZoneClientCLI
    (StrOpt) Southbound connector for zoning operation
cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
    (StrOpt) Southbound connector for zoning operation
fc_san_lookup_service = cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
    (StrOpt) FC San Lookup Service
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
    (StrOpt) FC Zone Driver responsible for zone management
Configure SAN fabric parameters in the form of fabric groups as described in the example
below:
Description
[BRCD_FABRIC_EXAMPLE]
fc_fabric_address =
fc_fabric_password =
fc_fabric_port = 22
fc_fabric_user =
principal_switch_wwn = None
zone_activate = True
zone_name_prefix = None
zoning_policy = initiator-target
Description
[CISCO_FABRIC_EXAMPLE]
cisco_fc_fabric_address =
cisco_fc_fabric_password =
cisco_fc_fabric_port = 22
cisco_fc_fabric_user =
cisco_zone_activate = True
cisco_zone_name_prefix = None
cisco_zoning_policy = initiator-target
cisco_zoning_vsan = None
Note
Define a fabric group for each fabric using the fabric names used in
fc_fabric_names configuration option as group name.
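Combining the options above, a minimal cinder.conf fragment enabling Brocade fabric zoning for a single fabric might look like the sketch below (the fabric name, address, and credentials are placeholders, not real values):

```ini
[DEFAULT]
# The default "none" disables fabric zoning.
zoning_mode = fabric

[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
brcd_sb_connector = cinder.zonemanager.drivers.brocade.brcd_fc_zone_client_cli.BrcdFCZoneClientCLI
zoning_policy = initiator-target
# Each name listed here must have a matching fabric group section below.
fc_fabric_names = BRCD_FABRIC_EXAMPLE

[BRCD_FABRIC_EXAMPLE]
fc_fabric_address = 10.0.0.10
fc_fabric_user = zoneadmin
fc_fabric_password = password
fc_fabric_port = 22
zoning_policy = initiator-target
zone_activate = True
```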
System requirements
Brocade Fibre Channel Zone Driver requires firmware version FOS v6.4 or higher.
As a best practice for zone management, use a user account with zoneadmin role. Users
with admin role (including the default admin user account) are limited to a maximum of
two concurrent SSH sessions.
For information about how to manage Brocade Fibre Channel switches, see the Brocade
Fabric OS user documentation.
Initial configuration
Configuration changes need to be made to any nodes running the cinder-volume or
nova-compute services.
1. Edit the /etc/cinder/cinder.conf file and add or update the value of the
   option fixed_key in the [keymgr] section:

   [keymgr]
   # Fixed key returned by key manager, specified in hex (string
   # value)
   fixed_key = 0000000000000000000000000000000000000000000000000000000000000000

2. Restart cinder-volume.
1. Edit the /etc/nova/nova.conf file and add or update the value of the option
   fixed_key in the [keymgr] section (add a [keymgr] section as shown if needed):

   [keymgr]
   # Fixed key returned by key manager, specified in hex (string
   # value)
   fixed_key = 0000000000000000000000000000000000000000000000000000000000000000

2. Restart nova-compute.
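The fixed_key value is a hex string; the 64 zero characters above encode a 32-byte key and should be replaced with a random value in any real deployment. One way to generate such a key, sketched with the Python standard library (not part of the official tooling):

```python
import binascii
import os

def generate_fixed_key(nbytes=32):
    """Return a random hex string usable as the [keymgr] fixed_key value.

    Each byte becomes two hex characters, so 32 bytes -> 64 characters.
    """
    return binascii.hexlify(os.urandom(nbytes)).decode()

print(generate_fixed_key())
```

Any 64-character lowercase-hex string works; the same key must be configured in both cinder.conf and nova.conf for attach/detach of encrypted volumes to succeed.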
Mark the volume type as encrypted and provide the necessary details:
$ cinder encryption-type-create --cipher aes-xts-plain64 --key_size 512 \
  --control_location front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
+--------------------------------------+--------------------------------------------+-----------------+----------+------------------+
|            Volume Type ID            |                  Provider                  |      Cipher     | Key Size | Control Location |
+--------------------------------------+--------------------------------------------+-----------------+----------+------------------+
| e64b35a4-a849-4c53-9cc7-2345d3c8fbde | nova.volume.encryptors.luks.LuksEncryptor  | aes-xts-plain64 |   512    |    front-end     |
+--------------------------------------+--------------------------------------------+-----------------+----------+------------------+
Support for creating the volume type in the OpenStack dashboard (horizon) exists today; however, support for tagging the type as encrypted and providing the additional information needed is still in review.
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|           created_at           |      2014-08-10T01:24:24.000000      |
|          description           |                 None                 |
|           encrypted            |                 True                 |
|               id               | 86060306-6f43-4c92-9ab8-ddcd83acd973 |
|            metadata            |                  {}                  |
|              name              |           encrypted volume           |
|     os-vol-host-attr:host      |              controller              |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   08fdea76c760475f82087a45dbe94918   |
|              size              |                  1                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |               creating               |
|            user_id             |   7cbc6b58b372439e8f70e2a9103f1332   |
|          volume_type           |                 LUKS                 |
+--------------------------------+--------------------------------------+
Notice the encrypted parameter, which shows True or False. The volume_type option is also shown for easy review.
1. Create a VM:
$ nova boot --flavor m1.tiny --image cirros-0.3.1-x86_64-disk vm-test
2. Create two volumes, one encrypted and one not encrypted, then attach them to
   your VM:
$ cinder create --display-name 'unencrypted volume' 1
$ cinder create --display-name 'encrypted volume' --volume-type LUKS 1
$ cinder list
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
|                  ID                  |   Status  |        Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
| 64b48a79-5686-4542-9b52-d649b51c10a2 | available | unencrypted volume |  1   |     None    |  false   |             |
| db50b71c-bf97-47cb-a5cf-b4b43a0edab6 | available |  encrypted volume  |  1   |     LUKS    |  false   |             |
+--------------------------------------+-----------+--------------------+------+-------------+----------+-------------+
$ nova volume-attach vm-test 64b48a79-5686-4542-9b52-d649b51c10a2 /dev/vdb
$ nova volume-attach vm-test db50b71c-bf97-47cb-a5cf-b4b43a0edab6 /dev/vdc
3. On the VM, send some text to the newly attached volumes and synchronize them:
# echo "Hello, world (unencrypted /dev/vdb)" >> /dev/vdb
# echo "Hello, world (encrypted /dev/vdc)" >> /dev/vdc
4. On the system hosting cinder volume services, synchronize to flush the I/O
   cache, then test to see if your strings can be found:
# sync && sleep 2
# sync && sleep 2
# strings /dev/stack-volumes/volume-* | grep "Hello"
Hello, world (unencrypted /dev/vdb)
In the above example, the search returns the string written to the unencrypted volume, but not the one written to the encrypted volume.
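The check rests on a simple property: plaintext written to the unencrypted backing store survives verbatim on disk, while data on the LUKS volume is stored encrypted, so a raw byte search only finds the former. A toy illustration of the search itself (the "ciphertext" here is a trivial byte transform standing in for real AES output):

```python
def raw_search(device_bytes, needle):
    """Crude stand-in for `strings <device> | grep <needle>`."""
    return needle in device_bytes

plain = b"Hello, world (unencrypted /dev/vdb)"
# Stand-in "ciphertext": XOR every byte; real LUKS volumes use AES.
cipher = bytes(b ^ 0x5A for b in plain)

print(raw_search(plain, b"Hello"), raw_search(cipher, b"Hello"))
```

The same idea explains why the `strings` scan of the backing devices above surfaces only the unencrypted volume's message.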
Additional options
These options can also be set in the cinder.conf file.
Description
[keystone_authtoken]
admin_password = None
admin_tenant_name = admin
admin_token = None
admin_user = None
auth_admin_prefix =
auth_host = 127.0.0.1
auth_port = 35357
(IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https
auth_uri = None
auth_version = None
cache = None
cafile = None
certfile = None
check_revocations_for_cached = False
delay_auth_decision = False
enforce_token_bind = permissive
(StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding.
139
A F T - Ju no -D R A F T - Ju no -D R A F T - Ju no -D RA F T - Ju no -D RA F T - Ju no -D RA F T - Ju no -
October 7, 2014
juno
Description
"permissive" (default) to validate binding information if
the bind type is of a form known to the server and ignore
it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of
token binding is needed to be allowed. Finally the name of
a binding method that must be present in tokens.
hash_algorithms = md5
http_connect_timeout = None
http_request_max_retries = 3
identity_uri = None
include_service_catalog = True
(BoolOpt) (optional) indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False
keyfile = None
memcache_secret_key = None
memcache_security_strategy = None
(StrOpt) (optional) if defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
revocation_cache_time = 10
signing_dir = None
token_cache_time = 300
(IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
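As a hedged illustration, a cinder.conf fragment combining several of the [keystone_authtoken] options above might look like the following; the host name and credentials are placeholders:

```ini
[keystone_authtoken]
# Identity admin endpoint; identity_uri supersedes auth_host/auth_port/auth_protocol
identity_uri = http://controller:35357
auth_uri = http://controller:5000
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS
# Cache validated tokens for five minutes (the default)
token_cache_time = 300
```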
Description
[DEFAULT]
cinder_huawei_conf_file = /etc/cinder/cinder_huawei_conf.xml
Description
[DEFAULT]
nas_ip =
nas_login = admin
nas_password =
nas_private_key =
nas_ssh_port = 22
Description
[DEFAULT]
msa_vdisk = OpenStack
Description
[DEFAULT]
nimble_pool_name = default
nimble_subnet_label = *
Description
[DEFAULT]
pure_api_token = None
Description
[DEFAULT]
db_backend = sqlalchemy
db_driver = cinder.db
[database]
backend = sqlalchemy
connection = None
connection_debug = 0
connection_trace = False
db_inc_retry_interval = True
db_max_retries = 20
(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10
(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
db_retry_interval = 1
idle_timeout = 3600
max_overflow = None
max_pool_size = None
max_retries = 10
min_pool_size = 1
mysql_sql_mode = TRADITIONAL
pool_timeout = None
retry_interval = 10
slave_connection = None
(StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite
sqlite_synchronous = True
use_db_reconnect = False
use_tpool = False
Description
[keymgr]
api_class = cinder.keymgr.conf_key_mgr.ConfKeyManager
(StrOpt) The full class name of the key manager API class
encryption_api_url = http://localhost:9311/v1
encryption_auth_url = http://localhost:5000/v2.0
fixed_key = None
Description
[DEFAULT]
allocated_capacity_weight_multiplier = -1.0
capacity_weight_multiplier = 1.0
enabled_backends = None
iscsi_helper = tgtadm
(StrOpt) iSCSI target user-land tool to use. tgtadm is default, use lioadm for LIO iSCSI support, iseradm for the ISER protocol, or fake for testing.
iscsi_iotype = fileio
(StrOpt) Sets the behavior of the iSCSI target to either perform blockio or fileio. Optionally, auto can be set and Cinder will autodetect the type of backing device.
iscsi_ip_address = $my_ip
iscsi_num_targets = 100
iscsi_port = 3260
iscsi_target_prefix = iqn.2010-10.org.openstack:
iscsi_write_cache = on
(StrOpt) Sets the behavior of the iSCSI target to either perform write-back (on) or write-through (off). This parameter is valid if iscsi_helper is set to tgtadm or iseradm.
iser_helper = tgtadm
iser_ip_address = $my_ip
iser_num_targets = 100
iser_port = 3260
iser_target_prefix = iqn.2010-10.org.iser.openstack:
max_gigabytes = 10000
migration_create_volume_timeout_secs = 300
num_iser_scan_tries = 3
(IntOpt) The maximum number of times to rescan the iSER target to find a volume
num_volume_device_scan_tries = 3
volume_backend_name = None
volume_clear = zero
volume_clear_ionice = None
volume_clear_size = 0
volume_copy_blkio_cgroup_name = cinder-volume-copy
(StrOpt) The blkio cgroup name to be used to limit bandwidth of volume copy
volume_copy_bps_limit = 0
volume_dd_blocksize = 1M
volume_service_inithost_offload = False
volume_usage_audit_period = month
(StrOpt) Time period for which to generate volume usages. The options are hour, day, month, or year.
volumes_dir = $state_path/volumes
Description
[DEFAULT]
matchmaker_heartbeat_freq = 300
matchmaker_heartbeat_ttl = 600
rpc_backend = rabbit
rpc_cast_timeout = 30
rpc_conn_pool_size = 30
rpc_response_timeout = 60
rpc_thread_pool_size = 64
volume_topic = cinder-volume
Description
[DEFAULT]
amqp_auto_delete = False
amqp_durable_queues = False
control_exchange = openstack
notification_driver = []
notification_topics = notifications
transport_url = None
Description
[DEFAULT]
qpid_heartbeat = 60
qpid_hostname = localhost
qpid_hosts = $qpid_hostname:$qpid_port
qpid_password =
qpid_port = 5672
qpid_protocol = tcp
qpid_receiver_capacity = 1
qpid_sasl_mechanisms =
qpid_tcp_nodelay = True
qpid_topology_version = 1
qpid_username =
Description
[DEFAULT]
kombu_reconnect_delay = 1.0
(FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_ssl_ca_certs =
kombu_ssl_certfile =
kombu_ssl_keyfile =
kombu_ssl_version =
rabbit_ha_queues = False
rabbit_host = localhost
rabbit_hosts = $rabbit_host:$rabbit_port
rabbit_login_method = AMQPLAIN
rabbit_max_retries = 0
(IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest
rabbit_port = 5672
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_virtual_host = /
Description
[matchmaker_redis]
host = 127.0.0.1
password = None
port = 6379
[matchmaker_ring]
ringfile = /etc/oslo/matchmaker_ring.json
Description
[DEFAULT]
rpc_zmq_bind_address = *
rpc_zmq_contexts = 1
rpc_zmq_host = localhost
rpc_zmq_ipc_dir = /var/run/openstack
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
(StrOpt) MatchMaker driver.
rpc_zmq_port = 9501
rpc_zmq_topic_backlog = None
Description
[DEFAULT]
san_zfs_volume_base = rpool/
(StrOpt) The ZFS path under which to create zvols for volumes.
Description
[DEFAULT]
filters_path = /etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap
List of directories to load filter definitions from (separated by ','). These directories MUST all be only writeable by root!
exec_dirs = /sbin,/usr/sbin,/bin,/usr/bin
use_syslog = False
syslog_log_facility = syslog
Which syslog facility to use. Valid values include auth, authpriv, syslog, local0, local1... Default value is 'syslog'
syslog_log_level = ERROR
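These rootwrap options live in their own file rather than in cinder.conf; a sketch of /etc/cinder/rootwrap.conf using the defaults listed above:

```ini
[DEFAULT]
# Directories holding filter definitions; these must be writable only by root
filters_path = /etc/cinder/rootwrap.d,/usr/share/cinder/rootwrap
# Directories searched for the executables that the filters invoke
exec_dirs = /sbin,/usr/sbin,/bin,/usr/bin
use_syslog = False
syslog_log_facility = syslog
syslog_log_level = ERROR
```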
Description
[DEFAULT]
ssl_ca_file = None
ssl_cert_file = None
ssl_key_file = None
(StrOpt) Private key file to use when starting the server securely
[ssl]
ca_file = None
cert_file = None
key_file = None
(StrOpt) Private key file to use when starting the server securely
Description
[DEFAULT]
allowed_direct_url_schemes =
glance_api_insecure = False
glance_api_servers = $glance_host:$glance_port
glance_api_ssl_compression = False
glance_api_version = 1
glance_ca_certificates_file = None
glance_host = $my_ip
glance_num_retries = 0
glance_port = 9292
glance_request_timeout = None
image_conversion_dir = $state_path/conversion
use_multipath_for_image_xfer = False
Description
[DEFAULT]
backup_swift_auth_version = 1
backup_swift_tenant = None
(StrOpt) Swift tenant/account name. Required when connecting to an auth 2.0 system
Description
[DEFAULT]
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
(StrOpt) use this file for cinder emc plugin config data
destroy_empty_storage_group = False
initiator_auto_registration = False
iscsi_initiators =
max_luns_per_storage_group = 255
naviseccli_path =
storage_vnx_authentication_type = global
storage_vnx_pool_name = None
storage_vnx_security_file_dir = None
Description
[DEFAULT]
backup_api_class = cinder.backup.api.API
backup_compression_algorithm = zlib
backup_driver = cinder.backup.drivers.swift
backup_manager = cinder.backup.manager.BackupManager
(StrOpt) Full class name for the Manager for volume backup
backup_metadata_version = 1
(IntOpt) Backup metadata version to be used when backing up volume metadata. If this number is bumped, make sure the service doing the restore supports the new version.
backup_name_template = backup-%s
backup_topic = cinder-backup
snapshot_name_template = snapshot-%s
snapshot_same_host = True
Description
[DEFAULT]
hp3par_api_url =
hp3par_cpg = OpenStack
hp3par_cpg_snap =
(StrOpt) The CPG to use for Snapshots for volumes. If empty hp3par_cpg will be used
hp3par_debug = False
hp3par_iscsi_chap_enabled = False
hp3par_iscsi_ips =
hp3par_password =
hp3par_snapshot_expiration =
hp3par_snapshot_retention =
hp3par_username =
Description
[DEFAULT]
api_paste_config = api-paste.ini
api_rate_limit = True
az_cache_duration = 3600
default_timeout = 525600
enable_v1_api = True
enable_v2_api = True
extra_capabilities = {}
max_header_line = 16384
(IntOpt) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
osapi_max_limit = 1000
osapi_max_request_body_size = 114688
osapi_volume_base_URL = None
osapi_volume_ext_list =
osapi_volume_extension = ['cinder.api.contrib.standard_extensions']
osapi_volume_listen = 0.0.0.0
osapi_volume_listen_port = 8776
osapi_volume_workers = None
transfer_api_class = cinder.transfer.api.API
volume_api_class = cinder.volume.api.API
(StrOpt) The full class name of the volume API class to use
volume_name_template = volume-%s
volume_number_multiplier = -1.0
volume_transfer_key_length = 16
volume_transfer_salt_length = 8
Description
[DEFAULT]
hplefthand_api_url = None
hplefthand_clustername = None
hplefthand_debug = False
hplefthand_iscsi_chap_enabled = False
hplefthand_password = None
hplefthand_username = None
Description
[DEFAULT]
scality_sofs_config = None
scality_sofs_mount_point = $state_path/scality
scality_sofs_volume_dir = cinder/volumes
Description
[DEFAULT]
available_devices =
Description
[DEFAULT]
nova_api_insecure = False
nova_ca_certificates_file = None
nova_catalog_admin_info = compute:nova:adminURL
nova_catalog_info = compute:nova:publicURL
nova_endpoint_admin_template = None
nova_endpoint_template = None
(StrOpt) Override service catalog lookup with template for nova endpoint e.g. http://localhost:8774/v2/%(project_id)s
os_region_name = None
Description
[DEFAULT]
san_clustername =
san_ip =
san_is_local = False
san_login = admin
san_password =
san_private_key =
san_secondary_ip = None
san_ssh_port = 22
san_thin_provision = True
ssh_conn_timeout = 30
ssh_max_pool_conn = 5
ssh_min_pool_conn = 1
Description
[DEFAULT]
cloned_volume_same_az = True
Description
[DEFAULT]
auth_strategy = noauth
Description
[DEFAULT]
scheduler_default_filters = AvailabilityZoneFilter, CapacityFilter, CapabilitiesFilter
(ListOpt) Which filter class names to use for filtering hosts when not specified in the request.
scheduler_default_weighers = CapacityWeigher
scheduler_driver = cinder.scheduler.filter_scheduler.FilterScheduler
scheduler_host_manager = cinder.scheduler.host_manager.HostManager
scheduler_json_config_location =
scheduler_manager = cinder.scheduler.manager.SchedulerManager
scheduler_max_attempts = 3
scheduler_topic = cinder-scheduler
Description
[DEFAULT]
max_age = 0
quota_backup_gigabytes = 1000
quota_backups = 10
quota_consistencygroups = 10
quota_driver = cinder.quota.DbQuotaDriver
quota_gigabytes = 1000
quota_snapshots = 10
quota_volumes = 10
reservation_expire = 86400
use_default_quota_class = True
Description
[DEFAULT]
compute_api_class = cinder.compute.nova.API
consistencygroup_api_class = cinder.consistencygroup.api.API
default_availability_zone = None
default_volume_type = None
enable_new_services = True
host = localhost
(StrOpt) Name of this node. This can be an opaque identifier. It is not necessarily a host name, FQDN, or IP address.
iet_conf = /etc/iet/ietd.conf
lio_initiator_iqns =
lock_path = None
memcached_servers = None
monkey_patch = False
monkey_patch_modules =
my_ip = 10.0.0.1
no_snapshot_gb_quota = False
num_shell_tries = 3
periodic_fuzzy_delay = 60
(IntOpt) Range, in seconds, to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
periodic_interval = 60
policy_default_rule = default
policy_file = policy.json
replication_api_class = cinder.replication.api.API
report_interval = 10
reserved_percentage = 0
rootwrap_config = /etc/cinder/rootwrap.conf
run_external_periodic_tasks = True
service_down_time = 60
ssh_hosts_key_file = $state_path/ssh_known_hosts
state_path = /var/lib/cinder
storage_availability_zone = nova
strict_ssh_host_key_policy = False
tcp_keepalive = True
tcp_keepalive_count = None
tcp_keepalive_interval = None
tcp_keepidle = 600
until_refresh = 0
use_forwarded_for = False
[keystone_authtoken]
memcached_servers = None
Description
[DEFAULT]
debug = False
(BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
fatal_exception_format_errors = False
log_config_append = None
log_dir = None
(StrOpt) (Optional) The base directory used for relative --log-file paths.
log_file = None
(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format = None
(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and logging_default_format_string instead.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
(StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
publish_errors = False
syslog_log_facility = LOG_USER
use_stderr = True
use_syslog = False
use_syslog_rfc_format = False
verbose = False
Description
[DEFAULT]
backdoor_port = None
disable_process_locking = False
Description
[DEFAULT]
fake_rabbit = False
Description
[profiler]
profiler_enabled = False
trace_sqlalchemy = False
Description
[DEFAULT]
fusionio_iocontrol_retry = 3
fusionio_iocontrol_targetdelay = 5
fusionio_iocontrol_verify_cert = True
Description
[DEFAULT]
hitachi_add_chap_user = False
hitachi_async_copy_check_interval = 10
hitachi_auth_method = None
hitachi_auth_password = HBSD-CHAP-password
hitachi_auth_user = HBSD-CHAP-user
hitachi_copy_check_interval = 3
hitachi_copy_speed = 3
hitachi_default_copy_method = FULL
hitachi_group_range = None
hitachi_group_request = False
hitachi_horcm_add_conf = True
hitachi_horcm_numbers = 200,201
hitachi_horcm_password = None
hitachi_horcm_user = None
hitachi_ldev_range = None
hitachi_pool_id = None
hitachi_serial_number = None
hitachi_target_ports = None
hitachi_thin_pool_id = None
hitachi_unit_name = None
hitachi_zoning_request = False
Description
[DEFAULT]
ibmnas_platform_type = v7ku
Description
[DEFAULT]
datera_api_port = 7717
datera_api_token = None
datera_api_version = 1
datera_num_replicas = 3
driver_client_cert = None
driver_client_cert_key = None
(StrOpt) The path to the client certificate key for verification, if the driver supports it.
Description
[DEFAULT]
cinder_smis_config_file = /etc/cinder/cinder_fujitsu_eternus_dx.xml
Description
[DEFAULT]
smbfs_default_volume_format = qcow2
smbfs_mount_options = noperm,file_mode=0775,dir_mode=0775
smbfs_mount_point_base = $state_path/mnt
smbfs_oversub_ratio = 1.0
smbfs_shares_config = /etc/cinder/smbfs_shares
smbfs_sparsed_volumes = True
smbfs_used_ratio = 0.95
(FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
New options
[DEFAULT] backup_swift_auth_version = 1
(StrOpt) Swift tenant/account name. Required when connecting to an auth 2.0 system
[DEFAULT] consistencygroup_api_class = cinder.consistencygroup.api.API
[DEFAULT] datera_api_version = 1
[DEFAULT] datera_num_replicas = 3
[DEFAULT] dpl_pool =
[DEFAULT] driver_client_cert_key = None
(StrOpt) The path to the client certificate key for verification, if the driver supports it.
[DEFAULT] fusionio_iocontrol_retry = 3
[DEFAULT] fusionio_iocontrol_targetdelay = 5
[DEFAULT] hitachi_async_copy_check_interval = 10
[DEFAULT] hitachi_copy_check_interval = 3
[DEFAULT] hitachi_copy_speed = 3
[DEFAULT] iscsi_initiators =
[DEFAULT] iscsi_write_cache = on
(StrOpt) Sets the behavior of the iSCSI target to either perform write-back (on) or write-through (off). This parameter is valid if iscsi_helper is set to tgtadm or iseradm.
[DEFAULT] nimble_subnet_label = *
[DEFAULT] nova_catalog_admin_info = compute:nova:adminURL
[DEFAULT] nova_endpoint_template = None
(StrOpt) Override service catalog lookup with template for nova endpoint e.g. http://localhost:8774/v2/%(project_id)s
[DEFAULT] quota_backups = 10
[DEFAULT] quota_consistencygroups = 10
[DEFAULT] rados_connect_timeout = -1
[DEFAULT] rbd_store_chunk_size = 4
[DEFAULT] replication_api_class = cinder.replication.api.API
[DEFAULT] smbfs_mount_options = noperm,file_mode=0775,dir_mode=0775
[DEFAULT] smbfs_used_ratio = 0.95
(FloatOpt) Percent of ACTUAL usage of the underlying volume before no new volumes can be allocated to the volume destination.
[DEFAULT] storwize_svc_stretched_cluster_partner = None
(StrOpt) If operating in stretched cluster mode, specify the name of the pool in which mirrored copies are stored. Example: "pool2"
[DEFAULT] strict_ssh_host_key_policy = False
[DEFAULT] backup_swift_catalog_info = object-store:swift:publicURL
(StrOpt) Info to match when looking for swift in the service catalog. Format is: separated values of the form <service_type>:<service_name>:<endpoint_type> - Only used if backup_swift_url is unset
[DEFAULT] volume_copy_blkio_cgroup_name = cinder-volume-copy
(StrOpt) The blkio cgroup name to be used to limit bandwidth of volume copy
[DEFAULT] volume_copy_bps_limit = 0
[DEFAULT] zfssa_initiator =
[DEFAULT] zfssa_initiator_group =
[DEFAULT] zfssa_initiator_password =
[DEFAULT] zfssa_initiator_user =
[DEFAULT] zfssa_lun_compression =
[DEFAULT] zfssa_lun_logbias =
[DEFAULT] zfssa_lun_volblocksize = 8k
(StrOpt) Block size: 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k.
[DEFAULT] zfssa_target_password =
[DEFAULT] zfssa_target_user =
[CISCO_FABRIC_EXAMPLE] cisco_fc_fabric_address =
[CISCO_FABRIC_EXAMPLE] cisco_fc_fabric_password =
[CISCO_FABRIC_EXAMPLE] cisco_fc_fabric_port = 22
[CISCO_FABRIC_EXAMPLE] cisco_fc_fabric_user =
[CISCO_FABRIC_EXAMPLE] cisco_zone_name_prefix = None
[database] db_max_retries = 20
(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.
[database] db_max_retry_interval = 10
(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
[database] db_retry_interval = 1
[database] slave_connection = None
(StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
[fc-zone-manager] cisco_sb_connector = cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
(StrOpt) Southbound connector for zoning operation
[keymgr] encryption_api_url = http://localhost:9311/v1
[keystone_authtoken] check_revocations_for_cached = False
New default values
Option: previous default → new default
[DEFAULT] backup_swift_url: http://localhost:8080/v1/AUTH_ → None
[DEFAULT] default_log_levels: amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN → amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN
[DEFAULT] default_timeout: 20 → 525600
[DEFAULT] gpfs_storage_pool: None → system
[DEFAULT] max_luns_per_storage_group: 256 → 255
[DEFAULT] vmware_task_poll_interval: 5 → 0.5
[database] connection: sqlite:///$state_path/$sqlite_db → None
[database] max_pool_size: None
[keystone_authtoken] revocation_cache_time: 300 → 10
Table 1.87. Deprecated options
Deprecated option → New option
[DEFAULT] db_backend → [database] backend
2. Compute
Table of Contents
Overview of nova.conf ................................................................................................ 163
Configure logging ....................................................................................................... 165
Configure authentication and authorization ................................................................ 165
Configure resize .......................................................................................................... 165
Database configuration ............................................................................................... 166
Configure the Oslo RPC messaging system ................................................................... 166
Configure the Compute API ........................................................................................ 169
Configure the EC2 API ................................................................................................. 172
Fibre Channel support in Compute .............................................................................. 172
Hypervisors .................................................................................................................. 172
Scheduling ................................................................................................................... 206
Cells ............................................................................................................................ 222
Conductor ................................................................................................................... 226
Example nova.conf configuration files ...................................................................... 227
Compute log files ........................................................................................................ 231
Compute sample configuration files ............................................................................. 232
New, updated and deprecated options in Juno for OpenStack Compute ...................... 271
The OpenStack Compute service is a cloud computing fabric controller, which is the main
part of an IaaS system. You can use OpenStack Compute to host and manage cloud computing systems. This section describes the OpenStack Compute configuration options.
To configure your Compute installation, you must define configuration options in these
files:
nova.conf. Contains most of the Compute configuration options. Resides in the /etc/
nova directory.
api-paste.ini. Defines Compute limits. Resides in the /etc/nova directory.
Related Image Service and Identity service management configuration files.
Overview of nova.conf
The nova.conf configuration file uses the INI file format, as explained in the section called Configuration file format [xx].
You can point a nova-* service at a particular configuration file with the --config-file parameter when you run it. This parameter inserts configuration option definitions from the specified configuration file, which might be useful for debugging or performance tuning.
For a list of configuration options, see the tables in this guide.
To learn more about the nova.conf configuration file, review the general-purpose configuration options documented in Table 2.20, Description of common configuration options [237].
Important
Do not specify quotes around Nova options.
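For example, values are written bare; quoting a value makes the quotes part of the stored value. The option shown here is one of the options listed in this guide, with an illustrative address:

```ini
[DEFAULT]
# Correct: no quotes around the value
glance_host = 10.0.0.10
# Incorrect: the quotes would become part of the value
# glance_host = "10.0.0.10"
```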
Sections
Configuration options are grouped by section. The Compute configuration file supports the
following sections:
[DEFAULT]
[baremetal]
[cells]
[conductor]
[database]
[glance]
[hyperv]
[image_file_url]
[keymgr]
[matchmaker_redis]
[matchmaker_ring]
[metrics]
[neutron]
[osapi_v3]
[rdp]
[serial_console]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
Configures version locking on the RPC (message queue) communications between the various Compute services to allow live upgrading an OpenStack installation.
[vmware]
[xenserver]
[zookeeper]
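One of the sections above, [upgrade_levels], controls RPC version locking during a rolling upgrade. A hedged sketch of pinning nova-compute RPC to the previous release; the compute option name and the icehouse level are illustrative assumptions for a Juno deployment:

```ini
[upgrade_levels]
# Keep nova-compute RPC compatible with the previous release
# until every node has been upgraded, then remove this line
compute = icehouse
```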
Configure logging
You can use the nova.conf file to configure where Compute logs events, the level of logging, and log formats.
To customize log formats for OpenStack Compute, use the configuration option settings
documented in Table 2.39, Description of logging configuration options [247].
Configure resize
Resize (or Server resize) is the ability to change the flavor of a server, thus allowing it to upscale or downscale according to user needs. For this feature to work properly, you might
need to configure some underlying virt layers.
KVM
Resize on KVM is currently implemented by transferring the images between compute nodes over SSH. For KVM, you need host names to resolve properly and password-less SSH access between your compute hosts, because one compute host must access another directly to copy the VM file across.
Cloud end users can find out how to resize a server by reading the OpenStack End User
Guide.
XenServer
To get resize to work with XenServer (and XCP), you need to establish a root trust between all hypervisor nodes and provide an /image mount point to your hypervisors' dom0.
Database configuration
You can configure OpenStack Compute to use any SQLAlchemy-compatible database. The
database name is nova. The nova-conductor service is the only service that writes to the
database. The other Compute services access the database through the nova-conductor
service.
To ensure that the database schema is current, run the following command:
# nova-manage db sync
If nova-conductor is not used, entries to the database are mostly written by the nova-scheduler service, although all services must be able to update entries in the
database.
In either case, use the configuration option settings documented in Table 2.25, Description of database configuration options [240] to configure the connection string for the nova database.
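A minimal [database] fragment in nova.conf might look like the following sketch; the controller host name and password are placeholders:

```ini
[database]
# SQLAlchemy connection string for the nova database
connection = mysql://nova:NOVA_DBPASS@controller/nova
```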
Configure RabbitMQ
OpenStack Oslo RPC uses RabbitMQ by default. Use these options to configure the RabbitMQ message system. The rpc_backend option is not required as long as RabbitMQ is the default messaging system. However, if it is included in the configuration, you must set it to nova.openstack.common.rpc.impl_kombu.
rpc_backend=nova.openstack.common.rpc.impl_kombu
You can use these additional options to configure the RabbitMQ messaging system.
You can configure messaging communication for different installation scenarios, tune
retries for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the notification_driver option to
nova.openstack.common.notifier.rpc_notifier in the nova.conf file. The default for sending usage data is sixty seconds plus a random number of seconds from zero to
sixty.
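Putting the options above together, a hedged nova.conf fragment for a RabbitMQ deployment might look like this; the host name and credentials are placeholders:

```ini
[DEFAULT]
rpc_backend = nova.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_userid = nova
rabbit_password = RABBIT_PASS
# Emit usage notifications through the RPC notifier
notification_driver = nova.openstack.common.notifier.rpc_notifier
```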
Description
[DEFAULT]
kombu_reconnect_delay = 1.0
(FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_ssl_ca_certs =
kombu_ssl_certfile =
kombu_ssl_keyfile =
kombu_ssl_version =
rabbit_ha_queues = False
rabbit_host = localhost
rabbit_hosts = $rabbit_host:$rabbit_port
rabbit_login_method = AMQPLAIN
rabbit_max_retries = 0
(IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest
rabbit_port = 5672
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_virtual_host = /
Configure Qpid
Use these options to configure the Qpid messaging system for OpenStack Oslo RPC. Qpid is
not the default messaging system, so you must enable it by setting the rpc_backend option in the nova.conf file.
rpc_backend=nova.openstack.common.rpc.impl_qpid
This critical option points the compute nodes to the Qpid broker (server). Set
qpid_hostname to the host name where the broker runs in the nova.conf file.
Note
The --qpid_hostname option accepts a host name or IP address value.
qpid_hostname=hostname.example.com
If the Qpid broker listens on a port other than the AMQP default of 5672, you must set the
qpid_port option to that value:
qpid_port=12345
If you configure the Qpid broker to require authentication, you must add a user name and
password to the configuration:
qpid_username=username
qpid_password=password
By default, TCP is used as the transport. To enable SSL, set the qpid_protocol option:
qpid_protocol=ssl
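Putting the preceding Qpid options together, a complete fragment in nova.conf might look like this (host name, port, and credentials are placeholders):

```ini
[DEFAULT]
rpc_backend = nova.openstack.common.rpc.impl_qpid
qpid_hostname = hostname.example.com
qpid_port = 12345
qpid_username = username
qpid_password = password
qpid_protocol = ssl
```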
This table lists additional options that you use to configure the Qpid messaging driver for
OpenStack Oslo RPC. These options are used infrequently.
Description of Qpid configuration options:
[DEFAULT]
qpid_heartbeat = 60
qpid_hostname = localhost
qpid_hosts = $qpid_hostname:$qpid_port
qpid_password =
qpid_port = 5672
qpid_protocol = tcp
qpid_receiver_capacity = 1
qpid_sasl_mechanisms =
qpid_tcp_nodelay = True
qpid_topology_version = 1
qpid_username =
Configure ZeroMQ
Use these options to configure the ZeroMQ messaging system for OpenStack Oslo
RPC. ZeroMQ is not the default messaging system, so you must enable it by setting the
rpc_backend option in the nova.conf file.
Description of ZeroMQ configuration options:
[DEFAULT]
rpc_zmq_bind_address = *
rpc_zmq_contexts = 1
rpc_zmq_host = localhost
rpc_zmq_ipc_dir = /var/run/openstack
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
(StrOpt) MatchMaker driver.
rpc_zmq_port = 9501
rpc_zmq_topic_backlog = None
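For example, a minimal ZeroMQ configuration might combine these options as follows. This is a sketch: the short rpc_backend alias and the MatchMakerRing class path are assumptions that should be checked against your installed oslo.messaging version:

```ini
[DEFAULT]
# Older releases spell this as nova.openstack.common.rpc.impl_zmq.
rpc_backend = zmq
rpc_zmq_host = compute1.example.com
# A ring-file based matchmaker; MatchMakerLocalhost is the default.
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker_ring.MatchMakerRing
```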
Configure messaging
Use these options to configure the RabbitMQ and Qpid messaging drivers.
Description of AMQP configuration options:
[DEFAULT]
amqp_auto_delete = False
amqp_durable_queues = False
control_exchange = openstack
default_publisher_id = None
notification_driver = []
notification_topics = notifications
transport_url = None
Description of RPC configuration options:
[DEFAULT]
matchmaker_heartbeat_freq = 300
matchmaker_heartbeat_ttl = 600
rpc_backend = rabbit
rpc_cast_timeout = 30
rpc_conn_pool_size = 30
rpc_response_timeout = 60
rpc_thread_pool_size = 64
[cells]
rpc_driver_queue_base = cells.intercell
[upgrade_levels]
baseapi = None
Define limits
To define limits, set these values:
- The HTTP method used in the API call, typically one of GET, PUT, POST, or DELETE.
- A human-readable URI that is used as a friendly description of where the limit is applied.
- A regular expression. The limit is applied to all URIs that match the regular expression and HTTP method.
- A limit value that specifies the maximum count of units before the limit takes effect.
- An interval that specifies the time frame to which the limit is applied. The interval can be SECOND, MINUTE, HOUR, or DAY.
Rate limits are applied in relative order to the HTTP method, going from least to most specific.
Default limits
Normally, you install OpenStack Compute with the following limits enabled:

HTTP method   API URI           API regular expression   Limit
POST          any URI (*)       .*                       120 per minute
POST          /servers          ^/servers                120 per minute
PUT           any URI (*)       .*                       120 per minute
GET           *changes-since*   .*changes-since.*        120 per minute
DELETE        any URI (*)       .*                       120 per minute
GET           */os-fping        ^/os-fping               12 per minute
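These defaults correspond to a limits specification in the ratelimit filter of the API paste configuration. The fragment below is a sketch: the filter factory path and the file location vary by release and distribution, so verify both against your installation before using it:

```ini
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory
# Each entry is (HTTP method, friendly URI, URI regex, limit, interval).
limits = (POST, "*", .*, 120, MINUTE);(POST, "*/servers", ^/servers, 120, MINUTE);(PUT, "*", .*, 120, MINUTE);(GET, "*changes-since*", .*changes-since.*, 120, MINUTE);(DELETE, "*", .*, 120, MINUTE);(GET, "*/os-fping", ^/os-fping, 12, MINUTE)
```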
Configuration reference
The Compute API configuration options are documented in Table 2.12, Description of API configuration options [232].
Hypervisors
OpenStack Compute supports many hypervisors, which might make it difficult for you to
choose one. Most installations use only one hypervisor. However, you can use the section
called ComputeFilter [210] and the section called ImagePropertiesFilter [212] to
schedule different hypervisors within the same installation. The following links help you
choose a hypervisor. See http://wiki.openstack.org/HypervisorSupportMatrix for a detailed
list of features and support across the hypervisors.
KVM
KVM is configured as the default hypervisor for Compute.
Note
This document contains several sections about hypervisor selection. If you are
reading this document linearly, you do not want to load the KVM module before you install nova-compute. The nova-compute service depends on qemu-kvm, which installs /lib/udev/rules.d/45-qemu-kvm.rules, which
sets the correct permissions on the /dev/kvm device node.
To enable KVM explicitly, add the following configuration options to the /etc/nova/nova.conf file:
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = kvm
The KVM hypervisor supports the following virtual machine image formats:
Raw
QEMU Copy-on-write (qcow2)
QED (QEMU Enhanced Disk)
VMware virtual machine disk format (vmdk)
This section describes how to enable KVM on your system. For more information, see the
following distribution-specific documentation:
Fedora: Getting started with virtualization from the Fedora project wiki.
Ubuntu: KVM/Installation from the Community Ubuntu documentation.
Debian: Virtualization with KVM from the Debian handbook.
Red Hat Enterprise Linux: Installing virtualization packages on an existing Red Hat Enterprise Linux system from the Red Hat Enterprise Linux Virtualization Host Configuration
and Guest Installation Guide.
openSUSE: Installing KVM from the openSUSE Virtualization with KVM manual.
SLES: Installing KVM from the SUSE Linux Enterprise Server Virtualization with KVM manual.
Enable KVM
The following sections outline how to enable KVM based hardware virtualization on different architectures and platforms. To perform these steps, you must be logged in as the root user.
1. To determine whether the svm or vmx CPU extensions are present, run this command:
# grep -E 'svm|vmx' /proc/cpuinfo
This command generates output if the CPU is capable of hardware virtualization.
2. To list the loaded kernel modules and verify that the kvm modules are loaded, run this command:
# lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, the kvm hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.
3. If the output does not show that the kvm module is loaded, run this command to load it:
# modprobe -a kvm
Run the command for your CPU. For Intel, run this command:
# modprobe -a kvm-intel
For AMD, run this command:
# modprobe -a kvm-amd
Because a KVM installation can change user group membership, you might need to log in again for changes to take effect.
If the kernel modules do not load automatically, use the procedures listed in these subsections.
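The CPU-extension check in the first step can be wrapped in a small script. The helper below takes the path of a cpuinfo file as a parameter so it can also be run against a captured copy; the function name is our own, not part of any OpenStack tool:

```shell
#!/bin/sh
# Count the lines of a cpuinfo file that advertise hardware
# virtualization (Intel VT-x appears as "vmx", AMD-V as "svm").
count_virt_flags() {
    # grep -c prints 0 and exits non-zero when nothing matches,
    # so mask the exit status to keep "set -e" scripts alive.
    grep -E -c 'svm|vmx' "$1" || true
}

# Typical use on a live system:
#   count_virt_flags /proc/cpuinfo
```

A result of 0 means the extensions are absent (or disabled in the BIOS); any larger number means at least one CPU advertises them.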
If the checks indicate that required hardware virtualization support or kernel modules are
disabled or unavailable, you must either enable this support on the system or find a system
with this support.
Note
Some systems require that you enable VT support in the system BIOS. If you believe your processor supports hardware acceleration but the previous command
did not produce output, reboot your machine, enter the system BIOS, and enable the VT option.
If KVM acceleration is not supported, configure Compute to use a different hypervisor, such
as QEMU or Xen.
These procedures help you load the kernel modules for Intel-based and AMD-based processors if they do not load automatically during KVM installation.
Intel-based processors
If your compute host is Intel-based, run these commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-intel
Add these lines to the /etc/modules file so that these modules load on reboot:
kvm
kvm-intel
AMD-based processors
If your compute host is AMD-based, run these commands as root to load the kernel modules:
# modprobe kvm
# modprobe kvm-amd
Add these lines to /etc/modules file so that these modules load on reboot:
kvm
kvm-amd
1. To determine if your POWER platform supports KVM based virtualization, run the following command:
# cat /proc/cpuinfo | grep PowerNV
If the previous command generates the following output, then your CPU supports KVM based virtualization:
platform: PowerNV
If no output is displayed, then your POWER platform does not support KVM based hardware virtualization.
2.
To list the loaded kernel modules and verify that the kvm modules are loaded, run the
following command:
# lsmod | grep kvm
If the output includes kvm_hv, the kvm hardware virtualization modules are loaded
and your kernel meets the module requirements for OpenStack Compute.
If the output does not show that the kvm module is loaded, run the following command to load it:
# modprobe -a kvm
Because a KVM installation can change user group membership, you might need to log
in again for changes to take effect.
Specify the CPU model of KVM guests
The Compute service enables you to control the guest CPU model that is exposed to KVM virtual machines. One use case is to ensure a consistent default CPU across all machines, removing reliance on variable QEMU defaults.
In libvirt, the CPU is specified by providing a base CPU model name (which is a shorthand for a set of feature flags), a set of additional feature flags, and the topology (sockets/cores/threads). The libvirt KVM driver provides a number of standard CPU model
names. These models are defined in the /usr/share/libvirt/cpu_map.xml file.
Check this file to determine which models are supported by your local installation.
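To list just the model names, the XML can be scraped with sed. The helper below takes the file path as a parameter so the same one-liner can be tried against any copy of cpu_map.xml; the function name is ours, not part of libvirt:

```shell
#!/bin/sh
# Print every <model name='...'> value from a libvirt cpu_map.xml file.
list_cpu_models() {
    sed -n "s/.*<model name='\([^']*\)'.*/\1/p" "$1"
}

# Typical use:
#   list_cpu_models /usr/share/libvirt/cpu_map.xml
```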
Two Compute configuration options in the [libvirt] group of nova.conf define
which type of CPU model is exposed to the hypervisor when using KVM: cpu_mode and
cpu_model.
The cpu_mode option can take one of the following values: none, host-passthrough,
host-model, and custom.
Custom
If your nova.conf file contains cpu_mode=custom, you can explicitly specify one of the
supported named models using the cpu_model configuration option. For example, to configure the KVM guests to expose Nehalem CPUs, your nova.conf file should contain:
[libvirt]
cpu_mode = custom
cpu_model = Nehalem
None (default for all libvirt-driven hypervisors other than KVM & QEMU)
If your nova.conf file contains cpu_mode=none, libvirt does not specify a CPU model. Instead, the hypervisor chooses the default model.
Troubleshoot KVM
Trying to launch a new virtual machine instance fails with the ERROR state, and the following error appears in the /var/log/nova/nova-compute.log file:
libvirtError: internal error no supported architecture for os type 'hvm'
This message indicates that the KVM kernel modules were not loaded.
If you cannot start VMs after installation without rebooting, the permissions might not be
set correctly. This can happen if you load the KVM module before you install nova-compute. To check whether the group is set to kvm, run:
# ls -l /dev/kvm
QEMU
From the perspective of the Compute service, the QEMU hypervisor is very similar to the
KVM hypervisor. Both are controlled through libvirt, both support the same feature set,
and all virtual machine images that are compatible with KVM are also compatible with QEMU. The main difference is that QEMU does not support native virtualization. Consequently, QEMU has worse performance than KVM and is a poor choice for a production deployment.
The typical use cases for QEMU are:
Running on older hardware that lacks virtualization support.
Running the Compute service inside of a virtual machine for development or testing purposes, where the hypervisor does not support native virtualization for guests.
To enable QEMU, add these settings to nova.conf:
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = qemu
For some operations you may also have to install the guestmount utility:
On Ubuntu:
# apt-get install guestmount
On openSUSE:
# zypper install guestfs-tools
The QEMU hypervisor supports the following virtual machine image formats:
Raw
QEMU Copy-on-write (qcow2)
VMware virtual machine disk format (vmdk)
On Red Hat Enterprise Linux or Fedora, additional SELinux and path adjustments are required:
# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
# setsebool -P virt_use_execmem on
# ln -s /usr/libexec/qemu-kvm /usr/bin/qemu-system-x86_64
# service libvirtd restart
Note
The second command, setsebool, may take a while.
Xen terminology
Xen. A hypervisor that provides the fundamental isolation between virtual machines. Xen is open source (GPLv2) and is managed by Xen.org, a cross-industry organization.
Xen is a component of many different products and projects. The hypervisor itself is very
similar across all these projects, but the way that it is managed can be different, which can
cause confusion if you're not clear which tool stack you are using. Make sure you know
what tool stack you want before you get started.
Xen Cloud Platform (XCP). An open source (GPLv2) tool stack for Xen. It is designed specifically as a platform for enterprise and cloud computing, and is well integrated with OpenStack. XCP is available both as a binary distribution, installed from an iso, and from Linux
distributions, such as xcp-xapi in Ubuntu. The current versions of XCP available in Linux distributions do not yet include all the features available in the binary distribution of XCP.
Citrix XenServer. A commercial product. It is based on XCP, and exposes the same tool
stack and management API. As an analogy, think of XenServer being based on XCP in the
way that Red Hat Enterprise Linux is based on Fedora. XenServer has a free version (which
is very similar to XCP) and paid-for versions with additional features enabled. Citrix provides
support for XenServer, but as of July 2012, they do not provide any support for XCP. For a
comparison between these products see the XCP Feature Matrix.
Both XenServer and XCP include Xen, Linux, and the primary control daemon known as
xapi.
The API shared between XCP and XenServer is called XenAPI. OpenStack usually refers to
XenAPI, to indicate that the integration works equally well on XCP and XenServer. Sometimes, a careless person will refer to XenServer specifically, but you can be reasonably confident that anything that works on XenServer will also work on the latest version of XCP.
Read the XenAPI Object Model Overview for definitions of XenAPI specific terms such as
SR, VDI, VIF and PIF.
HVM guests do not need to modify the guest operating system, which is essential when
running Windows.
In OpenStack, customer VMs may run in either PV or HVM mode. However, the OpenStack
domU (that's the one running nova-compute) must be running in PV mode.
Management network - RabbitMQ, MySQL, and other services. Please note that the
VM images are downloaded by the XenAPI plug-ins, so make sure that the images can
be downloaded through the management network. It usually means binding those services to the management interface.
Tenant network - controlled by nova-network. The parameters of this network depend
on the networking model selected (Flat, Flat DHCP, VLAN).
Public network - floating IPs, public API endpoints.
The networks shown here must be connected to the corresponding physical networks
within the data center. In the simplest case, three individual physical network cards could
be used. It is also possible to use VLANs to separate these networks. Please note that
the selected configuration must be in line with the networking model selected for the
cloud. (In case of VLAN networking, the physical channels have to be able to forward the
tagged traffic.)
XenAPI pools
The host-aggregates feature enables you to create pools of XenServer hosts to enable live
migration when using shared storage. However, you cannot configure shared storage.
Further reading
Here are some of the resources available to learn more about Xen:
Citrix XenServer official documentation: http://docs.vmd.citrix.com/XenServer.
What is Xen? by Xen.org: http://xen.org/files/Marketing/WhatisXen.pdf.
Xen Hypervisor project: http://xen.org/products/xenhyp.html.
XCP project: http://xen.org/products/cloudxen.html.
Further XenServer and OpenStack information: http://wiki.openstack.org/XenServer.
Note
Xen is a type 1 hypervisor: When your server starts, Xen is the first software
that runs. Consequently, you must install XenServer or XCP before you install
the operating system where you want to run OpenStack code. The OpenStack
services then run in a virtual machine that you install on top of XenServer.
Before you can install your system, decide whether to install a free or paid edition of Citrix
XenServer or Xen Cloud Platform from Xen.org. Download the software from these locations:
http://www.citrix.com/XenServer/download
http://www.xen.org/download/xcp/index.html
When you install many servers, you might find it easier to perform PXE boot installations
of XenServer or XCP. You can also package any post-installation changes that you want to
make to your XenServer by creating your own XenServer supplemental pack.
You can also install the xcp-xenapi package on Debian-based distributions to get XCP. However, this is not as mature or feature-complete as the above distributions. This modifies your boot loader to first boot Xen and boot your existing OS on top of Xen as Dom0. The xapi daemon runs in Dom0. Find more details at http://wiki.xen.org/wiki/Project_Kronos.
Important
Make sure you use the EXT type of storage repository (SR). Features that require access to VHD files (such as copy on write, snapshot and migration) do
not work when you use the LVM SR. Storage repository (SR) is a XenAPI-specific
term relating to the physical storage where virtual disks are stored.
On the XenServer/XCP installation screen, choose the XenDesktop Optimized
option. If you use an answer file, make sure you use srtype="ext" in the installation tag of the answer file.
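The srtype attribute goes on the installation element of the answer file. The fragment below is a sketch; element names other than installation and srtype follow the XenServer answer-file format and should be checked against the documentation for your XenServer version:

```xml
<installation srtype="ext">
  <primary-disk>sda</primary-disk>
  <!-- remaining answer-file elements omitted -->
</installation>
```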
Post-installation steps
Complete these steps to install OpenStack in your XenServer system:
1.
For resize and migrate functionality, complete the changes described in the Configure resize section in the OpenStack Configuration Reference.
2.
Install the VIF isolation rules to help prevent MAC and IP address spoofing.
3.
Install the XenAPI plug-ins. See the following section.
4.
To support AMI type images, set up the /boot/guest directory in Dom0. See the following section.
5.
To support resize and migration, set up an SSH trust relationship between your XenServer hosts, and ensure /images is properly set up.
6.
Create a Paravirtualized virtual machine that can run the OpenStack compute code.
7.
Install and configure nova-compute in the above virtual machine.
For more information, see how DevStack performs the last three steps for developer deployments. For more information about DevStack, see Getting Started With XenServer and Devstack (https://github.com/openstack-dev/devstack/blob/master/tools/xen/README.md). For more information about the first step, see Multi Tenancy Networking Protections in XenServer (https://github.com/openstack/nova/blob/master/plugins/xenserver/doc/networking.rst). For information about how to install the XenAPI plug-ins, see XenAPI README (https://github.com/openstack/nova/blob/master/plugins/xenserver/xenapi/README).
When you use Xen as the hypervisor for OpenStack Compute, you can install a Python
script (or any executable) on the host side, and call that through the XenAPI. These scripts
are called plug-ins. The XenAPI plug-ins live in the nova code repository. These plug-ins have
to be copied to the Dom0 for the hypervisor, to the appropriate directory, where xapi can
find them. There are several options for the installation. The important thing is to ensure that the version of the plug-ins is in line with the nova installation, by only installing plug-ins from a matching nova repository.
Manually install the plug-in
1.
Create temporary files and directories:
$ NOVA_ZIPBALL=$(mktemp)
$ NOVA_SOURCES=$(mktemp -d)
2.
Get the source from GitHub. The example assumes the master branch is used. Amend the URL to match the version being used:
$ wget -qO "$NOVA_ZIPBALL" https://github.com/openstack/nova/archive/master.zip
$ unzip "$NOVA_ZIPBALL" -d "$NOVA_SOURCES"
(Alternatively) To use the official Ubuntu packages, use the following commands to
get the nova code base:
$ ( cd $NOVA_SOURCES && apt-get source python-nova --download-only )
$ ( cd $NOVA_SOURCES && for ARCHIVE in *.tar.gz; do tar -xzf $ARCHIVE;
done )
3.
Copy the plug-ins to the hypervisor:
$ PLUGINPATH=$(find $NOVA_SOURCES -path '*/xapi.d/plugins' -type d -print)
$ tar -czf - -C "$PLUGINPATH" ./ | ssh root@xenserver tar -xozf - -C /etc/xapi.d/plugins
4.
Remove the temporary files and directories:
$ rm "$NOVA_ZIPBALL"
$ rm -rf "$NOVA_SOURCES"
Follow these steps to produce a supplemental pack from the nova sources, and package it
as a XenServer supplemental pack.
1.
Create RPM packages. Given the nova sources, use one of the methods in the section called Manually install the plug-in [184]:
$ cd nova/plugins/xenserver/xenapi/contrib
$ ./build-rpm.sh
2.
Pack the RPM packages into a Supplemental Pack, using the XenServer DDK (the following command should be issued on the XenServer DDK virtual appliance, after the produced rpm file has been copied over):
/usr/bin/build-supplemental-pack.sh \
--output=output_directory \
--vendor-code=novaplugin \
--vendor-name=openstack \
--label=novaplugins \
--text="nova plugins" \
--version=0 \
full_path_to_rpmfile
This command produces an .iso file in the output directory specified. Copy that file to
the hypervisor.
3.
Install the Supplemental Pack. Log in to the hypervisor and issue:
# xe-install-supplemental-pack path_to_isofile
To support AMI type images in your OpenStack installation, you must create a /boot/guest directory inside Dom0. The OpenStack VM extracts the kernel and ramdisk from the AKI and ARI images and puts them in this location.
OpenStack maintains the contents of this directory and its size should not increase during
normal operation. However, in case of power failures or accidental shutdowns, some files
might be left over. To prevent these files from filling the Dom0 disk, set up this directory as
a symlink that points to a subdirectory of the local SR.
Run these commands in Dom0 to achieve this setup:
#
#
#
#
# ln -s "$IMG_DIR" /images
1.
Create an ISO-typed SR, such as an NFS ISO library, for instance. For this, using XenCenter is a simple method. You must export an NFS volume from a remote NFS server. Make sure it is exported in read-write mode.
2.
On the compute host, find and record the UUID of this ISO SR:
# xe host-list
3.
Set the UUID and configuration. Even if an NFS mount point is not local, you must specify local-storage-iso:
# xe sr-param-set uuid=[iso sr uuid] other-config:i18n-key=local-storage-iso
4.
Make sure the host-UUID from xe pbd-list equals the UUID of the host you found previously:
# xe pbd-list sr-uuid=[iso sr uuid]
5.
You can now add images through the OpenStack Image Service with disk-format=iso, and boot them in OpenStack Compute:
$ glance image-create --name fedora_iso --disk-format iso --container-format bare < Fedora-16-x86_64-netinst.iso
These connection details are used by the OpenStack Compute service to contact your hypervisor and are the same details you use to connect XenCenter, the XenServer management console, to your XenServer or XCP box.
Note
The xenapi_connection_url is generally the management network IP address of the XenServer. Though it is possible to use the internal network IP address (169.254.0.1) to contact XenAPI, this does not allow live migration between hosts. Other functionality, such as host aggregates, also does not work.
It is possible to manage Xen using libvirt, though this is not well-tested or supported. To experiment with Xen through libvirt, add the following configuration options to /etc/nova/nova.conf:
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = xen
Agent
If you don't have the guest agent on your VMs, it takes a long time for nova to decide the
VM has successfully started. Generally a large timeout is required for Windows instances,
but you may want to tweak agent_version_timeout.
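For example, to lengthen the timeout for slow-booting images, you might set the following. The value is illustrative, and the option group varies by release (later releases move the XenAPI agent options into a [xenserver] section), so check the agent options table for your installation:

```ini
[DEFAULT]
# Seconds to wait for the guest agent to report its version
# before assuming it is absent.
agent_version_timeout = 300
```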
Firewall
If using nova-network, iptables is supported:
firewall_driver = nova.virt.firewall.IptablesFirewallDriver
Storage
You can specify which Storage Repository to use with nova by setting the following flag. The default is to use the local-storage set up by the default installer:
sr_matching_filter = "other-config:i18n-key=local-storage"
Another good alternative is to use the "default" storage (for example if you have attached
NFS or any other shared storage):
sr_matching_filter = "default-sr:true"
Note
To use a XenServer pool, you must create the pool by using the Host Aggregates feature.
LXC (Linux containers)
Note
Some OpenStack Compute features might be missing when running with LXC as
the hypervisor. See the hypervisor support matrix for details.
To enable LXC, ensure the following options are set in /etc/nova/nova.conf on all
hosts running the nova-compute service.
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = lxc
VMware vSphere
Introduction
OpenStack Compute supports the VMware vSphere product family and enables access to
advanced features such as vMotion, High Availability, and Dynamic Resource Scheduling
(DRS).
This section describes how to configure VMware-based virtual machine images for launch.
vSphere versions 4.1 and later are supported.
The VMware vCenter driver enables the nova-compute service to communicate with a
VMware vCenter server that manages one or more ESX host clusters. The driver aggregates
the ESX hosts in each cluster to present one large hypervisor entity for each cluster to the
Compute scheduler. Because individual ESX hosts are not exposed to the scheduler, Compute schedules to the granularity of clusters and vCenter uses DRS to select the actual ESX
host within the cluster. When a virtual machine makes its way into a vCenter cluster, it can
use all vSphere features.
The following sections describe how to configure the VMware vCenter driver.
High-level architecture
The following diagram shows a high-level view of the VMware driver architecture:
As the figure shows, the OpenStack Compute Scheduler sees three hypervisors that each
correspond to a cluster in vCenter. Nova-compute contains the VMware driver. You can
run with multiple nova-compute services. While Compute schedules at the granularity of
a cluster, the VMware driver inside nova-compute interacts with the vCenter APIs to select an appropriate ESX host within the cluster. Internally, vCenter uses DRS for placement.
The VMware vCenter driver also interacts with the OpenStack Image Service to copy VMDK
images from the Image Service back end store. The dotted line in the figure represents
VMDK images being copied from the OpenStack Image Service to the vSphere data store.
VMDK images are cached in the data store so the copy operation is only required the first
time that the VMDK image is used.
After OpenStack boots a VM into a vSphere cluster, the VM becomes visible in vCenter and
can access vSphere advanced features. At the same time, the VM is visible in the OpenStack
dashboard and you can manage it as you would any other OpenStack VM. You can perform advanced vSphere operations in vCenter while you configure OpenStack resources
such as VMs through the OpenStack dashboard.
The figure does not show how networking fits into the architecture. Both nova-network
and the OpenStack Networking Service are supported. For details, see the section called
Networking with VMware vSphere [197].
Configuration overview
To get started with the VMware vCenter driver, complete the following high-level steps:
1. Configure vCenter. See the section called Prerequisites and limitations [190].
2. Configure the VMware vCenter driver in the nova.conf file. See the section called
VMware vCenter driver [192].
3. Load desired VMDK images into the OpenStack Image Service. See the section called
Images with VMware vSphere [194].
4. Configure networking with either nova-network or the OpenStack Networking Service. See the section called Networking with VMware vSphere [197].
Note
The NSX plug-in is the only plug-in that is validated for vSphere.
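Steps 1 and 2 typically produce a nova.conf along these lines. This is a sketch: the host, credentials, cluster name, and regex are placeholders, and the full option list is in the vmware options table:

```ini
[DEFAULT]
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
# vCenter server that manages the ESX clusters.
host_ip = 192.168.1.100
host_username = administrator
host_password = VC_PASS
# One line per cluster to expose to the Compute scheduler.
cluster_name = cluster1
# Restrict Compute to data stores whose names match this regex.
datastore_regex = nas.*
```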
7. VNC. The port range 5900 - 6105 (inclusive) is automatically enabled for VNC connections on every ESX host in all clusters under OpenStack control. For more information
Note
In addition to the default VNC port numbers (5900 to 6000) specified in the
above document, the following ports are also used: 6101, 6102, and 6105.
You must modify the ESXi firewall configuration to allow the VNC ports. Additionally, for
the firewall modifications to persist after a reboot, you must create a custom vSphere Installation Bundle (VIB) which is then installed onto the running ESXi host or added to a
custom image profile used to install ESXi hosts. For details about how to create a VIB for
persisting the firewall configuration modifications, see http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2007381.
8. Ephemeral Disks. Ephemeral disks are not supported. A future major release will address
this limitation.
9. Injection of SSH keys into compute instances hosted by vCenter is not currently supported.
10.To use multiple vCenter installations with OpenStack, each vCenter must be assigned to
a separate availability zone. This is required as the OpenStack Block Storage VMDK driver does not currently work across multiple vCenter installations.
The vCenter user that the driver connects as must have at least the following privileges:
Network: Assign network
Resource: Assign virtual machine to resource pool; Migrate powered off virtual machine; Migrate powered on virtual machine
Virtual Machine: Configuration; Interaction; Inventory; Provisioning; Snapshot management
Sessions
vApp: Export; Import
Note
vSphere vCenter versions 5.0 and earlier: You must specify the location of the WSDL files by adding the wsdl_location=http://127.0.0.1:8080/vmware/SDK/wsdl/vim25/vimService.wsdl setting to the above configuration. For more information, see vSphere 5.0 and earlier additional set up.
Clusters: The vCenter driver can support multiple clusters. To use more than
one cluster, simply add multiple cluster_name lines in nova.conf with
the appropriate cluster name. Clusters and data stores used by the vCenter
driver should not contain any VMs other than those created by the driver.
Data stores: The datastore_regex setting specifies the data stores to use
with Compute. For example, datastore_regex="nas.*" selects all the
data stores that have a name starting with "nas". If this line is omitted, Compute uses the first data store returned by the vSphere API. It is recommended
not to use this field and instead remove data stores that are not intended for
OpenStack.
Reserved host memory: The reserved_host_memory_mb option value is
512MB by default. However, VMware recommends that you set this option
to 0MB because the vCenter driver reports the effective memory available to
the virtual machines.
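Putting the settings above together, a minimal [vmware] section of nova.conf might look like the following sketch. The host address, credentials, and cluster names are placeholders, not values from this guide:

```ini
[vmware]
host_ip=VCENTER_IP
host_username=VCENTER_USER
host_password=VCENTER_PASSWORD
# One cluster_name line per cluster managed by this nova-compute service.
cluster_name=cluster1
cluster_name=cluster2
# Optional; if omitted, the first data store returned by the vSphere API is used.
datastore_regex=nas.*
# 0 is recommended because the driver reports the effective memory itself.
reserved_host_memory_mb=0
```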
A nova-compute service can control one or more clusters containing multiple ESX hosts,
making nova-compute a critical service from a high availability perspective. Because the
host that runs nova-compute can fail while the vCenter and ESX still run, you must protect the nova-compute service against host failures.
Note
Many nova.conf options are relevant to libvirt but do not apply to this driver.
You must complete additional configuration for environments that use vSphere 5.0 and
earlier. See the section called vSphere 5.0 and earlier additional set up [198].
sparse (Monolithic Sparse)
thin
preallocated (default)
The vmware_disktype property is set when an image is loaded into the OpenStack Image Service. For example, the following command creates a Monolithic Sparse image by setting vmware_disktype to sparse:
$ glance image-create --name "ubuntu-sparse" --disk-format vmdk \
--container-format bare \
--property vmware_disktype="sparse" \
--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-sparse.vmdk
Note
Specifying thin does not provide any advantage over preallocated with
the current version of the driver. Future versions might restore the thin properties of the disk after it is downloaded to a vSphere data store.
A F T - Ju no -D R A F T - Ju no -D R A F T - Ju no -D RA F T - Ju no -D RA F T - Ju no -D RA F T - Ju no -
October 7, 2014
juno
For example, the following command can be used to convert a qcow2 Ubuntu Trusty cloud
image:
$ qemu-img convert -f qcow2 ~/Downloads/trusty-server-cloudimg-amd64-disk1.img \
  -O vmdk trusty-server-cloudimg-amd64-disk1.vmdk
VMDK disks converted through qemu-img are always monolithic sparse VMDK disks with
an IDE adapter type. Using the previous example of the Ubuntu Trusty image after the qemu-img conversion, the command to upload the VMDK disk should be something like:
$ glance image-create --name trusty-cloud --is-public False \
--container-format bare --disk-format vmdk \
--property vmware_disktype="sparse" \
--property vmware_adaptertype="ide" < \
trusty-server-cloudimg-amd64-disk1.vmdk
Note that the vmware_disktype is set to sparse and the vmware_adaptertype is set
to ide in the previous command.
If the image did not come from the qemu-img utility, the vmware_disktype and
vmware_adaptertype might be different. To determine the image adapter type from an
image file, use the following command and look for the ddb.adapterType= line:
$ head -20 <vmdk file name>
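As an illustration, the adapter type sits in the plain-text descriptor at the start of the VMDK file. The sketch below fabricates a two-line descriptor fragment (the file and its contents are made up for demonstration) and extracts the line the same way you would scan real `head -20` output:

```shell
# Write an illustrative VMDK-style descriptor fragment (not a real disk image).
printf 'createType="monolithicSparse"\nddb.adapterType = "ide"\n' > /tmp/fake-descriptor.vmdk

# Pick out the adapter type line, as 'head -20 <vmdk file name>' would reveal it.
grep 'ddb.adapterType' /tmp/fake-descriptor.vmdk
# prints: ddb.adapterType = "ide"
```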
Assuming a preallocated disk type and an iSCSI lsiLogic adapter type, the following command uploads the VMDK disk:
$ glance image-create --name "ubuntu-thick-scsi" --disk-format vmdk \
--container-format bare \
--property vmware_adaptertype="lsiLogic" \
--property vmware_disktype="preallocated" \
--property vmware_ostype="ubuntu64Guest" < ubuntuLTS-flat.vmdk
Currently, OS boot VMDK disks with an IDE adapter type cannot be attached to a virtual SCSI controller, and likewise disks with one of the SCSI adapter types (such as busLogic or lsiLogic) cannot be attached to the IDE controller. Therefore, as the previous examples show, it is important to set the vmware_adaptertype property correctly. The default adapter type is lsiLogic, which is SCSI, so you can omit the vmware_adaptertype property if you are certain that the image adapter type is lsiLogic.
Optimize images
Monolithic Sparse disks are considerably faster to download but have the overhead of
an additional conversion step. When imported into ESX, sparse disks get converted to
VMFS flat thin provisioned disks. The download and conversion steps only affect the first
launched instance that uses the sparse disk image. The converted disk image is cached, so
subsequent instances that use this disk image can simply use the cached version.
To avoid the conversion step (at the cost of longer download times) consider converting
sparse disks to thin provisioned or preallocated disks before loading them into the OpenStack Image Service.
Use one of the following tools to pre-convert sparse disks.
vSphere CLI tools
    Note that the vifs tool from the same CLI package can be used to upload the disk to be converted. The vifs tool can also be used to download the converted disk if necessary.
vmkfstools directly on the ESX host
vmware-vdiskmanager
    vmware-vdiskmanager is a utility that comes bundled with VMware Fusion and VMware Workstation. The following example converts a sparse disk to preallocated format:
    '/Applications/VMware Fusion.app/Contents/Library/vmware-vdiskmanager' -r sparse.vmdk -t 4 converted.vmdk
Image handling
The ESX hypervisor requires a copy of the VMDK file in order to boot up a virtual machine.
As a result, the vCenter OpenStack Compute driver must download the VMDK via HTTP
from the OpenStack Image Service to a data store that is visible to the hypervisor. To optimize this process, the first time a VMDK file is used, it gets cached in the data store. Subsequent virtual machines that need the VMDK use the cached version and don't have to copy
the file again from the OpenStack Image Service.
Even with a cached VMDK, there is still a copy operation from the cache location to the
hypervisor file directory in the shared data store. To avoid this copy, boot the image in
linked_clone mode. To learn how to enable this mode, see the section called Configuration reference [199].
Note
You can also use the vmware_linked_clone property in the OpenStack Image Service to override the linked_clone mode on a per-image basis.
You can automatically purge unused images after a specified period of time. To configure
this action, set these options in the DEFAULT section in the nova.conf file:
remove_unused_base_images
    Set this parameter to True to specify that unused images should be removed after the duration specified in the remove_unused_original_minimum_age_seconds parameter. The default is True.
remove_unused_original_minimum_age_seconds
    Specifies the duration in seconds after which an unused image is purged from the cache. The default is 86400 (24 hours).
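Expressed as a nova.conf fragment, the defaults described above look like this:

```ini
[DEFAULT]
remove_unused_base_images=True
remove_unused_original_minimum_age_seconds=86400
```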
Note
When configuring the port binding for this port group in vCenter, specify
ephemeral for the port binding type. For more information, see Choosing a
port binding type in ESX/ESXi in the VMware Knowledge Base.
If you are using the nova-network service with the VlanManager: Set the vlan_interface configuration option to match the ESX host interface that handles VLAN-tagged VM traffic. OpenStack Compute automatically creates the corresponding port groups.
If you are using the OpenStack Networking Service: Before provisioning VMs, create
a port group with the same name as the vmware.integration_bridge value in
nova.conf (default is br-int). All VM NICs are attached to this port group for management by the OpenStack Networking plug-in.
Set the VMWAREAPI_IP shell variable to the IP address for your vCenter or ESXi host
from where you plan to mirror files. For example:
$ export VMWAREAPI_IP=<your_vsphere_host_ip>
Use your OS-specific tools to install a command-line tool that can download files, such as wget.
$ wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vimService.wsdl
$ wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim.wsdl
$ wget --no-check-certificate https://$VMWAREAPI_IP/sdk/core-types.xsd
$ wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-messagetypes.xsd
$ wget --no-check-certificate https://$VMWAREAPI_IP/sdk/query-types.xsd
$ wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim-messagetypes.xsd
$ wget --no-check-certificate https://$VMWAREAPI_IP/sdk/vim-types.xsd
$ wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-messagetypes.xsd
$ wget --no-check-certificate https://$VMWAREAPI_IP/sdk/reflect-types.xsd
Now that the files are locally present, tell the driver to look for the SOAP service WSDLs
in the local file system and not on the remote vSphere server. Add the following setting to the nova.conf file for your nova-compute node:
[vmware]
wsdl_location=file:///opt/stack/vmware/wsdl/5.0/vimService.wsdl
Configuration reference
To customize the VMware driver, use the configuration option settings documented in Table 2.58, Description of VMware configuration options [258].
Windows platform with the Hyper-V role enabled. The necessary Python components as well as the nova-compute service are installed directly onto the Windows platform. Windows Clustering Services are not needed for functionality within the OpenStack infrastructure. The use of the Windows Server 2012 platform is recommended for the best experience and is the platform for active development. The following Windows platforms have been tested as compute nodes:
Windows Server 2008r2
Both Server and Server Core with the Hyper-V role enabled (Shared Nothing Live migration is not supported using 2008r2)
Windows Server 2012
Server and Core (with the Hyper-V role enabled), and Hyper-V Server
Hyper-V configuration
The following sections discuss how to prepare the Windows Hyper-V node for operation
as an OpenStack compute node. Unless stated otherwise, any configuration information
should work for both the Windows 2008r2 and 2012 platforms.
Local Storage Considerations
The Hyper-V compute node needs to have ample storage for storing the virtual machine images running on the compute nodes. You may use a single volume for all, or partition it into an OS volume and VM volume. It is up to the individual deploying to decide.
Configure NTP
Network time services must be configured to ensure proper operation of the Hyper-V compute node. To set network time on your Hyper-V host you must run the following commands:
C:\>net stop w32time
C:\>w32tm /config /manualpeerlist:pool.ntp.org,0x8 /syncfromflags:MANUAL
C:\>net start w32time
PS C:\>Enable-VMMigration
PS C:\>Set-VMMigrationNetwork IP_ADDRESS
PS C:\>Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
Note
Replace IP_ADDRESS with the address of the interface that will provide the virtual switching for nova-network.
Additional Reading
Here's an article that clarifies the various live migration options in Hyper-V:
http://ariessysadmin.blogspot.ro/2012/04/hyper-v-live-migration-of-windows.html
Python Requirements
Python
Python 2.7.3 must be installed prior to installing the OpenStack Compute Driver on the Hyper-V server. Download and then install the MSI for Windows from here:
http://www.python.org/ftp/python/2.7.3/python-2.7.3.msi
Install the MSI accepting the default options.
The installation will put python in C:/python27.
Setuptools
You will require pip to install the necessary Python module dependencies. The installer will install under the C:\python27 directory structure. Setuptools for Python 2.7 for Windows can be downloaded from here:
http://pypi.python.org/packages/2.7/s/setuptools/setuptools-0.6c11.win32-py2.7.exe
Python Dependencies
You must download and manually install the following packages on the Compute node:
MySQL-python
http://codegood.com/download/10/
pywin32
Download and run the installer from the following location
http://sourceforge.net/projects/pywin32/files/pywin32/Build%20217/
pywin32-217.win32-py2.7.exe
greenlet
Select the link below:
http://www.lfd.uci.edu/~gohlke/pythonlibs/
You must scroll to the greenlet section for the following file: greenlet-0.4.0.win32-py2.7.exe
Click on the file, to initiate the download. Once the download is complete, run the installer.
You must install the following Python packages through easy_install or pip. Run the following command, replacing PACKAGE_NAME with each of the packages listed below:
C:\>c:\Python27\Scripts\pip.exe install PACKAGE_NAME
amqplib
anyjson
distribute
eventlet
httplib2
iso8601
jsonschema
kombu
netaddr
paste
paste-deploy
prettytable
python-cinderclient
python-glanceclient
python-keystoneclient
repoze.lru
routes
sqlalchemy
simplejson
warlock
webob
wmi
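As a convenience, the per-package installs can be issued in a single cmd.exe loop. This is only a sketch of the same pip invocation shown above, applied to each listed package; adjust the pip path if Python is installed elsewhere:

```bat
C:\>for %p in (amqplib anyjson distribute eventlet httplib2 iso8601 jsonschema kombu netaddr paste paste-deploy prettytable python-cinderclient python-glanceclient python-keystoneclient repoze.lru routes sqlalchemy simplejson warlock webob wmi) do c:\Python27\Scripts\pip.exe install %p
```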
Install Nova-compute
Using git on Windows to retrieve source
Git can be used to download the necessary source code. The installer to run Git on Windows
can be downloaded here:
http://code.google.com/p/msysgit/downloads/list?q=full+installer+official+git
Download the latest installer. Once the download is complete double click the installer and
follow the prompts in the installation wizard. The default should be acceptable for the
needs of the document.
Once installed you may run the following to clone the Nova code.
C:\>git.exe clone https://github.com/openstack/nova.git
Configure Nova.conf
The nova.conf file must be placed in C:\etc\nova for running OpenStack on Hyper-V.
Below is a sample nova.conf for Windows:
[DEFAULT]
verbose=true
force_raw_images=false
auth_strategy=keystone
fake_network=true
vswitch_name=openstack-br
logdir=c:\openstack\
state_path=c:\openstack\
lock_path=c:\openstack\
instances_path=e:\Hyper-V\instances
policy_file=C:\Program Files (x86)\OpenStack\nova\etc\nova\policy.json
api_paste_config=c:\openstack\nova\etc\nova\api-paste.ini
rabbit_host=IP_ADDRESS
image_service=nova.image.glance.GlanceImageService
instances_shared_storage=false
limit_cpu_features=true
compute_driver=nova.virt.hyperv.driver.HyperVDriver
volume_api_class=nova.volume.cinder.API
[glance]
api_servers=IP_ADDRESS:9292
[database]
connection=mysql://nova:passwd@IP_ADDRESS/nova
C:\>glance image-create --name "VM_IMAGE_NAME" --is-public False --container-format bare --disk-format vhd
Baremetal driver
The baremetal driver is a hypervisor driver for OpenStack Nova Compute. Within the OpenStack framework, it has the same role as the drivers for other hypervisors (libvirt, xen, etc),
and yet it is presently unique in that the hardware is not virtualized - there is no hypervisor
between the tenants and the physical hardware. It exposes hardware through the OpenStack APIs, using pluggable sub-drivers to deliver machine imaging (PXE) and power control
(IPMI). With this, provisioning and management of physical hardware is accomplished by
using common cloud APIs and tools, such as the Orchestration module (heat) or salt-cloud.
However, due to this unique situation, using the baremetal driver requires some additional
preparation of its environment, the details of which are beyond the scope of this guide.
Note
Some OpenStack Compute features are not implemented by the baremetal hypervisor driver. See the hypervisor support matrix for details.
For the Baremetal driver to be loaded and function properly, ensure that the following options are set in /etc/nova/nova.conf on your nova-compute hosts.
[DEFAULT]
compute_driver=nova.virt.baremetal.driver.BareMetalDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
scheduler_host_manager=nova.scheduler.baremetal_host_manager.BaremetalHostManager
ram_allocation_ratio=1.0
reserved_host_memory_mb=0
Many configuration options are specific to the Baremetal driver. Also, some additional
steps are required, such as building the baremetal deploy ramdisk. See the main wiki page
for details and implementation suggestions.
To customize the Baremetal driver, use the configuration option settings documented in Table 2.17, Description of baremetal configuration options [235].
Scheduling
Compute uses the nova-scheduler service to determine how to dispatch compute and
volume requests. For example, the nova-scheduler service determines on which host a
VM should launch. In the context of filters, the term host means a physical node that has a
nova-compute service running on it. You can configure the scheduler through a variety of
options.
Compute is configured with the following default scheduler options in the /etc/nova/nova.conf file:
scheduler_driver_task_period = 60
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_default_filters = RetryFilter, AvailabilityZoneFilter, RamFilter,
ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter,
ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter
Note
Do not configure service_down_time to be much smaller than
scheduler_driver_task_period; otherwise, hosts appear to be dead
while the host list is being cached.
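For example, with the default task period shown above, the following nova.conf fragment keeps the two values aligned (60 is also the documented default for service_down_time):

```ini
[DEFAULT]
scheduler_driver_task_period = 60
service_down_time = 60
```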
For information about the volume scheduler, see the Block Storage section of OpenStack
Cloud Administrator Guide.
Filter scheduler
The filter scheduler (nova.scheduler.filter_scheduler.FilterScheduler) is
the default scheduler for scheduling virtual machine instances. It supports filtering and
weighting to make informed decisions on where a new instance should be created.
Filters
When the filter scheduler receives a request for a resource, it first applies filters to determine which hosts are eligible for consideration when dispatching a resource. Filters are binary: either a host is accepted by the filter, or it is rejected. Hosts that are accepted by the
filter are then processed by a different algorithm to decide which hosts to use for that request, described in the Weights section.
Figure 2.2. Filtering
The scheduler_available_filters configuration option can be specified multiple times. For example, if you implemented your own custom filter in Python called myfilter.MyFilter and you wanted to use both the built-in filters and your custom filter, your nova.conf file would contain:
scheduler_available_filters = nova.scheduler.filters.all_filters
scheduler_available_filters = myfilter.MyFilter
AggregateCoreFilter
Filters host by CPU core numbers with a per-aggregate cpu_allocation_ratio value. If
the per-aggregate value is not found, the value falls back to the global setting. If the host
is in more than one aggregate and more than one value is found, the minimum value will
be used. For information about how to use this filter, see the section called Host aggregates [219]. See also the section called CoreFilter [210].
AggregateDiskFilter
Filters host by disk allocation with a per-aggregate disk_allocation_ratio value. If
the per-aggregate value is not found, the value falls back to the global setting. If the host
is in more than one aggregate and more than one value is found, the minimum value will
be used. For information about how to use this filter, see the section called Host aggregates [219]. See also the section called DiskFilter [211].
AggregateImagePropertiesIsolation
Matches properties defined in an image's metadata against those of aggregates to determine host matches:
If a host belongs to an aggregate and the aggregate defines one or more metadata that
matches an image's properties, that host is a candidate to boot the image's instance.
If a host does not belong to any aggregate, it can boot instances from all images.
For example, the following aggregate myWinAgg has the Windows operating system as
metadata (named 'windows'):
$ nova aggregate-details MyWinAgg
+----+----------+-------------------+------------+--------------+
| Id | Name     | Availability Zone | Hosts      | Metadata     |
+----+----------+-------------------+------------+--------------+
| 1  | MyWinAgg | None              | 'sf-devel' | 'os=windows' |
+----+----------+-------------------+------------+--------------+
In this example, because the following Win-2012 image has the windows property, it boots
on the sf-devel host (all other filters being equal):
$ glance image-show Win-2012
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| Property 'os'    | windows                              |
| checksum         | f8a2eeee2dc65b3d9b6e63678955bd83     |
| container_format | ami                                  |
| created_at       | 2013-11-14T13:24:25                  |
| ...              |                                      |
+------------------+--------------------------------------+
You can configure the AggregateImagePropertiesIsolation filter by using the following options in the nova.conf file:
# Considers only keys matching the given namespace (string).
aggregate_image_properties_isolation_namespace = <None>
# Separator used between the namespace and keys (string).
aggregate_image_properties_isolation_separator = .
AggregateInstanceExtraSpecsFilter
Matches properties defined in extra specs for an instance type against admin-defined properties on a host aggregate. Works with specifications that are scoped with aggregate_instance_extra_specs. For backward compatibility, it also works with non-scoped specifications; this is highly discouraged because it conflicts with the ComputeCapabilitiesFilter filter when you enable both filters. For information about how to use this filter, see the host aggregates section.
AggregateIoOpsFilter
Filters host by disk allocation with a per-aggregate max_io_ops_per_host value. If the
per-aggregate value is not found, the value falls back to the global setting. If the host is
in more than one aggregate and more than one value is found, the minimum value will
be used. For information about how to use this filter, see the section called Host aggregates [219]. See also the section called IoOpsFilter [213].
AggregateMultiTenancyIsolation
Isolates tenants to specific host aggregates. If a host is in an aggregate that has the
filter_tenant_id metadata key, the host creates instances from only that tenant or
list of tenants. A host can be in different aggregates. If a host does not belong to an aggregate with the metadata key, the host can create instances from all tenants.
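For example, a hypothetical aggregate could be restricted to a single tenant as follows; the aggregate name, host name, and TENANT_ID are placeholders:

```console
$ nova aggregate-create dedicated-aggregate
$ nova aggregate-set-metadata dedicated-aggregate filter_tenant_id=TENANT_ID
$ nova aggregate-add-host dedicated-aggregate compute1
```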
AggregateNumInstancesFilter
Filters host by number of instances with a per-aggregate max_instances_per_host value. If the per-aggregate value is not found, the value falls back to the global setting. If the
host is in more than one aggregate and thus more than one value is found, the minimum
value will be used. For information about how to use this filter, see the section called Host
aggregates [219]. See also the section called NumInstancesFilter [214].
AggregateRamFilter
Filters host by RAM allocation of instances with a per-aggregate
ram_allocation_ratio value. If the per-aggregate value is not found, the value falls
back to the global setting. If the host is in more than one aggregate and thus more than
one value is found, the minimum value will be used. For information about how to use this
filter, see the section called Host aggregates [219]. See also the section called RamFilter [214].
AggregateTypeAffinityFilter
Filters host by per-aggregate instance_type value. For information about how to use
this filter, see the section called Host aggregates [219]. See also the section called TypeAffinityFilter [216].
AllHostsFilter
This is a no-op filter. It does not eliminate any of the available hosts.
AvailabilityZoneFilter
Filters hosts by availability zone. You must enable this filter for the scheduler to respect
availability zones in requests.
ComputeCapabilitiesFilter
Matches properties defined in extra specs for an instance type against compute capabilities.
If an extra specs key contains a colon (:), anything before the colon is treated as a namespace and anything after the colon is treated as the key to be matched. If a namespace
is present and is not capabilities, the filter ignores the namespace. For backward
compatibility, also treats the extra specs key as the key to be matched if no namespace is
present; this action is highly discouraged because it conflicts with AggregateInstanceExtraSpecsFilter filter when you enable both filters.
ComputeFilter
Passes all hosts that are operational and enabled.
In general, you should always enable this filter.
CoreFilter
Only schedules instances on hosts if sufficient CPU cores are available. If this filter is not set,
the scheduler might over-provision a host based on cores. For example, the virtual cores
running on an instance may exceed the physical cores.
You can configure this filter to enable a fixed amount of vCPU overcommitment by using
the cpu_allocation_ratio configuration option in nova.conf. The default setting is:
cpu_allocation_ratio = 16.0
With this setting, if 8 vCPUs are on a node, the scheduler allows instances up to 128 vCPU
to be run on that node.
To disallow vCPU overcommitment set:
cpu_allocation_ratio = 1.0
Note
The Compute API always returns the actual number of CPU cores available on
a compute node regardless of the value of the cpu_allocation_ratio configuration key. As a result changes to the cpu_allocation_ratio are not
reflected via the command line clients or the dashboard. Changes to this configuration key are only taken into account internally in the scheduler.
DifferentHostFilter
Schedules the instance on a different host from a set of instances. To take advantage of
this filter, the requester must pass a scheduler hint, using different_host as the key and
a list of instance UUIDs as the value. This filter is the opposite of the SameHostFilter. Using the nova command-line tool, use the --hint flag. For example:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
--hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
--hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
DiskFilter
Only schedules instances on hosts if there is sufficient disk space available for root and
ephemeral storage.
You can configure this filter to enable a fixed amount of disk overcommitment by using the
disk_allocation_ratio configuration option in nova.conf. The default setting is:
disk_allocation_ratio = 1.0
Adjusting this value to greater than 1.0 enables scheduling instances while over committing disk resources on the node. This might be desirable if you use an image format that is
sparse or copy on write so that each virtual instance does not require a 1:1 allocation of virtual disk to physical storage.
GroupAffinityFilter
Note
This filter is deprecated in favor of ServerGroupAffinityFilter.
This filter should not be enabled at the same time as GroupAntiAffinityFilter or neither filter will work properly.
GroupAntiAffinityFilter
Note
This filter is deprecated in favor of ServerGroupAntiAffinityFilter.
The GroupAntiAffinityFilter ensures that each instance in a group is on a different host.
To take advantage of this filter, the requester must pass a scheduler hint, using group as the key and an arbitrary name as the value. Using the nova command-line tool, use the --hint flag. For example:
$ nova boot --image IMAGE_ID --flavor 1 --hint group=foo server-1
This filter should not be enabled at the same time as GroupAffinityFilter or neither filter will
work properly.
ImagePropertiesFilter
Filters hosts based on properties defined on the instance's image. It passes hosts that can
support the specified image properties contained in the instance. Properties include the architecture, hypervisor type, and virtual machine mode. For example, an instance might require a host that runs an ARM-based processor and QEMU as the hypervisor. An image can
be decorated with these properties by using:
$ glance image-update img-uuid --property architecture=arm --property
hypervisor_type=qemu
IsolatedHostsFilter
Allows the admin to define a special (isolated) set of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated images. The flag restrict_isolated_hosts_to_isolated_images can be used to force isolated hosts to only run isolated images.
The admin must specify the isolated set of images and hosts in the nova.conf file using
the isolated_hosts and isolated_images configuration options. For example:
isolated_hosts = server1, server2
isolated_images = 342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
IoOpsFilter
The IoOpsFilter filters hosts by concurrent I/O operations on it. Hosts with too many concurrent I/O operations will be filtered out. The max_io_ops_per_host option specifies
the maximum number of I/O intensive instances allowed to run on a host. A host will be
ignored by the scheduler if more than max_io_ops_per_host instances in build, resize,
snapshot, migrate, rescue or unshelve task states are running on it.
JsonFilter
The JsonFilter allows a user to construct a custom filter by passing a scheduler hint in JSON
format. The following operators are supported:
=
<
>
in
<=
>=
not
or
and
The filter supports the following variables:
$free_ram_mb
$free_disk_mb
$total_usable_ram_mb
$vcpus_total
$vcpus_used
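Assuming those variables and operators, a request can express a compound constraint as a scheduler hint. The following sketch asks for a host with at least 1024 MB of free RAM and roughly 200 GB of free disk; the image ID and thresholds are illustrative:

```console
$ nova boot --image IMAGE_ID --flavor 1 \
  --hint query='["and", [">=", "$free_ram_mb", 1024], [">=", "$free_disk_mb", 204800]]' \
  server-1
```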
MetricsFilter
Filters hosts based on metrics weight_setting. Only hosts with the available metrics are
passed so that the metrics weigher will not fail due to these hosts.
NumInstancesFilter
Hosts that have more instances running than specified by the
max_instances_per_host option are filtered out when this filter is in place.
PciPassthroughFilter
The filter schedules instances on a host if the host has devices that meet the device requests
in the extra_specs attribute for the flavor.
RamFilter
Only schedules instances on hosts that have sufficient RAM available. If this filter is not set,
the scheduler may over provision a host based on RAM (for example, the RAM allocated by
virtual machine instances may exceed the physical RAM).
You can configure this filter to enable a fixed amount of RAM overcommitment by using
the ram_allocation_ratio configuration option in nova.conf. The default setting is:
ram_allocation_ratio = 1.5
This setting enables 1.5GB instances to run on any compute node with 1GB of free RAM.
RetryFilter
Filters out hosts that have already been attempted for scheduling purposes. If the scheduler
selects a host to respond to a service request, and the host fails to respond to the request,
this filter prevents the scheduler from retrying that host for the service request.
This filter is only useful if the scheduler_max_attempts configuration option is set to a
value greater than zero.
SameHostFilter
Schedules the instance on the same host as another instance in a set of instances. To take
advantage of this filter, the requester must pass a scheduler hint, using same_host as the
key and a list of instance UUIDs as the value. This filter is the opposite of the DifferentHostFilter. Using the nova command-line tool, use the --hint flag:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
--hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 \
--hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1
ServerGroupAffinityFilter
The ServerGroupAffinityFilter ensures that an instance is scheduled on to a host from a set
of group hosts. To take advantage of this filter, the requester must create a server group
with an affinity policy, and pass a scheduler hint, using group as the key and the server
group UUID as the value. Using the nova command-line tool, use the --hint flag. For example:
$ nova server-group-create --policy affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1
ServerGroupAntiAffinityFilter
The ServerGroupAntiAffinityFilter ensures that each instance in a group is on a different
host. To take advantage of this filter, the requester must create a server group with an anti-affinity policy, and pass a scheduler hint, using group as the key and the server
group UUID as the value. Using the nova command-line tool, use the --hint flag. For example:
$ nova server-group-create --policy anti-affinity group-1
$ nova boot --image IMAGE_ID --flavor 1 --hint group=SERVER_GROUP_UUID server-1
SimpleCIDRAffinityFilter
Schedules the instance based on host IP subnet range. To take advantage of this filter, the requester must specify a range of valid IP addresses in CIDR format, by passing two scheduler hints:
build_near_host_ip
The first IP address in the subnet (for example, 192.168.1.1)
cidr
The CIDR that corresponds to the subnet (for example, /24)
Using the nova command-line tool, use the --hint flag. For example, to specify the IP subnet 192.168.1.1/24:
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 \
--hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1
TrustedFilter
Filters hosts based on their trust. Only passes hosts that meet the trust requirements specified in the instance properties.
TypeAffinityFilter
Dynamically limits hosts to one instance type. An instance can only be launched on a host if no instances with a different instance type are running on it, or if the host has no running instances at all.
Weights
When resourcing instances, the filter scheduler filters and weights each host in the list of acceptable hosts. Each time the scheduler selects a host, it virtually consumes resources on it, and subsequent selections are adjusted accordingly. This process is useful when the customer asks for a large number of instances, because a weight is computed for each requested instance.
All weights are normalized before being summed up; the host with the largest weight is
given the highest priority.
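That normalize-multiply-rank process can be sketched as follows (a simplified illustration with hypothetical host data; nova's weighers are pluggable classes, not plain functions):

```python
def normalize(values):
    # Rescale raw weigher outputs into [0, 1] before applying multipliers.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def rank_hosts(free_ram_mb_by_host, ram_weight_multiplier=1.0):
    # With a positive multiplier the host with the most free RAM wins
    # (spreading); a negative multiplier inverts this (stacking).
    names = list(free_ram_mb_by_host)
    norm = normalize([free_ram_mb_by_host[n] for n in names])
    weighted = {n: ram_weight_multiplier * w for n, w in zip(names, norm)}
    return sorted(weighted, key=weighted.get, reverse=True)

hosts = {"node1": 512, "node2": 2048, "node3": 1024}
print(rank_hosts(hosts))        # ['node2', 'node3', 'node1']
print(rank_hosts(hosts, -1.0))  # ['node1', 'node3', 'node2']
```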
Figure 2.3. Weighting hosts

If cells are used, cells are weighted by the scheduler in the same manner as hosts.
Hosts and cells are weighted based on the following options in the /etc/nova/nova.conf file:

[DEFAULT] ram_weight_multiplier
By default, the scheduler spreads instances across all hosts evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.

[DEFAULT] scheduler_host_subset_size
New instances are scheduled on a host that is chosen randomly from a subset of the N best hosts. This property defines the subset size from which a host is chosen. A value of 1 chooses the first host returned by the weighting functions. This value must be at least 1. A value less than 1 is ignored, and 1 is used instead. Use an integer value.

[DEFAULT] scheduler_weight_classes
Defaults to nova.scheduler.weights.all_weighers, which selects the RamWeigher. Hosts are then weighted and sorted with the largest weight winning.

[metrics] weight_multiplier
Multiplier used for weighting metrics. Use a floating-point value.

[metrics] weight_setting
Mapping of metric names to ratios, in the form name1=ratio1, name2=ratio2.

[metrics] required
Whether all metrics set by weight_setting must be available for a host to be weighted.

[metrics] weight_of_unavailable
If required is set to False, and any one of the metrics set by weight_setting is unavailable, the weight_of_unavailable value is returned to the scheduler.
For example:
[DEFAULT]
scheduler_host_subset_size = 1
scheduler_weight_classes = nova.scheduler.weights.all_weighers
ram_weight_multiplier = 1.0
[metrics]
weight_multiplier = 1.0
weight_setting = name1=1.0, name2=-1.0
required = false
weight_of_unavailable = -10000.0
[cells] mute_weight_multiplier
Multiplier to weight mute children (hosts which have not sent capacity or capability updates for some time). Use a negative, floating-point value.

[cells] mute_weight_value
Weight value assigned to mute children. Use a positive, floating-point value.

[cells] offset_weight_multiplier
Multiplier to weight cells, so you can specify a preferred cell. Use a floating-point value.

[cells] ram_weight_multiplier
By default, the scheduler spreads instances across all cells evenly. Set the ram_weight_multiplier option to a negative number if you prefer stacking instead of spreading. Use a floating-point value.

[cells] scheduler_weight_classes
Defaults to nova.cells.weights.all_weighers, which maps to all cell weighers included with Compute. Cells are then weighted and sorted with the largest weight winning.
For example:
[cells]
scheduler_weight_classes = nova.cells.weights.all_weighers
mute_weight_multiplier = -10.0
mute_weight_value = 1000.0
ram_weight_multiplier = 1.0
offset_weight_multiplier = 1.0
Chance scheduler
As an administrator, you work with the filter scheduler. However, the Compute service also
uses the Chance Scheduler, nova.scheduler.chance.ChanceScheduler, which randomly selects from lists of filtered hosts.
Host aggregates
Host aggregates are a mechanism to further partition an availability zone; while availability zones are visible to users, host aggregates are only visible to administrators. Host aggregates provide a mechanism for administrators to assign key-value pairs to groups of machines. Each node can have multiple aggregates, each aggregate can have multiple key-value pairs, and the same key-value pair can be assigned to multiple aggregates. This information can be used in the scheduler to enable advanced scheduling, to set up hypervisor resource pools, or to define logical groups for migration.
Command-line interface
The nova command-line tool supports the following aggregate-related commands.
nova aggregate-list
Print a list of all aggregates.
nova aggregate-remove-host <id> <host>
Remove the host with name <host> from the aggregate with id <id>.
nova host-list
List all hosts by service.
Note
Only administrators can access these commands. If you try to use these commands and the user name and tenant that you use to access the Compute service do not have the admin role or the appropriate privileges, these errors occur:
ERROR: Policy doesn't allow compute_extension:aggregates to be performed. (HTTP 403) (Request-ID: req-299fbff6-6729-4cef-93b2-e7e1f96b4864)
ERROR: Policy doesn't allow compute_extension:hosts to be performed. (HTTP 403) (Request-ID: req-ef2400f6-6776-4ea3-b6f1-7704085c27d1)
+----+---------+-------------------+------------+-------------------+
| Id | Name    | Availability Zone | Hosts      | Metadata          |
+----+---------+-------------------+------------+-------------------+
| 1  | fast-io | nova              | [u'node1'] | {u'ssd': u'true'} |
+----+---------+-------------------+------------+-------------------+

$ nova aggregate-add-host 1 node2
+----+---------+-------------------+----------------------+-------------------+
| Id | Name    | Availability Zone | Hosts                | Metadata          |
+----+---------+-------------------+----------------------+-------------------+
| 1  | fast-io | nova              | [u'node1', u'node2'] | {u'ssd': u'true'} |
+----+---------+-------------------+----------------------+-------------------+
Use the nova flavor-create command to create the ssd.large flavor with an ID of 6, 8 GB of RAM, an 80 GB root disk, and four vCPUs.
$ nova flavor-create ssd.large 6 8192 80 4
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | extra_specs |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 6  | ssd.large | 8192      | 80   | 0         |      | 4     | 1.0         | True      | {}          |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
Once the flavor is created, specify one or more key-value pairs that match the key-value
pairs on the host aggregates. In this case, that is the ssd=true key-value pair. Setting a
key-value pair on a flavor is done using the nova flavor-key command.
$ nova flavor-key ssd.large set ssd=true
Once it is set, you should see the extra_specs property of the ssd.large flavor populated with a key of ssd and a corresponding value of true.
$ nova flavor-show ssd.large
+----------------------------+-------------------+
| Property                   | Value             |
+----------------------------+-------------------+
| OS-FLV-DISABLED:disabled   | False             |
| OS-FLV-EXT-DATA:ephemeral  | 0                 |
| disk                       | 80                |
| extra_specs                | {u'ssd': u'true'} |
| id                         | 6                 |
| name                       | ssd.large         |
| os-flavor-access:is_public | True              |
| ram                        | 8192              |
| rxtx_factor                | 1.0               |
| swap                       |                   |
| vcpus                      | 4                 |
+----------------------------+-------------------+
Now, when a user requests an instance with the ssd.large flavor, the scheduler only
considers hosts with the ssd=true key-value pair. In this example, these are node1 and
node2.
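The matching the scheduler performs here can be sketched as follows (a simplified illustration; the real aggregate extra-specs filtering also supports scoped keys and operators):

```python
def host_passes(host_aggregate_metadata, flavor_extra_specs):
    # Simplified rule: a host passes when every extra_specs key-value
    # pair on the flavor is matched by the metadata of the host's
    # aggregates.
    return all(host_aggregate_metadata.get(key) == value
               for key, value in flavor_extra_specs.items())

# node1 and node2 belong to the fast-io aggregate with ssd=true:
print(host_passes({"ssd": "true"}, {"ssd": "true"}))  # True
# A host in no matching aggregate is filtered out:
print(host_passes({}, {"ssd": "true"}))               # False
```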
Configuration reference
To customize the Compute scheduler, use the configuration option settings documented in Table 2.51, Description of scheduler configuration options [254].
Cells
Cells functionality enables you to scale an OpenStack Compute cloud in a more distributed
fashion without having to use complicated technologies like database and message queue
clustering. It supports very large deployments.
When this functionality is enabled, the hosts in an OpenStack Compute cloud are partitioned into groups called cells. Cells are configured as a tree. The top-level cell should
have a host that runs a nova-api service, but no nova-compute services. Each child cell
should run all of the typical nova-* services in a regular Compute cloud except for nova-api. You can think of cells as a normal Compute deployment in that each cell has its
own database server and message queue broker.
The nova-cells service handles communication between cells and selects cells for new
instances. This service is required for every cell. Communication between cells is pluggable,
and currently the only option is communication through RPC.
Cells scheduling is separate from host scheduling. nova-cells first picks a cell. Once a cell
is selected and the new build request reaches its nova-cells service, it is sent over to the
host scheduler in that cell and the build proceeds as it would have without cells.
Warning
Cell functionality is currently considered experimental.
name
The name of the current cell. This must be unique for each cell.
capabilities
List of arbitrary key=value pairs defining capabilities of the current cell. Values include
hypervisor=xenserver;kvm,os=linux;windows.
call_timeout
How long in seconds to wait for replies from calls between cells.
scheduler_filter_classes
Filter classes that the cells scheduler should use. By default, uses nova.cells.filters.all_filters to map to all cells filters included with Compute.
scheduler_weight_classes
Weight classes that the cells scheduler uses. By default, uses nova.cells.weights.all_weighers to map to all cells weight algorithms included with Compute.
ram_weight_multiplier
Multiplier used to weight RAM. Negative numbers indicate that Compute should stack VMs on one host instead of spreading out new VMs to more hosts in the cell. The default value is 10.0.
As an example, assume an API cell named api and a child cell named cell1.
Within the api cell, specify the following RabbitMQ server information:
rabbit_host=10.0.0.10
rabbit_port=5672
rabbit_username=api_user
rabbit_password=api_passwd
rabbit_virtual_host=api_vhost
Within the cell1 child cell, specify the following RabbitMQ server information:
rabbit_host=10.0.1.10
rabbit_port=5673
rabbit_username=cell1_user
rabbit_password=cell1_passwd
rabbit_virtual_host=cell1_vhost
To customize the Compute cells, use the configuration option settings documented in Table 2.19, Description of cell configuration options [236].
scheduler_retry_delay
How often to retry, in seconds, when no cells are available.
As an admin user, you can also add a filter that directs builds to
a particular cell. The policy.json file must have a line with
"cells_scheduler_filter:TargetCellFilter" : "is_admin:True" to let an
admin user specify a scheduler hint to direct a build to a particular cell.
You must specify the queue connection information through a transport_url field, instead of username, password, and so on. The transport_url has the following form:
rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
The scheme can be either qpid or rabbit, as shown previously. The following sample
shows this optional configuration:
{
"parent": {
"name": "parent",
"api_url": "http://api.example.com:8774",
"transport_url": "rabbit://rabbit.example.com",
"weight_offset": 0.0,
"weight_scale": 1.0,
"is_parent": true
},
"cell1": {
"name": "cell1",
"api_url": "http://api.example.com:8774",
"transport_url": "rabbit://rabbit1.example.com",
"weight_offset": 0.0,
"weight_scale": 1.0,
"is_parent": false
},
"cell2": {
"name": "cell2",
"api_url": "http://api.example.com:8774",
"transport_url": "rabbit://rabbit2.example.com",
"weight_offset": 0.0,
"weight_scale": 1.0,
"is_parent": false
}
}
Conductor
The nova-conductor service enables OpenStack to function without compute nodes
accessing the database. Conceptually, it implements a new layer on top of nova-compute. It should not be deployed on compute nodes, or else the security benefits of removing database access from nova-compute are negated. Just like other nova services such as
nova-api or nova-scheduler, it can be scaled horizontally. You can run multiple instances
of nova-conductor on different machines as needed for scaling purposes.
The methods exposed by nova-conductor are relatively simple methods used by nova-compute to offload its database operations. Places where nova-compute previously
performed database access are now talking to nova-conductor. However, we have plans
in the medium to long term to move more and more of what is currently in nova-compute up to the nova-conductor layer. The Compute service will start to look like a less
intelligent slave service to nova-conductor. The conductor service will implement long
running complex operations, ensuring forward progress and graceful error handling. This
will be especially beneficial for operations that cross multiple compute nodes, such as migrations or resizes.
To customize the Conductor, use the configuration option settings documented in Table 2.22, Description of conductor configuration options [239].
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0
# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130
# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova
# GLANCE
[glance]
api_servers=192.168.206.130:9292
# DATABASE
[database]
connection=mysql://nova:[email protected]/nova
# LIBVIRT
[libvirt]
virt_type=qemu
api_paste_config=/etc/nova/api-paste.ini
# COMPUTE/APIS: if you have separate configs for separate services
# this flag is required for both nova-api and nova-compute
allow_resize_to_same_host=True
# APIS
osapi_compute_extension=nova.api.openstack.compute.contrib.standard_extensions
ec2_dmz_host=192.168.206.130
s3_host=192.168.206.130
# RABBITMQ
rabbit_host=192.168.206.130
# GLANCE
image_service=nova.image.glance.GlanceImageService
# NETWORK
network_manager=nova.network.manager.FlatDHCPManager
force_dhcp_release=True
dhcpbridge_flagfile=/etc/nova/nova.conf
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver
# Change my_ip to match each host
my_ip=192.168.206.130
public_interface=eth0
vlan_interface=eth0
flat_network_bridge=br100
flat_interface=eth0
# NOVNC CONSOLE
novncproxy_base_url=http://192.168.206.130:6080/vnc_auto.html
# Change vncserver_proxyclient_address and vncserver_listen to match each compute host
vncserver_proxyclient_address=192.168.206.130
vncserver_listen=192.168.206.130
# AUTHENTICATION
auth_strategy=keystone
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = nova
signing_dirname = /tmp/keystone-signing-nova
# GLANCE
[glance]
api_servers=192.168.206.130:9292
# DATABASE
[database]
connection=mysql://nova:[email protected]/nova
# LIBVIRT
[libvirt]
virt_type=qemu
Log file          Service name (RHEL, CentOS, Fedora, openSUSE, SLES)   Service name (Ubuntu, Debian)
api.log           openstack-nova-api                                    nova-api
cert.log          openstack-nova-cert                                   nova-cert
compute.log       openstack-nova-compute                                nova-compute
conductor.log     openstack-nova-conductor                              nova-conductor
consoleauth.log   openstack-nova-consoleauth                            nova-consoleauth
network.log       openstack-nova-network                                nova-network
nova-manage.log   nova-manage                                           nova-manage
scheduler.log     openstack-nova-scheduler                              nova-scheduler
The X509 certificate service (openstack-nova-cert/nova-cert) is only required by the EC2 API to the Compute service.
The nova network service (openstack-nova-network/nova-network) only runs in deployments that are not configured to use the Networking service (neutron).
Description
[DEFAULT]
api_paste_config = api-paste.ini
api_rate_limit = False
enable_new_services = True
enabled_ssl_apis =
instance_name_template = instance-%08x
max_header_line = 16384
(IntOpt) Maximum line size of message headers to be accepted. max_header_line may need to be increased when
using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
multi_instance_display_name_template = %(name)s%(uuid)s
null_kernel = nokernel
osapi_compute_ext_list =
osapi_compute_extension = ['nova.api.openstack.compute.contrib.standard_extensions']
(MultiStrOpt) osapi compute extension to load
osapi_compute_link_prefix = None
osapi_compute_listen = 0.0.0.0
osapi_compute_listen_port = 8774
osapi_compute_workers = None
osapi_hide_server_address_states = building
Description
servicegroup_driver = db
snapshot_name_template = snapshot-%s
tcp_keepidle = 600
use_forwarded_for = False
wsgi_default_pool_size = 1000
wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f
(StrOpt) A python format string that is used as the template to generate log lines. The following values can be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.
Description
[osapi_v3]
enabled = False
extensions_blacklist =
extensions_whitelist =
Description
[DEFAULT]
auth_strategy = keystone
Description
[keystone_authtoken]
admin_password = None
admin_tenant_name = admin
admin_token = None
admin_user = None
auth_admin_prefix =
auth_host = 127.0.0.1
auth_port = 35357
(IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https
auth_uri = None
Description
auth_version = None
cache = None
cafile = None
certfile = None
check_revocations_for_cached = False
delay_auth_decision = False
enforce_token_bind = permissive
(StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding.
"permissive" (default) to validate binding information if
the bind type is of a form known to the server and ignore
it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of
token binding is needed to be allowed. Finally the name of
a binding method that must be present in tokens.
hash_algorithms = md5
http_connect_timeout = None
http_request_max_retries = 3
identity_uri = None
include_service_catalog = True
(BoolOpt) (optional) indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False
keyfile = None
memcache_secret_key = None
memcache_security_strategy = None
(StrOpt) (optional) if defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the
cache. If the value is not one of these options or empty,
auth_token will raise an exception on initialization.
revocation_cache_time = 10
(IntOpt) Determines the frequency at which the list of revoked tokens is retrieved from the Identity service (in seconds). A high number of revocation events combined with a low cache duration may significantly reduce performance.
signing_dir = None
token_cache_time = 300
(IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens
for a configurable duration (in seconds). Set to -1 to disable caching completely.
Description
[DEFAULT]
default_availability_zone = nova
default_schedule_zone = None
internal_service_availability_zone = internal
Description
[baremetal]
db_backend = sqlalchemy
deploy_kernel = None
deploy_ramdisk = None
driver = nova.virt.baremetal.pxe.PXE
flavor_extra_specs =
ipmi_power_retry = 10
net_config_template = $pybasedir/nova/virt/baremetal/net-dhcp.ubuntu.template
power_manager = nova.virt.baremetal.ipmi.IPMI
pxe_bootfile_name = pxelinux.0
pxe_config_template = $pybasedir/nova/virt/baremetal/pxe_config.template
pxe_deploy_timeout = 0
pxe_network_config = False
sql_connection = sqlite:///$state_path/baremetal_nova.sqlite
(StrOpt) The SQLAlchemy connection string used to connect to the bare-metal database
terminal = shellinaboxd
terminal_cert_dir = None
terminal_pid_dir = $state_path/baremetal/console
tftp_root = /tftpboot
Description
use_file_injection = False
use_unsafe_iscsi = False
vif_driver =
nova.virt.baremetal.vif_driver.BareMetalVIFDriver
virtual_power_host_key = None
virtual_power_host_pass =
virtual_power_host_user =
virtual_power_ssh_host =
virtual_power_ssh_port = 22
virtual_power_type = virsh
Description
[DEFAULT]
ca_file = cacert.pem
ca_path = $state_path/CA
cert_manager = nova.cert.manager.CertManager
cert_topic = cert
crl_file = crl.pem
key_file = private/cakey.pem
keys_path = $state_path/keys
project_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s
(StrOpt) Subject for certificate for projects, %s for project, timestamp
ssl_ca_file = None
ssl_cert_file = None
ssl_key_file = None
use_project_ca = False
user_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s
(StrOpt) Subject for certificate for users, %s for project, user, timestamp
[ssl]
ca_file = None
cert_file = None
key_file = None
(StrOpt) Private key file to use when starting the server securely.
Description
[cells]
call_timeout = 60
capabilities = hypervisor=xenserver;kvm, os=linux;windows
(ListOpt) Key/Multi-value list with the capabilities of the cell
Description
cell_type = compute
cells_config = None
(StrOpt) Configuration file from which to read cells configuration. If given, overrides reading cells from the
database.
db_check_interval = 60
(IntOpt) Interval, in seconds, for getting fresh cell information from the database.
driver = nova.cells.rpc_driver.CellsRPCDriver
enable = False
instance_update_num_instances = 1
instance_updated_at_threshold = 3600
(IntOpt) Number of seconds after an instance was updated or deleted to continue to update cells
manager = nova.cells.manager.CellsManager
max_hop_count = 10
mute_child_interval = 300
(IntOpt) Number of seconds after which a lack of capability and capacity updates signals the child cell is to be treated as a mute.
mute_weight_multiplier = -10.0
mute_weight_value = 1000.0
name = nova
offset_weight_multiplier = 1.0
reserve_percent = 10.0
topic = cells
Description
[DEFAULT]
bindir = /usr/local/bin
compute_topic = compute
console_topic = console
consoleauth_topic = consoleauth
host = localhost
(StrOpt) Name of this node. This can be an opaque identifier. It is not necessarily a hostname, FQDN, or IP address.
However, the node name must be valid within an AMQP
key, and if using ZeroMQ, a valid hostname, FQDN, or IP
address
lock_path = None
memcached_servers = None
my_ip = 10.0.0.1
notify_api_faults = False
(BoolOpt) If set, send api.fault notifications on caught exceptions in the API service.
notify_on_state_change = None
(StrOpt) If set, send compute.instance.update notifications on instance state changes. Valid values are None for
no notifications, "vm_state" for notifications on VM state
changes, or "vm_and_task_state" for notifications on VM
and task state changes.
pybasedir = /usr/lib/python/site-packages/nova
Description
report_interval = 10
rootwrap_config = /etc/nova/rootwrap.conf
service_down_time = 60
state_path = $pybasedir
tempdir = None
[keystone_authtoken]
memcached_servers = None
Description
[DEFAULT]
compute_available_monitors =
['nova.compute.monitors.all_monitors']
compute_driver = None
(StrOpt) Driver to use for controlling virtualization. Options include: libvirt.LibvirtDriver, xenapi.XenAPIDriver,
fake.FakeDriver, baremetal.BareMetalDriver,
vmwareapi.VMwareVCDriver, hyperv.HyperVDriver
compute_manager =
nova.compute.manager.ComputeManager
compute_monitors =
compute_resources = vcpu
compute_stats_class = nova.compute.stats.Stats
(StrOpt) Class that will manage stats for the local compute
host
console_host = localhost
console_manager =
nova.console.manager.ConsoleProxyManager
(StrOpt) Full class name for the Manager for console proxy
default_flavor = m1.small
(StrOpt) Default flavor to use for the EC2 API only. The
Nova API does not support a default flavor.
default_notification_level = INFO
enable_instance_password = True
heal_instance_info_cache_interval = 60
image_cache_manager_interval = 2400
image_cache_subdirectory_name = _base
instance_build_timeout = 0
Description
instance_delete_interval = 300
instance_usage_audit = False
instance_usage_audit_period = month
instances_path = $state_path/instances
maximum_instance_delete_attempts = 5
reboot_timeout = 0
reclaim_instance_interval = 0
rescue_timeout = 0
resize_confirm_window = 0
resume_guests_state_on_host_boot = False
(BoolOpt) Whether to start guests that were running before the host rebooted
running_deleted_instance_action = reap
running_deleted_instance_poll_interval = 1800
running_deleted_instance_timeout = 0
shelved_offload_time = 0
(IntOpt) Time in seconds before a shelved instance is eligible for removing from a host. -1 never offload, 0 offload
when shelved
shelved_poll_interval = 3600
shutdown_timeout = 60
(IntOpt) Total amount of time to wait in seconds for an instance to perform a clean shutdown.
sync_power_state_interval = 600
vif_plugging_is_fatal = True
vif_plugging_timeout = 300
Description
[DEFAULT]
migrate_max_retries = -1
[conductor]
Description
manager = nova.conductor.manager.ConductorManager
topic = conductor
use_local = False
workers = None
(IntOpt) Number of workers for OpenStack Conductor service. The default will be the number of CPUs available.
Description
[DEFAULT]
config_drive_format = iso9660
config_drive_tempdir = None
force_config_drive = None
mkisofs_cmd = genisoimage
[hyperv]
config_drive_cdrom = False
config_drive_inject_password = False
Description
[DEFAULT]
console_public_hostname = localhost
console_token_ttl = 600
consoleauth_manager =
nova.consoleauth.manager.ConsoleAuthManager
Description
[DEFAULT]
db_driver = nova.db
[database]
backend = sqlalchemy
connection = None
connection_debug = 0
connection_trace = False
db_inc_retry_interval = True
db_max_retries = 20
(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.
Description
db_max_retry_interval = 10
(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
db_retry_interval = 1
idle_timeout = 3600
max_overflow = None
max_pool_size = None
max_retries = 10
min_pool_size = 1
mysql_sql_mode = TRADITIONAL
pool_timeout = None
retry_interval = 10
slave_connection = None
(StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite
sqlite_synchronous = True
use_db_reconnect = False
use_tpool = False
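A sketch of a typical [database] section; the connection string, host name, and password placeholder are assumptions, not values from the table above:

```ini
[database]
backend = sqlalchemy
connection = mysql://nova:NOVA_DBPASS@controller/nova
# Retry failed connections up to 20 times, backing off from
# 1 second per attempt to a maximum of 10 seconds
db_max_retries = 20
db_retry_interval = 1
db_inc_retry_interval = True
db_max_retry_interval = 10
```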
Description
[DEFAULT]
backdoor_port = None
disable_process_locking = False
Description
[DEFAULT]
ec2_dmz_host = $my_ip
ec2_host = $my_ip
ec2_listen = 0.0.0.0
ec2_listen_port = 8773
ec2_path = /services/Cloud
(StrOpt) The path prefix used to call the ec2 API server
ec2_port = 8773
ec2_private_dns_show_ip = False
ec2_scheme = http
ec2_strict_validation = True
ec2_timestamp_expiry = 300
ec2_workers = None
(IntOpt) Number of workers for EC2 API service. The default will be equal to the number of CPUs available.
keystone_ec2_url = http://localhost:5000/v2.0/ec2tokens
lockout_attempts = 5
lockout_minutes = 15
lockout_window = 15
region_list =
Description
[ephemeral_storage_encryption]
cipher = aes-xts-plain64
enabled = False
key_size = 512
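Enabling ephemeral disk encryption is a matter of flipping the enabled flag; the cipher and key size below simply restate the defaults:

```ini
[ephemeral_storage_encryption]
enabled = True
# XTS mode splits the key, so a 512-bit key gives AES-256
cipher = aes-xts-plain64
key_size = 512
```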
Description
[DEFAULT]
fping_path = /usr/sbin/fping
Description
[DEFAULT]
osapi_glance_link_prefix = None
[glance]
allowed_direct_url_schemes =
(ListOpt) A list of URL schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
api_insecure = False
api_servers = None
host = $my_ip
num_retries = 0
port = 9292
protocol = http
[image_file_url]
filesystems =
Description
[hyperv]
dynamic_memory_ratio = 1.0
(FloatOpt) Enables dynamic memory allocation (ballooning) when set to a value greater than 1. The value expresses the ratio between the total RAM assigned to an instance and its startup RAM amount. For example a ratio of
2.0 for an instance with 1024MB of RAM implies 512MB of
RAM allocated at startup
enable_instance_metrics_collection = False
(BoolOpt) Enables metrics collection for an instance by using Hyper-V's metric APIs. Collected data can be retrieved by other apps and services, e.g. Ceilometer. Requires Hyper-V / Windows Server 2012 and above
force_hyperv_utils_v1 = False
instances_path_share =
limit_cpu_features = False
mounted_disk_query_retry_count = 10
mounted_disk_query_retry_interval = 5
qemu_img_cmd = qemu-img.exe
(StrOpt) Path of qemu-img command which is used to convert between different image types
vswitch_name = None
wait_soft_reboot_seconds = 60
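A sketch of a Hyper-V compute node using dynamic memory (the ratio is illustrative):

```ini
[hyperv]
# A 1024 MB instance starts with 512 MB and can balloon up
dynamic_memory_ratio = 2.0
# Expose instance metrics to services such as Ceilometer
# (requires Hyper-V / Windows Server 2012 or later)
enable_instance_metrics_collection = True
```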
Description
[DEFAULT]
default_ephemeral_format = None
force_raw_images = True
preallocate_images = none
(StrOpt) VM image preallocation mode: "none" => no storage provisioning is done up front, "space" => storage is fully allocated at instance start
timeout_nbd = 10
(IntOpt) Amount of time, in seconds, to wait for NBD device start up.
use_cow_images = True
vcpu_pin_set = None
virt_mkfs = []
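For example, to trade disk space for run-time performance (values are illustrative):

```ini
[DEFAULT]
# Convert backing images to raw and allocate storage up front,
# avoiding copy-on-write overhead while instances run
force_raw_images = True
preallocate_images = space
# Restrict instance vCPUs to host cores 4-15
vcpu_pin_set = 4-15
```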
Description
[ironic]
admin_auth_token = None
admin_password = None
admin_tenant_name = None
admin_url = None
admin_username = None
api_endpoint = None
api_max_retries = 60
api_retry_interval = 2
api_version = 1
client_log_level = None
Description
[DEFAULT]
fixed_range_v6 = fd00::/48
gateway_v6 = None
ipv6_backend = rfc2462
use_ipv6 = False
Description
[keymgr]
api_class = nova.keymgr.conf_key_mgr.ConfKeyManager
(StrOpt) The full class name of the key manager API class
fixed_key = None
Description
[DEFAULT]
ldap_dns_base_dn = ou=hosts,dc=example,dc=org
ldap_dns_password = password
ldap_dns_servers = ['dns.example.org']
ldap_dns_soa_expiry = 86400
ldap_dns_soa_hostmaster = [email protected]
ldap_dns_soa_minimum = 7200
(StrOpt) Minimum interval (in seconds) for LDAP DNS driver Statement of Authority
ldap_dns_soa_refresh = 1800
ldap_dns_soa_retry = 3600
ldap_dns_url = ldap://ldap.example.com:389
(StrOpt) URL for LDAP server which will store DNS entries
ldap_dns_user =
uid=admin,ou=people,dc=example,dc=org
Description
[DEFAULT]
remove_unused_base_images = True
remove_unused_original_minimum_age_seconds = 86400
[libvirt]
block_migration_flag =
VIR_MIGRATE_UNDEFINE_SOURCE,
VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE,
VIR_MIGRATE_TUNNELLED,
VIR_MIGRATE_NON_SHARED_INC
checksum_base_images = False
checksum_interval_seconds = 3600
connection_uri =
cpu_mode = None
cpu_model = None
(StrOpt) Set to a named libvirt CPU model (see names listed in /usr/share/libvirt/cpu_map.xml). Only has effect if
cpu_mode="custom" and virt_type="kvm|qemu"
disk_cachemodes =
disk_prefix = None
(StrOpt) Override the default disk prefix for the devices attached to a server, which is dependent on virt_type. (valid
options are: sd, xvd, uvd, vd)
gid_maps =
hw_disk_discard = None
(StrOpt) Discard option for nova managed disks (valid options are: ignore, unmap). Requires libvirt 1.0.6 and qemu 1.5 (raw format) or qemu 1.6 (qcow2 format)
hw_machine_type = None
(StrOpt) For qemu or KVM guests, set this option to specify a default machine type per host architecture. The format for this config option is host-arch=machine-type. For example: x86_64=machinetype1,armv7l=machinetype2
image_info_filename_pattern = $instances_path/
$image_cache_subdirectory_name/%(image)s.info
images_rbd_ceph_conf =
images_rbd_pool = rbd
images_type = default
images_volume_group = None
inject_key = False
inject_partition = -2
(IntOpt) The partition to inject to : -2 => disable, -1 => inspect (libguestfs only), 0 => not partitioned, >0 => partition
number
inject_password = False
iscsi_use_multipath = False
iser_use_multipath = False
mem_stats_period_seconds = 10
remove_unused_kernels = False
remove_unused_resized_minimum_age_seconds = 3600
rescue_image_id = None
(StrOpt) Rescue ami image. This will not be used if an image id is provided by the user.
rescue_kernel_id = None
rescue_ramdisk_id = None
rng_dev_path = None
snapshot_compression = False
snapshot_image_format = None
snapshots_directory = $instances_path/snapshots
sparse_logical_volumes = False
sysinfo_serial = auto
uid_maps =
use_usb_tablet = True
use_virtio_for_bridges = True
virt_type = kvm
volume_clear = zero
volume_clear_size = 0
volume_drivers =
iscsi=nova.virt.libvirt.volume.LibvirtISCSIVolumeDriver,
iser=nova.virt.libvirt.volume.LibvirtISERVolumeDriver,
local=nova.virt.libvirt.volume.LibvirtVolumeDriver,
fake=nova.virt.libvirt.volume.LibvirtFakeVolumeDriver,
rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,
sheepdog=nova.virt.libvirt.volume.LibvirtNetVolumeDriver,
nfs=nova.virt.libvirt.volume.LibvirtNFSVolumeDriver,
aoe=nova.virt.libvirt.volume.LibvirtAOEVolumeDriver,
glusterfs=nova.virt.libvirt.volume.LibvirtGlusterfsVolumeDriver,
fibre_channel=nova.virt.libvirt.volume.LibvirtFibreChannelVolumeDriver,
scality=nova.virt.libvirt.volume.LibvirtScalityVolumeDriver
(ListOpt) DEPRECATED. Libvirt handlers for remote volumes. This option is deprecated and will be removed in the Kilo release.
wait_soft_reboot_seconds = 120
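A sketch of a KVM compute node that stores instance disks in Ceph RBD; the pool name and CPU model are illustrative (valid model names come from /usr/share/libvirt/cpu_map.xml):

```ini
[libvirt]
virt_type = kvm
# Present a fixed CPU model to guests for live-migration safety
cpu_mode = custom
cpu_model = SandyBridge
# Keep instance disks in a Ceph RBD pool
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
```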
Description
[DEFAULT]
live_migration_retry_count = 30
[libvirt]
live_migration_bandwidth = 0
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,
VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE,
VIR_MIGRATE_TUNNELLED
live_migration_uri = qemu+tcp://%s/system
(StrOpt) Migration target URI (any included "%s" is replaced with the migration target hostname)
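Putting the live-migration options together (this restates the defaults; the flags shown enable tunnelled peer-to-peer migration):

```ini
[libvirt]
# %s is replaced with the migration target hostname
live_migration_uri = qemu+tcp://%s/system
live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
# 0 lets libvirt pick a suitable bandwidth (MiB/s)
live_migration_bandwidth = 0
```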
Description
[DEFAULT]
debug = False
(BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
fatal_deprecations = False
fatal_exception_format_errors = False
log_config_append = None
log_dir = None
(StrOpt) (Optional) The base directory used for relative --log-file paths.
log_file = None
(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format = None
(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available
logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and
logging_default_format_string instead.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s %(message)s
logging_debug_format_suffix = %(funcName)s
%(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s %(message)s
(StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d
%(process)d TRACE %(name)s %(instance)s
publish_errors = False
syslog_log_facility = LOG_USER
use_stderr = True
use_syslog = False
use_syslog_rfc_format = False
verbose = False
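For example, to send logs to a per-service file rather than stdout (paths are illustrative):

```ini
[DEFAULT]
# INFO-level logging to /var/log/nova/nova-compute.log
verbose = True
log_dir = /var/log/nova
log_file = nova-compute.log
use_syslog = False
```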
Description
[DEFAULT]
metadata_host = $my_ip
metadata_listen = 0.0.0.0
metadata_listen_port = 8775
metadata_manager =
nova.api.manager.MetadataManager
metadata_port = 8775
metadata_workers = None
(IntOpt) Number of workers for metadata service. The default will be the number of CPUs available.
vendordata_driver =
nova.api.metadata.vendordata_json.JsonFileVendorData
vendordata_jsonfile_path = None
Description
[DEFAULT]
allow_same_net_traffic = True
auto_assign_floating_ip = False
cnt_vpn_clients = 0
create_unique_mac_address_attempts = 5
default_access_ip_network_name = None
default_floating_pool = nova
defer_iptables_apply = False
dhcp_domain = novalocal
dhcp_lease_time = 86400
dhcpbridge = $bindir/nova-dhcpbridge
dhcpbridge_flagfile = ['/etc/nova/nova-dhcpbridge.conf']
dns_server = []
dns_update_periodic_interval = -1
dnsmasq_config_file =
firewall_driver = None
fixed_ip_disassociate_timeout = 600
flat_injected = False
flat_interface = None
flat_network_bridge = None
flat_network_dns = 8.8.4.4
floating_ip_dns_manager =
nova.network.noop_dns_driver.NoopDNSDriver
(StrOpt) Full class name for the DNS Manager for floating
IPs
force_dhcp_release = True
force_snat_range = []
forward_bridge_interface = ['all']
gateway = None
injected_network_template = $pybasedir/nova/virt/
interfaces.template
instance_dns_domain =
(StrOpt) Full class name for the DNS Zone for instance IPs
instance_dns_manager =
nova.network.noop_dns_driver.NoopDNSDriver
(StrOpt) Full class name for the DNS Manager for instance
IPs
iptables_bottom_regex =
iptables_drop_action = DROP
iptables_top_regex =
l3_lib = nova.network.l3.LinuxNetL3
linuxnet_interface_driver =
nova.network.linux_net.LinuxBridgeInterfaceDriver
linuxnet_ovs_integration_bridge = br-int
multi_host = False
network_allocate_retries = 0
network_api_class = nova.network.api.API
network_device_mtu = None
network_driver = nova.network.linux_net
network_manager =
nova.network.manager.VlanManager
network_size = 256
network_topic = network
networks_path = $state_path/networks
num_networks = 1
ovs_vsctl_timeout = 120
public_interface = eth0
routing_source_ip = $my_ip
security_group_api = nova
send_arp_for_ha = False
send_arp_for_ha_count = 3
share_dhcp_address = False
teardown_unused_network_gateway = False
update_dns_entries = False
use_network_dns_servers = False
(BoolOpt) If set, uses the dns1 and dns2 entries from the network reference as DNS servers.
use_neutron_default_nets = False
use_single_default_gateway = False
vlan_interface = None
vlan_start = 100
[vmware]
vlan_interface = vmnic0
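A sketch of a classic nova-network VLAN deployment (interface names are illustrative):

```ini
[DEFAULT]
network_manager = nova.network.manager.VlanManager
# Tenant traffic rides VLANs 100+ on eth1; floating IPs on eth0
vlan_interface = eth1
vlan_start = 100
network_size = 256
public_interface = eth0
force_dhcp_release = True
```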
Description
[DEFAULT]
neutron_default_tenant_id = default
[neutron]
admin_auth_url = http://localhost:5000/v2.0
admin_password = None
admin_tenant_id = None
admin_tenant_name = None
admin_user_id = None
admin_username = None
allow_duplicate_networks = False
(BoolOpt) Allow an instance to have multiple vNICs attached to the same Neutron network.
api_insecure = False
auth_strategy = keystone
ca_certificates_file = None
extension_sync_interval = 600
metadata_proxy_shared_secret =
ovs_bridge = br-int
region_name = None
service_metadata_proxy = False
url = http://127.0.0.1:9696
url_timeout = 30
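A sketch of pointing Compute at Neutron; the neutronv2 API class name, host names, and password placeholder are assumptions, not values from the table above:

```ini
[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron

[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:5000/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = NEUTRON_PASS
```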
Description
[DEFAULT]
pci_alias = []
(MultiStrOpt) An alias for a PCI passthrough device requirement. This allows users to specify the alias in the extra_spec for a flavor, without needing to repeat all the PCI property requirements. For example: pci_alias = { "name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "ACCEL" } defines an alias for the Intel QuickAssist card. (multi valued)
pci_passthrough_whitelist = []
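Pairing the two options, based on the QuickAssist example from the description above (the device IDs are the ones quoted there):

```ini
[DEFAULT]
# Flavors can then request the device via the "QuickAssist" alias
pci_alias = { "name": "QuickAssist", "product_id": "0443", "vendor_id": "8086", "device_type": "ACCEL" }
# Only whitelisted host devices may be passed through
pci_passthrough_whitelist = { "vendor_id": "8086", "product_id": "0443" }
```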
Description
[DEFAULT]
periodic_enable = True
periodic_fuzzy_delay = 60
(IntOpt) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
run_external_periodic_tasks = True
Description
[DEFAULT]
allow_instance_snapshots = True
allow_migrate_to_same_host = False
allow_resize_to_same_host = False
max_age = 0
max_local_block_devices = 3
osapi_compute_unique_server_name_scope =
osapi_max_limit = 1000
(IntOpt) The maximum number of items returned in a single response from a collection resource
osapi_max_request_body_size = 114688
password_length = 12
policy_default_rule = default
policy_file = policy.json
reservation_expire = 86400
resize_fs_using_block_device = False
(BoolOpt) Attempt to resize the filesystem by accessing the image over a block device. This is done by the host and may not be necessary if the image contains a recent version of cloud-init. Possible mechanisms require the nbd driver (for qcow and raw), or loop (for raw).
until_refresh = 0
Description
[DEFAULT]
bandwidth_poll_interval = 600
enable_network_quota = False
quota_cores = 20
quota_driver = nova.quota.DbQuotaDriver
quota_fixed_ips = -1
quota_floating_ips = 10
quota_injected_file_content_bytes = 10240
quota_injected_file_path_length = 255
quota_injected_files = 5
quota_instances = 10
quota_key_pairs = 100
quota_metadata_items = 128
quota_ram = 51200
quota_security_group_rules = 20
quota_security_groups = 10
quota_server_group_members = 10
quota_server_groups = 10
[cells]
bandwidth_update_interval = 600
Description
[rdp]
enabled = False
html5_proxy_base_url = http://127.0.0.1:6083/
Description
[matchmaker_redis]
host = 127.0.0.1
password = None
port = 6379
[matchmaker_ring]
ringfile = /etc/oslo/matchmaker_ring.json
Description
[DEFAULT]
filters_path = /etc/nova/rootwrap.d,/usr/share/nova/rootwrap
List of directories to load filter definitions from (separated by ','). These directories MUST be writable only by root!
exec_dirs = /sbin,/usr/sbin,/bin,/usr/bin
use_syslog = False
syslog_log_facility = syslog
Which syslog facility to use. Valid values include auth, authpriv, syslog, local0, local1... Default value is 'syslog'
syslog_log_level = ERROR
Description
[DEFAULT]
buckets_path = $state_path/buckets
image_decryption_dir = /tmp
s3_access_key = notchecked
s3_affix_tenant = False
s3_host = $my_ip
s3_listen = 0.0.0.0
s3_listen_port = 3333
s3_port = 3333
s3_secret_key = notchecked
s3_use_ssl = False
Description
[DEFAULT]
aggregate_image_properties_isolation_namespace =
None
aggregate_image_properties_isolation_separator = .
cpu_allocation_ratio = 16.0
disk_allocation_ratio = 1.0
isolated_hosts =
isolated_images =
max_instances_per_host = 50
max_io_ops_per_host = 8
ram_allocation_ratio = 1.5
ram_weight_multiplier = 1.0
reserved_host_disk_mb = 0
reserved_host_memory_mb = 512
restrict_isolated_hosts_to_isolated_images = True
scheduler_available_filters =
['nova.scheduler.filters.all_filters']
scheduler_driver =
nova.scheduler.filter_scheduler.FilterScheduler
scheduler_driver_task_period = 60
scheduler_host_manager =
nova.scheduler.host_manager.HostManager
scheduler_host_subset_size = 1
scheduler_json_config_location =
scheduler_manager =
nova.scheduler.manager.SchedulerManager
scheduler_max_attempts = 3
scheduler_topic = scheduler
scheduler_use_baremetal_filters = False
scheduler_weight_classes =
nova.scheduler.weights.all_weighers
[cells]
ram_weight_multiplier = 10.0
scheduler_filter_classes = nova.cells.filters.all_filters
scheduler_retries = 10
scheduler_retry_delay = 2
scheduler_weight_classes = nova.cells.weights.all_weighers
(ListOpt) Weigher classes the cells scheduler should use. An entry of "nova.cells.weights.all_weighers" maps to all cell weighers included with nova.
[metrics]
required = True
weight_multiplier = 1.0
weight_of_unavailable = -10000.0
(FloatOpt) The final weight value to be returned if required is set to False and any one of the metrics set by
weight_setting is unavailable.
weight_setting =
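The most commonly tuned scheduler knobs above combine as follows (the ratios are illustrative):

```ini
[DEFAULT]
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
# Overcommit CPU 16:1 and RAM 1.5:1
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5
disk_allocation_ratio = 1.0
# Hold back 512 MB of RAM per host for the hypervisor itself
reserved_host_memory_mb = 512
```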
Description
[serial_console]
base_url = http://127.0.0.1:6083/
enabled = False
listen = 127.0.0.1
port_range = 10000:20000
(StrOpt) Range of TCP ports to use for serial ports on compute hosts
proxyclient_address = 127.0.0.1
(StrOpt) The address to which proxy clients (like nova-serialproxy) should connect
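A sketch of enabling serial consoles on a compute node (addresses are illustrative):

```ini
[serial_console]
enabled = True
# Bind instance serial ports on all interfaces, in this range
listen = 0.0.0.0
port_range = 10000:20000
# Address nova-serialproxy uses to reach this host
proxyclient_address = 10.0.0.31
```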
Description
[spice]
agent_enabled = True
enabled = False
html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html
keymap = en-us
server_listen = 127.0.0.1
server_proxyclient_address = 127.0.0.1
(StrOpt) The address to which proxy clients (like nova-spicehtml5proxy) should connect
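A sketch of a SPICE-enabled compute node (host names and addresses are illustrative):

```ini
[spice]
enabled = True
agent_enabled = True
html5proxy_base_url = http://controller:6082/spice_auto.html
server_listen = 0.0.0.0
# Address nova-spicehtml5proxy uses to reach this host
server_proxyclient_address = 10.0.0.31
```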
Description
[DEFAULT]
fake_call = False
fake_network = False
fake_rabbit = False
monkey_patch = False
monkey_patch_modules =
nova.api.ec2.cloud:nova.notifications.notify_decorator,
nova.compute.api:nova.notifications.notify_decorator
Description
[baremetal]
tile_pdu_ip = 10.0.100.1
tile_pdu_mgr = /tftpboot/pdu_mgr
tile_pdu_off = 2
tile_pdu_on = 1
tile_pdu_status = 9
tile_power_wait = 9
Description
[trusted_computing]
attestation_api_url = /OpenAttestationWebServices/V1.0
attestation_auth_blob = None
attestation_auth_timeout = 60
attestation_port = 8443
attestation_server = None
attestation_server_ca_file = None
Description
[cells]
scheduler = nova.cells.scheduler.CellsScheduler
[upgrade_levels]
cells = None
cert = None
compute = None
conductor = None
console = None
consoleauth = None
intercell = None
network = None
scheduler = None
Description
[vmware]
api_retry_count = 10
cluster_name = None
datastore_regex = None
host_ip = None
host_password = None
host_port = 443
host_username = None
integration_bridge = br-int
maximum_objects = 100
task_poll_interval = 0.5
use_linked_clone = True
wsdl_location = None
Description
[DEFAULT]
novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html
vnc_enabled = True
vnc_keymap = en-us
vncserver_listen = 127.0.0.1
vncserver_proxyclient_address = 127.0.0.1
(StrOpt) The address to which proxy clients (like nova-xvpvncproxy) should connect
[vmware]
vnc_port = 5900
vnc_port_total = 10000
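Typical VNC settings on a compute node (host names and addresses are illustrative):

```ini
[DEFAULT]
vnc_enabled = True
# Accept VNC connections on all interfaces
vncserver_listen = 0.0.0.0
# Address the noVNC proxy uses to reach this host
vncserver_proxyclient_address = 10.0.0.31
novncproxy_base_url = http://controller:6080/vnc_auto.html
```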
Description
[DEFAULT]
block_device_allocate_retries = 60
block_device_allocate_retries_interval = 3
volume_api_class = nova.volume.cinder.API
(StrOpt) The full class name of the volume API class to use
volume_usage_poll_interval = 0
[baremetal]
iscsi_iqn_prefix = iqn.2010-10.org.openstack.baremetal
volume_driver =
nova.virt.baremetal.volume_driver.LibvirtVolumeDriver
[cinder]
api_insecure = False
ca_certificates_file = None
catalog_info = volume:cinder:publicURL
cross_az_attach = True
endpoint_template = None
(StrOpt) Override service catalog lookup with template for cinder endpoint e.g. http://localhost:8776/v1/
%(project_id)s
http_retries = 3
http_timeout = None
os_region_name = None
[hyperv]
force_volumeutils_v1 = False
volume_attach_retry_count = 10
volume_attach_retry_interval = 5
[libvirt]
glusterfs_mount_point_base = $state_path/mnt
nfs_mount_options = None
(StrOpt) Mount options passed to the NFS client. See the nfs man page for details
nfs_mount_point_base = $state_path/mnt
num_aoe_discover_tries = 3
num_iscsi_scan_tries = 5
num_iser_scan_tries = 5
qemu_allowed_storage_drivers =
rbd_secret_uuid = None
rbd_user = None
scality_sofs_config = None
scality_sofs_mount_point = $state_path/scality
[xenserver]
block_device_creation_timeout = 10
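A sketch of the Cinder-facing options (this mostly restates the defaults):

```ini
[cinder]
catalog_info = volume:cinder:publicURL
# Allow attaching a volume from a different availability zone
cross_az_attach = True
http_retries = 3

[libvirt]
# Retry iSCSI device discovery up to five times
num_iscsi_scan_tries = 5
```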
Description
[DEFAULT]
boot_script_template = $pybasedir/nova/cloudpipe/bootscript.template
dmz_cidr =
dmz_mask = 255.255.255.0
dmz_net = 10.0.0.0
vpn_flavor = m1.tiny
vpn_image_id = 0
vpn_ip = $my_ip
vpn_key_suffix = -vpn
(StrOpt) Suffix to add to project name for vpn key and secgroups
vpn_start = 1000
Description
[DEFAULT]
console_driver = nova.console.xvp.XVPConsoleProxy
console_vmrc_error_retries = 10
console_vmrc_port = 443
console_xvp_conf = /etc/xvp.conf
console_xvp_conf_template = $pybasedir/nova/console/xvp.conf.template
console_xvp_log = /var/log/xvp.log
console_xvp_multiplex_port = 5900
console_xvp_pid = /var/run/xvp.pid
stub_compute = False
[libvirt]
xen_hvmloader_path = /usr/lib/xen/boot/hvmloader
[xenserver]
agent_path = usr/sbin/xe-update-networking
(StrOpt) Specifies the path in which the XenAPI guest agent should be located. If the agent is present, network configuration is not injected into the image. Used if compute_driver=xenapi.XenAPIDriver and flat_injected=True
agent_resetnetwork_timeout = 60
agent_timeout = 30
agent_version_timeout = 300
cache_images = all
check_host = True
connection_concurrent = 5
connection_password = None
connection_url = None
connection_username = root
default_os_type = linux
disable_agent = False
(BoolOpt) Disables the use of the XenAPI agent in any image regardless of what image properties are present.
image_compression_level = None
image_upload_handler =
nova.virt.xenapi.image.glance.GlanceStore
introduce_vdi_retry_wait = 20
ipxe_boot_menu_url = None
ipxe_mkisofs_cmd = mkisofs
ipxe_network_name = None
iqn_prefix = iqn.2010-10.org.openstack
login_timeout = 10
max_kernel_ramdisk_size = 16777216
num_vbd_unplug_retries = 10
ovs_integration_bridge = xapi1
remap_vbd_dev = False
remap_vbd_dev_prefix = sd
running_timeout = 60
sparse_copy = True
sr_base_path = /var/run/sr-mount
sr_matching_filter = default-sr:true
target_host = None
target_port = 3260
torrent_base_url = None
torrent_download_stall_cutoff = 600
torrent_images = none
torrent_listen_port_end = 6891
torrent_listen_port_start = 6881
torrent_max_last_accessed = 86400
(IntOpt) Cached torrent files not accessed within this number of seconds can be reaped
torrent_max_seeder_processes_per_host = 1
(IntOpt) Maximum number of seeder processes to run concurrently within a given dom0. (-1 = no limit)
torrent_seed_chance = 1.0
torrent_seed_duration = 3600
use_agent_default = False
use_join_force = True
vhd_coalesce_max_attempts = 20
vhd_coalesce_poll_interval = 5.0
vif_driver = nova.virt.xenapi.vif.XenAPIBridgeDriver
Description
[DEFAULT]
xvpvncproxy_base_url = http://127.0.0.1:6081/console
xvpvncproxy_host = 0.0.0.0
xvpvncproxy_port = 6081
Description
[zookeeper]
address = None
recv_timeout = 4000
sg_prefix = /servicegroups
sg_retry_interval = 5
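A sketch of switching service liveness tracking to ZooKeeper; the servicegroup_driver option and the address are assumptions not shown in the table above:

```ini
[DEFAULT]
# Assumed option name; selects the ZooKeeper servicegroup driver
servicegroup_driver = zk

[zookeeper]
address = controller:2181
recv_timeout = 4000
sg_prefix = /servicegroups
```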
api-paste.ini
The Compute service stores its API configuration settings in the api-paste.ini file.
############
# Metadata #
############
[composite:metadata]
use = egg:Paste#urlmap
/: meta
[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp
[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory
#######
# EC2 #
#######
[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud
[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator
ec2executor
[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory
[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory
[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory
[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory
[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory
[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory
[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory
[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory
[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory
#############
# OpenStack #
#############
[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2
/v3: openstack_compute_api_v3
[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit
osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext
osapi_compute_app_v2
[composite:openstack_compute_api_v3]
use = call:nova.api.auth:pipeline_factory_v3
noauth = faultwrap sizelimit noauth_v3 osapi_compute_app_v3
keystone = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v3
[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory
[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory
[filter:noauth_v3]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddlewareV3.factory
[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.
limits:RateLimitingMiddleware.factory
[filter:sizelimit]
paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory
[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory
[app:osapi_compute_app_v3]
paste.app_factory = nova.api.openstack.compute:APIRouterV3.factory
[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp
[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory
##########
# Shared #
##########
[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
policy.json
The policy.json file defines additional access controls that apply to the Compute service.
{
"context_is_admin": "role:admin",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"cells_scheduler_filter:TargetCellFilter": "is_admin:True",
"compute:create": "",
"compute:create:attach_network": "",
"compute:create:attach_volume": "",
"compute:create:forced_host": "is_admin:True",
"compute:get_all": "",
"compute:get_all_tenants": "",
"compute:start": "rule:admin_or_owner",
"compute:stop": "rule:admin_or_owner",
"compute:unlock_override": "rule:admin_api",
"compute:shelve": "",
"compute:shelve_offload": "",
"compute:unshelve": "",
"compute:volume_snapshot_create": "",
"compute:volume_snapshot_delete": "",
"admin_api": "is_admin:True",
"compute:v3:servers:start": "rule:admin_or_owner",
"compute:v3:servers:stop": "rule:admin_or_owner",
"compute_extension:v3:os-access-ips:discoverable": "",
"compute_extension:v3:os-access-ips": "",
"compute_extension:accounts": "rule:admin_api",
"compute_extension:admin_actions": "rule:admin_api",
"compute_extension:admin_actions:pause": "rule:admin_or_owner",
"compute_extension:admin_actions:unpause": "rule:admin_or_owner",
"compute_extension:admin_actions:suspend": "rule:admin_or_owner",
"compute_extension:admin_actions:resume": "rule:admin_or_owner",
"compute_extension:admin_actions:lock": "rule:admin_or_owner",
"compute_extension:admin_actions:unlock": "rule:admin_or_owner",
"compute_extension:admin_actions:resetNetwork": "rule:admin_api",
"compute_extension:admin_actions:injectNetworkInfo": "rule:admin_api",
"compute_extension:admin_actions:createBackup": "rule:admin_or_owner",
"compute_extension:admin_actions:migrateLive": "rule:admin_api",
"compute_extension:admin_actions:resetState": "rule:admin_api",
"compute_extension:admin_actions:migrate": "rule:admin_api",
"compute_extension:v3:os-admin-actions": "rule:admin_api",
"compute_extension:v3:os-admin-actions:discoverable": "",
"compute_extension:v3:os-admin-actions:reset_network": "rule:admin_api",
"compute_extension:v3:os-admin-actions:inject_network_info":
"rule:admin_api",
"compute_extension:v3:os-admin-actions:reset_state": "rule:admin_api",
"compute_extension:v3:os-admin-password": "",
"compute_extension:v3:os-admin-password:discoverable": "",
"compute_extension:aggregates": "rule:admin_api",
"compute_extension:v3:os-aggregates:discoverable": "",
"compute_extension:v3:os-aggregates:index": "rule:admin_api",
"compute_extension:v3:os-aggregates:create": "rule:admin_api",
"compute_extension:v3:os-aggregates:show": "rule:admin_api",
"compute_extension:v3:os-aggregates:update": "rule:admin_api",
"compute_extension:v3:os-aggregates:delete": "rule:admin_api",
"compute_extension:v3:os-aggregates:add_host": "rule:admin_api",
"compute_extension:v3:os-aggregates:remove_host": "rule:admin_api",
"compute_extension:v3:os-aggregates:set_metadata": "rule:admin_api",
"compute_extension:agents": "rule:admin_api",
"compute_extension:v3:os-agents": "rule:admin_api",
"compute_extension:v3:os-agents:discoverable": "",
"compute_extension:attach_interfaces": "",
"compute_extension:v3:os-attach-interfaces": "",
"compute_extension:v3:os-attach-interfaces:discoverable": "",
"compute_extension:baremetal_nodes": "rule:admin_api",
"compute_extension:cells": "rule:admin_api",
"compute_extension:v3:os-cells": "rule:admin_api",
"compute_extension:v3:os-cells:discoverable": "",
"compute_extension:certificates": "",
"compute_extension:v3:os-certificates:create": "",
"compute_extension:v3:os-certificates:show": "",
"compute_extension:v3:os-certificates:discoverable": "",
"compute_extension:cloudpipe": "rule:admin_api",
"compute_extension:cloudpipe_update": "rule:admin_api",
"compute_extension:console_output": "",
"compute_extension:v3:consoles:discoverable": "",
"compute_extension:v3:os-console-output:discoverable": "",
"compute_extension:v3:os-console-output": "",
"compute_extension:consoles": "",
"compute_extension:v3:os-remote-consoles": "",
"compute_extension:v3:os-remote-consoles:discoverable": "",
"compute_extension:createserverext": "",
"compute_extension:v3:os-create-backup:discoverable": "",
"compute_extension:v3:os-create-backup": "rule:admin_or_owner",
"compute_extension:deferred_delete": "",
"compute_extension:v3:os-deferred-delete": "",
"compute_extension:v3:os-deferred-delete:discoverable": "",
"compute_extension:disk_config": "",
"compute_extension:evacuate": "rule:admin_api",
266
A F T - Ju no -D R A F T - Ju no -D R A F T - Ju no -D RA F T - Ju no -D RA F T - Ju no -D RA F T - Ju no -
October 7, 2014
juno
"compute_extension:v3:os-evacuate": "rule:admin_api",
"compute_extension:v3:os-evacuate:discoverable": "",
"compute_extension:extended_server_attributes": "rule:admin_api",
"compute_extension:v3:os-extended-server-attributes": "rule:admin_api",
"compute_extension:v3:os-extended-server-attributes:discoverable": "",
"compute_extension:extended_status": "",
"compute_extension:v3:os-extended-status": "",
"compute_extension:v3:os-extended-status:discoverable": "",
"compute_extension:extended_availability_zone": "",
"compute_extension:v3:os-extended-availability-zone": "",
"compute_extension:v3:os-extended-availability-zone:discoverable": "",
"compute_extension:extended_ips": "",
"compute_extension:extended_ips_mac": "",
"compute_extension:extended_vif_net": "",
"compute_extension:v3:extension_info:discoverable": "",
"compute_extension:extended_volumes": "",
"compute_extension:v3:os-extended-volumes": "",
"compute_extension:v3:os-extended-volumes:swap": "",
"compute_extension:v3:os-extended-volumes:discoverable": "",
"compute_extension:v3:os-extended-volumes:attach": "",
"compute_extension:v3:os-extended-volumes:detach": "",
"compute_extension:fixed_ips": "rule:admin_api",
"compute_extension:flavor_access": "",
"compute_extension:flavor_access:addTenantAccess": "rule:admin_api",
"compute_extension:flavor_access:removeTenantAccess": "rule:admin_api",
"compute_extension:v3:flavor-access": "",
"compute_extension:v3:flavor-access:discoverable": "",
"compute_extension:v3:flavor-access:remove_tenant_access": "rule:admin_api",
"compute_extension:v3:flavor-access:add_tenant_access": "rule:admin_api",
"compute_extension:flavor_disabled": "",
"compute_extension:flavor_rxtx": "",
"compute_extension:v3:os-flavor-rxtx": "",
"compute_extension:v3:os-flavor-rxtx:discoverable": "",
"compute_extension:flavor_swap": "",
"compute_extension:flavorextradata": "",
"compute_extension:flavorextraspecs:index": "",
"compute_extension:flavorextraspecs:show": "",
"compute_extension:flavorextraspecs:create": "rule:admin_api",
"compute_extension:flavorextraspecs:update": "rule:admin_api",
"compute_extension:flavorextraspecs:delete": "rule:admin_api",
"compute_extension:v3:flavors:discoverable": "",
"compute_extension:v3:flavor-extra-specs:discoverable": "",
"compute_extension:v3:flavor-extra-specs:index": "",
"compute_extension:v3:flavor-extra-specs:show": "",
"compute_extension:v3:flavor-extra-specs:create": "rule:admin_api",
"compute_extension:v3:flavor-extra-specs:update": "rule:admin_api",
"compute_extension:v3:flavor-extra-specs:delete": "rule:admin_api",
"compute_extension:flavormanage": "rule:admin_api",
"compute_extension:v3:flavor-manage": "rule:admin_api",
"compute_extension:floating_ip_dns": "",
"compute_extension:floating_ip_pools": "",
"compute_extension:floating_ips": "",
"compute_extension:floating_ips_bulk": "rule:admin_api",
"compute_extension:fping": "",
"compute_extension:fping:all_tenants": "rule:admin_api",
"compute_extension:hide_server_addresses": "is_admin:False",
"compute_extension:v3:os-hide-server-addresses": "is_admin:False",
"compute_extension:v3:os-hide-server-addresses:discoverable": "",
"compute_extension:hosts": "rule:admin_api",
"compute_extension:v3:os-hosts": "rule:admin_api",
"compute_extension:v3:os-hosts:discoverable": "",
"compute_extension:hypervisors": "rule:admin_api",
"compute_extension:v3:os-hypervisors": "rule:admin_api",
"compute_extension:v3:os-hypervisors:discoverable": "",
"compute_extension:image_size": "",
"compute_extension:instance_actions": "",
"compute_extension:v3:os-instance-actions": "",
"compute_extension:v3:os-instance-actions:discoverable": "",
"compute_extension:instance_actions:events": "rule:admin_api",
"compute_extension:v3:os-instance-actions:events": "rule:admin_api",
"compute_extension:instance_usage_audit_log": "rule:admin_api",
"compute_extension:v3:ips:discoverable": "",
"compute_extension:keypairs": "",
"compute_extension:keypairs:index": "",
"compute_extension:keypairs:show": "",
"compute_extension:keypairs:create": "",
"compute_extension:keypairs:delete": "",
"compute_extension:v3:keypairs:discoverable": "",
"compute_extension:v3:keypairs": "",
"compute_extension:v3:keypairs:index": "",
"compute_extension:v3:keypairs:show": "",
"compute_extension:v3:keypairs:create": "",
"compute_extension:v3:keypairs:delete": "",
"compute_extension:v3:os-lock-server:discoverable": "",
"compute_extension:v3:os-lock-server:lock": "rule:admin_or_owner",
"compute_extension:v3:os-lock-server:unlock": "rule:admin_or_owner",
"compute_extension:v3:os-migrate-server:discoverable": "",
"compute_extension:v3:os-migrate-server:migrate": "rule:admin_api",
"compute_extension:v3:os-migrate-server:migrate_live": "rule:admin_api",
"compute_extension:multinic": "",
"compute_extension:v3:os-multinic": "",
"compute_extension:v3:os-multinic:discoverable": "",
"compute_extension:networks": "rule:admin_api",
"compute_extension:networks:view": "",
"compute_extension:networks_associate": "rule:admin_api",
"compute_extension:v3:os-pause-server:discoverable": "",
"compute_extension:v3:os-pause-server:pause": "rule:admin_or_owner",
"compute_extension:v3:os-pause-server:unpause": "rule:admin_or_owner",
"compute_extension:v3:os-pci:pci_servers": "",
"compute_extension:v3:os-pci:discoverable": "",
"compute_extension:v3:os-pci:index": "rule:admin_api",
"compute_extension:v3:os-pci:detail": "rule:admin_api",
"compute_extension:v3:os-pci:show": "rule:admin_api",
"compute_extension:quotas:show": "",
"compute_extension:quotas:update": "rule:admin_api",
"compute_extension:quotas:delete": "rule:admin_api",
"compute_extension:v3:os-quota-sets:discoverable": "",
"compute_extension:v3:os-quota-sets:show": "",
"compute_extension:v3:os-quota-sets:update": "rule:admin_api",
"compute_extension:v3:os-quota-sets:delete": "rule:admin_api",
"compute_extension:v3:os-quota-sets:detail": "rule:admin_api",
"compute_extension:quota_classes": "",
"compute_extension:rescue": "",
"compute_extension:v3:os-rescue": "",
"compute_extension:v3:os-rescue:discoverable": "",
"compute_extension:v3:os-scheduler-hints:discoverable": "",
"compute_extension:security_group_default_rules": "rule:admin_api",
"compute_extension:security_groups": "",
"compute_extension:v3:os-security-groups": "",
"compute_extension:v3:os-security-groups:discoverable": "",
"compute_extension:server_diagnostics": "rule:admin_api",
"compute_extension:v3:os-server-diagnostics": "rule:admin_api",
"compute_extension:v3:os-server-diagnostics:discoverable": "",
"compute_extension:server_groups": "",
"compute_extension:server_password": "",
"compute_extension:v3:os-server-password": "",
"compute_extension:v3:os-server-password:discoverable": "",
"compute_extension:server_usage": "",
"compute_extension:v3:os-server-usage": "",
"compute_extension:v3:os-server-usage:discoverable": "",
"compute_extension:services": "rule:admin_api",
"compute_extension:v3:os-services": "rule:admin_api",
"compute_extension:v3:os-services:discoverable": "",
"compute_extension:v3:server-metadata:discoverable": "",
"compute_extension:v3:servers:discoverable": "",
"compute_extension:shelve": "",
"compute_extension:shelveOffload": "rule:admin_api",
"compute_extension:v3:os-shelve:shelve": "",
"compute_extension:v3:os-shelve:shelve:discoverable": "",
"compute_extension:v3:os-shelve:shelve_offload": "rule:admin_api",
"compute_extension:simple_tenant_usage:show": "rule:admin_or_owner",
"compute_extension:v3:os-suspend-server:discoverable": "",
"compute_extension:v3:os-suspend-server:suspend": "rule:admin_or_owner",
"compute_extension:v3:os-suspend-server:resume": "rule:admin_or_owner",
"compute_extension:simple_tenant_usage:list": "rule:admin_api",
"compute_extension:unshelve": "",
"compute_extension:v3:os-shelve:unshelve": "",
"compute_extension:users": "rule:admin_api",
"compute_extension:v3:os-user-data:discoverable": "",
"compute_extension:virtual_interfaces": "",
"compute_extension:virtual_storage_arrays": "",
"compute_extension:volumes": "",
"compute_extension:volume_attachments:index": "",
"compute_extension:volume_attachments:show": "",
"compute_extension:volume_attachments:create": "",
"compute_extension:volume_attachments:update": "",
"compute_extension:volume_attachments:delete": "",
"compute_extension:volumetypes": "",
"compute_extension:availability_zone:list": "",
"compute_extension:v3:os-availability-zone:list": "",
"compute_extension:v3:os-availability-zone:discoverable": "",
"compute_extension:availability_zone:detail": "rule:admin_api",
"compute_extension:v3:os-availability-zone:detail": "rule:admin_api",
"compute_extension:used_limits_for_admin": "rule:admin_api",
"compute_extension:migrations:index": "rule:admin_api",
"compute_extension:v3:os-migrations:index": "rule:admin_api",
"compute_extension:v3:os-migrations:discoverable": "",
"compute_extension:os-assisted-volume-snapshots:create": "rule:admin_api",
"compute_extension:os-assisted-volume-snapshots:delete": "rule:admin_api",
"compute_extension:console_auth_tokens": "rule:admin_api",
"compute_extension:v3:os-console-auth-tokens": "rule:admin_api",
"compute_extension:os-server-external-events:create": "rule:admin_api",
"compute_extension:v3:os-server-external-events:create": "rule:admin_api",
"volume:create": "",
"volume:get_all": "",
"volume:get_volume_metadata": "",
"volume:get_snapshot": "",
"volume:get_all_snapshots": "",
"volume_extension:types_manage": "rule:admin_api",
"volume_extension:types_extra_specs": "rule:admin_api",
"volume_extension:volume_admin_actions:reset_status": "rule:admin_api",
"volume_extension:snapshot_admin_actions:reset_status": "rule:admin_api",
"volume_extension:volume_admin_actions:force_delete": "rule:admin_api",
"network:get_all": "",
"network:get": "",
"network:create": "",
"network:delete": "",
"network:associate": "",
"network:disassociate": "",
"network:get_vifs_by_instance": "",
"network:allocate_for_instance": "",
"network:deallocate_for_instance": "",
"network:validate_networks": "",
"network:get_instance_uuids_by_ip_filter": "",
"network:get_instance_id_by_floating_address": "",
"network:setup_networks_on_host": "",
"network:get_backdoor_port": "",
"network:get_floating_ip": "",
"network:get_floating_ip_pools": "",
"network:get_floating_ip_by_address": "",
"network:get_floating_ips_by_project": "",
"network:get_floating_ips_by_fixed_address": "",
"network:allocate_floating_ip": "",
"network:deallocate_floating_ip": "",
"network:associate_floating_ip": "",
"network:disassociate_floating_ip": "",
"network:release_floating_ip": "",
"network:migrate_instance_start": "",
"network:migrate_instance_finish": "",
"network:get_fixed_ip": "",
"network:get_fixed_ip_by_address": "",
"network:add_fixed_ip_to_instance": "",
"network:remove_fixed_ip_from_instance": "",
"network:add_network_to_project": "",
"network:get_instance_nw_info": "",
"network:get_dns_domains": "",
"network:add_dns_entry": "",
"network:modify_dns_entry": "",
"network:delete_dns_entry": "",
"network:get_dns_entries_by_address": "",
"network:get_dns_entries_by_name": "",
"network:create_private_dns_domain": "",
"network:create_public_dns_domain": "",
"network:delete_dns_domain": ""
}
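In this file an empty string ("") means any authenticated user may perform the action, while "rule:..." values refer to other rules defined in the same file. As a rough sketch, the lookup nova performs can be imitated with plain Python over an excerpt of the file; the inline snippet below stands in for the real policy.json and only resolves one level of aliasing:

```python
import json

# A few rules excerpted from policy.json: "" means any authenticated
# user; "rule:..." refers to another rule defined in the same file.
policy = json.loads("""{
    "admin_api": "is_admin:True",
    "compute_extension:admin_actions:migrate": "rule:admin_api",
    "compute_extension:keypairs:create": ""
}""")

def rule_for(action):
    # Fall back to the file's "default" rule when an action is absent.
    return policy.get(action, policy.get("default", ""))

print(rule_for("compute_extension:admin_actions:migrate"))  # -> rule:admin_api
```

The real evaluation is done by nova's policy engine, which also understands operators such as "or" and checks like is_admin; this sketch only shows how actions map to rule strings.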
rootwrap.conf
The rootwrap.conf file defines configuration values used by the rootwrap script when
the Compute service needs to escalate its privileges to those of the root user.
# Configuration for nova-rootwrap
# This file should be owned by (and only writeable by) the root user
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin
# Enable logging to syslog
# Default value is False
use_syslog=False
# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, user0, user1...
# Default value is 'syslog'
syslog_log_facility=syslog
# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
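Because rootwrap.conf is a standard INI file, its settings can be inspected with Python's configparser. A minimal sketch, where the inline string mirrors the sample above:

```python
import configparser

# rootwrap.conf is plain INI; the same settings nova-rootwrap reads
# can be parsed with the standard library.
SAMPLE = """
[DEFAULT]
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin
use_syslog=False
syslog_log_facility=syslog
syslog_log_level=ERROR
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE)
# filters_path is a comma-separated list of directories.
filter_dirs = cfg["DEFAULT"]["filters_path"].split(",")
print(filter_dirs[0])  # -> /etc/nova/rootwrap.d
```

In a real deployment you would point cfg.read() at /etc/nova/rootwrap.conf instead of the inline sample.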
[DEFAULT] block_device_allocate_retries = 60
[DEFAULT] block_device_allocate_retries_interval = 3
[DEFAULT] quota_server_group_members = 10
[DEFAULT] quota_server_groups = 10
[DEFAULT] shutdown_timeout = 60
(IntOpt) Total amount of time to wait in seconds for an instance to perform a clean shutdown.
(StrOpt) Override service catalog lookup with template for cinder endpoint, e.g. http://localhost:8776/v1/%(project_id)s
[cinder] http_retries = 3
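The cinder endpoint template above uses Python %-style named interpolation, with %(project_id)s filled in per request. A minimal sketch of the expansion (the project ID below is made up):

```python
# Endpoint templates use Python %-style named substitution; nova fills
# in the requesting project's ID at runtime.
template = "http://localhost:8776/v1/%(project_id)s"
url = template % {"project_id": "demo-project"}  # hypothetical project ID
print(url)  # -> http://localhost:8776/v1/demo-project
```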
[glance] allowed_direct_url_schemes = []
(ListOpt) A list of URL schemes that can be downloaded directly via the direct_url. Currently supported schemes: [file].
[glance] num_retries = 0
[hyperv] wait_soft_reboot_seconds = 60
[ironic] api_max_retries = 60
[ironic] api_retry_interval = 2
[ironic] api_version = 1
[keystone_authtoken] check_revocations_for_cached = False
[libvirt] gid_maps = []
(StrOpt) Discard option for nova-managed disks (valid options are: ignore, unmap). Requires libvirt 1.0.6 or later, and QEMU 1.5 (raw format) or QEMU 1.6 (qcow2 format).
[libvirt] mem_stats_period_seconds = 10
[libvirt] uid_maps = []
(BoolOpt) Allow an instance to have multiple vNICs attached to the same Neutron network.
[neutron] metadata_proxy_shared_secret =
[neutron] url_timeout = 30
(StrOpt) Range of TCP ports to use for serial ports on compute hosts
(StrOpt) The address to which proxy clients (like nova-serialproxy) should connect
New default values

[DEFAULT] auth_strategy
    Previous default: noauth
    New default: keystone

[DEFAULT] default_log_levels
    Previous default: amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN
    New default: amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN

[DEFAULT] dhcp_lease_time
    Previous default: 120
    New default: 86400

[DEFAULT] logging_context_format_string
    Previous default: %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s] %(instance)s%(message)s
    New default: %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
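The context format string interpolates request-scoped fields such as %(request_id)s and %(user_identity)s that are not standard logging attributes. Plain Python logging can emulate this by setting extra attributes on the log record; a minimal sketch, omitting the %(instance)s prefix for brevity (all values below are made up):

```python
import logging

# The Juno context format string, minus the %(instance)s prefix.
fmt = ("%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s "
       "[%(request_id)s %(user_identity)s] %(message)s")

# Build a record by hand and attach the request-scoped fields that the
# format string expects; nova's logging context does this per request.
record = logging.LogRecord("nova.demo", logging.INFO, "demo.py", 1,
                           "instance started", None, None)
record.request_id = "req-abc"          # hypothetical request ID
record.user_identity = "demo admin"    # hypothetical "user project" string
line = logging.Formatter(fmt).format(record)
print(line)
```

With a logger, the same fields can be supplied via the extra= argument to log.info().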
[database] mysql_sql_mode
    Previous default: None
    New default: TRADITIONAL

[database] sqlite_db
    Previous default: nova.sqlite
    New default: oslo.sqlite

[keystone_authtoken] revocation_cache_time
    Previous default: 300
    New default: 10

[libvirt] block_migration_flag
    Previous default: VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC
    New default: VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED, VIR_MIGRATE_NON_SHARED_INC

[libvirt] live_migration_flag
    Previous default: VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER
    New default: VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED
Table 2.67. Deprecated options

Deprecated option -> New option
[DEFAULT] quota_injected_file_path_bytes -> [DEFAULT] quota_injected_file_path_length
[DEFAULT] neutron_url -> [neutron] url
[DEFAULT] neutron_ca_certificates_file -> [neutron] ca_certificates_file
[DEFAULT] neutron_api_insecure -> [neutron] api_insecure
[DEFAULT] neutron_admin_username -> [neutron] admin_username
[DEFAULT] neutron_auth_strategy -> [neutron] auth_strategy
[DEFAULT] glance_api_servers -> [glance] api_servers
[DEFAULT] neutron_admin_tenant_id -> [neutron] admin_tenant_id
[DEFAULT] neutron_admin_tenant_name -> [neutron] admin_tenant_name
[DEFAULT] neutron_metadata_proxy_shared_secret -> [neutron] metadata_proxy_shared_secret
[DEFAULT] glance_port -> [glance] port
[DEFAULT] neutron_region_name -> [neutron] region_name
[DEFAULT] neutron_admin_password -> [neutron] admin_password
[DEFAULT] glance_num_retries -> [glance] num_retries
[DEFAULT] service_neutron_metadata_proxy -> [neutron] service_metadata_proxy
[DEFAULT] glance_protocol -> [glance] protocol
[DEFAULT] neutron_ovs_bridge -> [neutron] ovs_bridge
[DEFAULT] glance_api_insecure -> [glance] api_insecure
[DEFAULT] glance_host -> [glance] host
[DEFAULT] neutron_admin_auth_url -> [neutron] admin_auth_url
[DEFAULT] neutron_extension_sync_interval -> [neutron] extension_sync_interval
[DEFAULT] neutron_url_timeout -> [neutron] url_timeout
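The renames above all follow one pattern: an option moves out of [DEFAULT], drops its service prefix, and lands in a service-specific section. A minimal sketch of such a mapping, covering only a few of the entries in the table (the mapping dict is illustrative, not exhaustive):

```python
# Map a few deprecated [DEFAULT] option names to their new
# section-qualified homes, as listed in the deprecation table.
DEPRECATED = {
    "neutron_url": ("neutron", "url"),
    "neutron_url_timeout": ("neutron", "url_timeout"),
    "glance_api_servers": ("glance", "api_servers"),
    "glance_num_retries": ("glance", "num_retries"),
}

def new_name(old_option):
    # Return the "[section] option" form used throughout this reference.
    section, option = DEPRECATED[old_option]
    return "[%s] %s" % (section, option)

print(new_name("glance_api_servers"))  # -> [glance] api_servers
```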
3. Dashboard
Table of Contents
Configure the dashboard ............................................................................. 276
Customize the dashboard ............................................................................ 280
Additional sample configuration files ........................................................... 281
Dashboard log files ..................................................................................... 292
This chapter describes how to configure the OpenStack dashboard with the Apache web server.
Specify the host for your OpenStack Identity Service endpoint in the /etc/openstack-dashboard/local_settings.py file with the OPENSTACK_HOST setting.
The following example shows this setting:
import os
from django.utils.translation import ugettext_lazy as _
DEBUG = False
TEMPLATE_DEBUG = DEBUG
PROD = True
USE_SSL = False
SITE_BRANDING = 'OpenStack Dashboard'
# Ubuntu-specific: Enables an extra panel in the 'Settings' section
# that easily generates a Juju environments.yaml for download,
# preconfigured with endpoints and credentials required for bootstrap
# and service deployment.
ENABLE_JUJU_PANEL = True
# Note: You should change this value
SECRET_KEY = 'elj1IWiLoWHgryYxFT6j7cM5fGOOxWY0'
# Specify a regular expression to validate user passwords.
# HORIZON_CONFIG = {
#     "password_validator": {
#         "regex": '.*',
#         "help_text": _("Your password does not meet the requirements.")
#     }
# }
LOCAL_PATH = os.path.dirname(os.path.abspath(__file__))
CACHES = {
'default': {
'BACKEND' : 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION' : '127.0.0.1:11211'
}
}
# Send email to the console by default
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
# Or send them to /dev/null
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
# For multiple regions uncomment this configuration, and add (endpoint, title).
# AVAILABLE_REGIONS = [
#     ('http://cluster1.example.com:5000/v2.0', 'cluster1'),
#     ('http://cluster2.example.com:5000/v2.0', 'cluster2'),
# ]
OPENSTACK_HOST = "127.0.0.1"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
# The OPENSTACK_KEYSTONE_BACKEND settings can be used to identify the
# capabilities of the auth backend for Keystone.
# If Keystone has been configured to use LDAP as the auth backend then set
# can_edit_user to False and name to 'ldap'.
#
# TODO(tres): Remove these once Keystone has an API to identify auth backend.
OPENSTACK_KEYSTONE_BACKEND = {
'name': 'native',
'can_edit_user': True
}
# OPENSTACK_ENDPOINT_TYPE specifies the endpoint type to use for the endpoints
# in the Keystone service catalog. Use this setting when Horizon is running
# external to the OpenStack environment. The default is 'internalURL'.
#OPENSTACK_ENDPOINT_TYPE = "publicURL"
# The number of Swift containers and objects to display on a single page before
# providing a paging element (a "more" link) to paginate results.
API_RESULT_LIMIT = 1000
# If you have external monitoring links, eg:
# EXTERNAL_MONITORING = [
#     ['Nagios','http://foo.com'],
#     ['Ganglia','http://bar.com'],
# ]
LOGGING = {
'version': 1,
# When set to True this will disable all logging except
# for loggers specified in this configuration dictionary. Note that
# if nothing is specified here and disable_existing_loggers is True,
# django.db.backends will still log unless it is disabled explicitly.
'disable_existing_loggers': False,
'handlers': {
'null': {
'level': 'DEBUG',
'class': 'django.utils.log.NullHandler',
},
'console': {
# Set the level to "DEBUG" for verbose output logging.
'level': 'INFO',
'class': 'logging.StreamHandler',
},
},
'loggers': {
# Logging from django.db.backends is VERY verbose, send to null
# by default.
'django.db.backends': {
'handlers': ['null'],
'propagate': False,
},
'horizon': {
'handlers': ['console'],
'propagate': False,
},
'novaclient': {
'handlers': ['console'],
'propagate': False,
},
'keystoneclient': {
'handlers': ['console'],
'propagate': False,
},
'nose.plugins.manager': {
'handlers': ['console'],
'propagate': False,
}
}
}
The service catalog configuration in the Identity Service determines whether a service
appears in the dashboard. For the full listing, see Horizon Settings and Configuration.
2. Restart the Apache web server. On Fedora/RHEL/CentOS:
# service httpd restart
3.
Example 3.1. Before
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
# For Apache http server 2.2 and earlier:
Order allow,deny
Allow from all
# For Apache http server 2.4 and later:
# Require all granted
</Directory>
Example 3.2. After
<VirtualHost *:80>
ServerName openstack.example.com
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
</IfModule>
<IfModule !mod_rewrite.c>
RedirectPermanent / https://openstack.example.com
</IfModule>
</VirtualHost>
<VirtualHost *:443>
ServerName openstack.example.com
SSLEngine On
# Remember to replace certificates and keys with valid paths in your environment
SSLCertificateFile /etc/apache2/SSL/openstack.example.com.crt
SSLCACertificateFile /etc/apache2/SSL/openstack.example.com.crt
SSLCertificateKeyFile /etc/apache2/SSL/openstack.example.com.key
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
# HTTP Strict Transport Security (HSTS) enforces that all communications
# with a server go over SSL. This mitigates the threat from attacks such
# as SSL-Strip which replaces links on the wire, stripping away https prefixes
# and potentially allowing an attacker to view confidential information on the
# wire
Header add Strict-Transport-Security "max-age=15768000"
WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
WSGIDaemonProcess horizon user=www-data group=www-data processes=3 threads=10
Alias /static /usr/share/openstack-dashboard/openstack_dashboard/static/
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
# For Apache http server 2.2 and earlier:
Order allow,deny
Allow from all
# For Apache http server 2.4 and later:
# Require all granted
</Directory>
</VirtualHost>
In this configuration, the Apache HTTP server listens on port 443 and redirects all nonsecure requests to the HTTPS protocol. The secured section defines the private key,
public key, and certificate to use.
4.
Restart memcached:
# service memcached restart
If you try to access the dashboard through HTTP, the browser redirects you to the
HTTPS page.
Edit /usr/share/pyshared/horizon/dashboards/nova/instances/templates/instances/_detail_vnc.html.
2.
Create a graphical logo with a transparent background. The text TGen Cloud in this
example is rendered through .png files of multiple sizes created with a graphics program.
Use a 200 x 27 image for the logged-in banner graphic, and a 365 x 50 image for the login screen graphic.
2.
Set the HTML title, which appears at the top of the browser window, by adding the
following line to /etc/openstack-dashboard/local_settings.py:
SITE_BRANDING = "Example, Inc. Cloud"
3.
4.
5.
Edit your CSS file to override the Ubuntu customizations in the ubuntu.css file.
Change the colors and image file names as appropriate, though the relative directory paths should be the same. The following example file shows you how to customize
your CSS file:
/*
* New theme colors for dashboard that override the defaults:
* dark blue: #355796 / rgb(53, 87, 150)
* light blue: #BAD3E1 / rgb(186, 211, 225)
*
* By Preston Lee <[email protected]>
*/
h1.brand {
background: #355796 repeat-x top left;
border-bottom: 2px solid #BAD3E1;
}
h1.brand a {
background: url(../img/my_cloud_logo_small.png) top left no-repeat;
}
#splash .login {
background: #355796 url(../img/my_cloud_logo_medium.png) no-repeat center 35px;
}
#splash .login .modal-header {
border-top: 1px solid #BAD3E1;
}
.btn-primary {
background-image: none !important;
background-color: #355796 !important;
border: none !important;
box-shadow: none;
}
.btn-primary:hover,
.btn-primary:active {
border: none;
box-shadow: none;
background-color: #BAD3E1 !important;
text-decoration: none;
}
6.
7.
8.
Restart Apache:
On Ubuntu:
# service apache2 restart
On openSUSE:
# service apache2 restart
9.
keystone_policy.json
The keystone_policy.json file defines additional access controls for the dashboard
that apply to the Identity service.
Note
The keystone_policy.json file must match the Identity service /etc/keystone/policy.json policy file.
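Because the dashboard's copy must stay in sync with the Identity service's own policy file, a quick consistency check is to compare the parsed JSON of the two files. A minimal sketch, where the inline strings stand in for the two files' contents:

```python
import json

# In practice these would come from
# /etc/openstack-dashboard/keystone_policy.json and
# /etc/keystone/policy.json; inline strings keep the sketch self-contained.
dashboard_copy = json.loads('{"admin_required": [["role:admin"], ["is_admin:1"]]}')
keystone_file = json.loads('{"admin_required": [["role:admin"], ["is_admin:1"]]}')

# Comparing parsed objects ignores whitespace and key ordering,
# which a byte-for-byte diff of the files would not.
in_sync = dashboard_copy == keystone_file
print(in_sync)  # -> True
```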
{
"admin_required": [
[
"role:admin"
],
[
"is_admin:1"
]
],
"service_role": [
[
"role:service"
]
],
"service_or_admin": [
[
"rule:admin_required"
],
[
"rule:service_role"
]
],
"owner": [
[
"user_id:%(user_id)s"
]
],
"admin_or_owner": [
[
"rule:admin_required"
],
[
"rule:owner"
]
],
"default": [
[
"rule:admin_required"
]
],
"identity:get_service": [
[
"rule:admin_required"
]
],
"identity:list_services": [
[
"rule:admin_required"
]
],
"identity:create_service": [
[
"rule:admin_required"
]
],
"identity:update_service": [
[
"rule:admin_required"
]
],
"identity:delete_service": [
[
"rule:admin_required"
]
],
"identity:get_endpoint": [
[
"rule:admin_required"
]
],
"identity:list_endpoints": [
[
"rule:admin_required"
]
],
"identity:create_endpoint": [
[
"rule:admin_required"
]
],
"identity:update_endpoint": [
[
"rule:admin_required"
]
],
"identity:delete_endpoint": [
[
"rule:admin_required"
]
],
"identity:get_domain": [
[
"rule:admin_required"
]
],
"identity:list_domains": [
[
"rule:admin_required"
]
],
"identity:create_domain": [
[
"rule:admin_required"
]
],
"identity:update_domain": [
[
"rule:admin_required"
]
],
"identity:delete_domain": [
[
"rule:admin_required"
]
],
"identity:get_project": [
[
"rule:admin_required"
]
],
"identity:list_projects": [
[
"rule:admin_required"
]
],
"identity:list_user_projects": [
[
"rule:admin_or_owner"
]
],
"identity:create_project": [
[
"rule:admin_required"
]
],
"identity:update_project": [
[
"rule:admin_required"
]
],
"identity:delete_project": [
[
"rule:admin_required"
]
],
"identity:get_user": [
[
"rule:admin_required"
]
],
"identity:list_users": [
[
"rule:admin_required"
]
],
"identity:create_user": [
[
"rule:admin_required"
]
],
"identity:update_user": [
[
"rule:admin_or_owner"
]
],
"identity:delete_user": [
[
"rule:admin_required"
]
],
"identity:get_group": [
[
"rule:admin_required"
]
],
"identity:list_groups": [
[
"rule:admin_required"
]
],
"identity:list_groups_for_user": [
[
"rule:admin_or_owner"
]
],
"identity:create_group": [
[
"rule:admin_required"
]
],
"identity:update_group": [
[
"rule:admin_required"
]
],
"identity:delete_group": [
[
"rule:admin_required"
]
],
"identity:list_users_in_group": [
[
"rule:admin_required"
]
],
"identity:remove_user_from_group": [
[
"rule:admin_required"
]
],
"identity:check_user_in_group": [
[
"rule:admin_required"
]
],
"identity:add_user_to_group": [
[
"rule:admin_required"
]
],
"identity:get_credential": [
[
"rule:admin_required"
]
],
"identity:list_credentials": [
[
"rule:admin_required"
]
],
"identity:create_credential": [
[
"rule:admin_required"
]
],
"identity:update_credential": [
[
"rule:admin_required"
]
],
"identity:delete_credential": [
[
"rule:admin_required"
]
],
"identity:get_role": [
[
"rule:admin_required"
]
],
"identity:list_roles": [
[
"rule:admin_required"
]
],
"identity:create_role": [
[
"rule:admin_required"
]
],
"identity:update_role": [
[
"rule:admin_required"
]
],
"identity:delete_role": [
[
"rule:admin_required"
]
],
"identity:check_grant": [
[
"rule:admin_required"
]
],
"identity:list_grants": [
[
"rule:admin_required"
]
],
"identity:create_grant": [
[
"rule:admin_required"
]
],
"identity:revoke_grant": [
[
"rule:admin_required"
]
],
"identity:list_role_assignments": [
[
"rule:admin_required"
]
],
"identity:get_policy": [
[
"rule:admin_required"
]
],
"identity:list_policies": [
[
"rule:admin_required"
]
],
"identity:create_policy": [
[
"rule:admin_required"
]
],
"identity:update_policy": [
[
"rule:admin_required"
]
],
"identity:delete_policy": [
[
"rule:admin_required"
]
],
"identity:check_token": [
[
"rule:admin_required"
]
],
"identity:validate_token": [
[
"rule:service_or_admin"
]
],
"identity:validate_token_head": [
[
"rule:service_or_admin"
]
],
"identity:revocation_list": [
[
"rule:service_or_admin"
]
],
"identity:revoke_token": [
[
"rule:admin_or_owner"
]
],
"identity:create_trust": [
[
"user_id:%(trust.trustor_user_id)s"
]
],
"identity:get_trust": [
[
"rule:admin_or_owner"
]
],
"identity:list_trusts": [
[
"@"
]
],
"identity:list_roles_for_trust": [
[
"@"
]
],
"identity:check_role_for_trust": [
[
"@"
]
],
"identity:get_role_for_trust": [
[
"@"
]
],
"identity:delete_trust": [
[
"@"
]
]
}
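Each entry above maps an Identity API action to a rule. To make the rule forms concrete — "@" (always allowed), "rule:&lt;name&gt;" (indirection to another rule), and "user_id:%(trust.trustor_user_id)s" (substitution against the request target) — the following is a deliberately simplified, stdlib-only sketch of how such checks resolve. It is not the actual Keystone policy engine, and "admin_required" is reduced here to a single role comparison:

```python
# Illustrative evaluator for three check forms seen in the listing above.
# Simplified sketch only, NOT the real Keystone policy engine.
def check(expr, rules, creds, target):
    if expr == "@":                       # "@" -> always allowed
        return True
    if expr.startswith("rule:"):          # indirect to another named rule
        return check(rules[expr[len("rule:"):]], rules, creds, target)
    key, _, value = expr.partition(":")   # "<key>:<value>" comparison
    if value.startswith("%(") and value.endswith(")s"):
        value = target.get(value[2:-2])   # substitute a target field
    return creds.get(key) == value

# Tiny rule set modeled on the file above (admin_required simplified).
rules = {
    "admin_required": "role:admin",
    "identity:delete_user": "rule:admin_required",
    "identity:create_trust": "user_id:%(trust.trustor_user_id)s",
}

admin = {"role": "admin"}
alice = {"role": "member", "user_id": "alice"}
check(rules["identity:delete_user"], rules, admin, {})   # True
check(rules["identity:create_trust"], rules, alice,
      {"trust.trustor_user_id": "alice"})                # True
```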
nova_policy.json
The nova_policy.json file defines additional access controls for the dashboard that apply to the Compute service.
Note
The nova_policy.json file must match the Compute /etc/nova/policy.json policy file.
{
"context_is_admin": "role:admin",
"admin_or_owner": "is_admin:True or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"cells_scheduler_filter:TargetCellFilter": "is_admin:True",
"compute:create": "",
"compute:create:attach_network": "",
"compute:create:attach_volume": "",
"compute:create:forced_host": "is_admin:True",
"compute:get_all": "",
"compute:get_all_tenants": "",
"compute:unlock_override": "rule:admin_api",
"compute:shelve": "",
"compute:shelve_offload": "",
"compute:unshelve": "",
"admin_api": "is_admin:True",
"compute_extension:accounts": "rule:admin_api",
"compute_extension:admin_actions": "rule:admin_api",
"compute_extension:admin_actions:pause": "rule:admin_or_owner",
"compute_extension:admin_actions:unpause": "rule:admin_or_owner",
"compute_extension:admin_actions:suspend": "rule:admin_or_owner",
"compute_extension:admin_actions:resume": "rule:admin_or_owner",
"compute_extension:admin_actions:lock": "rule:admin_or_owner",
"compute_extension:admin_actions:unlock": "rule:admin_or_owner",
"compute_extension:admin_actions:resetNetwork": "rule:admin_api",
"compute_extension:admin_actions:injectNetworkInfo": "rule:admin_api",
"compute_extension:admin_actions:createBackup": "rule:admin_or_owner",
"compute_extension:admin_actions:migrateLive": "rule:admin_api",
"compute_extension:admin_actions:resetState": "rule:admin_api",
"compute_extension:admin_actions:migrate": "rule:admin_api",
"compute_extension:v3:os-admin-actions": "rule:admin_api",
"compute_extension:v3:os-admin-actions:pause": "rule:admin_or_owner",
"compute_extension:v3:os-admin-actions:unpause": "rule:admin_or_owner",
"compute_extension:v3:os-admin-actions:suspend": "rule:admin_or_owner",
"compute_extension:v3:os-admin-actions:resume": "rule:admin_or_owner",
"compute_extension:v3:os-admin-actions:lock": "rule:admin_or_owner",
"compute_extension:v3:os-admin-actions:unlock": "rule:admin_or_owner",
"compute_extension:v3:os-admin-actions:reset_network": "rule:admin_api",
"compute_extension:v3:os-admin-actions:inject_network_info": "rule:admin_api",
"compute_extension:v3:os-admin-actions:create_backup": "rule:admin_or_owner",
"compute_extension:v3:os-admin-actions:migrate_live": "rule:admin_api",
"compute_extension:v3:os-admin-actions:reset_state": "rule:admin_api",
"compute_extension:v3:os-admin-actions:migrate": "rule:admin_api",
"compute_extension:v3:os-admin-password": "",
"compute_extension:aggregates": "rule:admin_api",
"compute_extension:v3:os-aggregates": "rule:admin_api",
"compute_extension:agents": "rule:admin_api",
"compute_extension:v3:os-agents": "rule:admin_api",
"compute_extension:attach_interfaces": "",
"compute_extension:v3:os-attach-interfaces": "",
"compute_extension:baremetal_nodes": "rule:admin_api",
"compute_extension:v3:os-baremetal-nodes": "rule:admin_api",
"compute_extension:cells": "rule:admin_api",
"compute_extension:v3:os-cells": "rule:admin_api",
"compute_extension:certificates": "",
"compute_extension:v3:os-certificates": "",
"compute_extension:cloudpipe": "rule:admin_api",
"compute_extension:cloudpipe_update": "rule:admin_api",
"compute_extension:console_output": "",
"compute_extension:v3:consoles:discoverable": "",
"compute_extension:v3:os-console-output": "",
"compute_extension:consoles": "",
"compute_extension:v3:os-remote-consoles": "",
"compute_extension:coverage_ext": "rule:admin_api",
"compute_extension:v3:os-coverage": "rule:admin_api",
"compute_extension:createserverext": "",
"compute_extension:deferred_delete": "",
"compute_extension:v3:os-deferred-delete": "",
"compute_extension:disk_config": "",
"compute_extension:evacuate": "rule:admin_api",
"compute_extension:v3:os-evacuate": "rule:admin_api",
"compute_extension:extended_server_attributes": "rule:admin_api",
"compute_extension:v3:os-extended-server-attributes": "rule:admin_api",
"compute_extension:extended_status": "",
"compute_extension:v3:os-extended-status": "",
"compute_extension:extended_availability_zone": "",
"compute_extension:v3:os-extended-availability-zone": "",
"compute_extension:extended_ips": "",
"compute_extension:extended_ips_mac": "",
"compute_extension:extended_vif_net": "",
"compute_extension:v3:extension_info:discoverable": "",
"compute_extension:extended_volumes": "",
"compute_extension:v3:os-extended-volumes": "",
"compute_extension:v3:os-extended-volumes:attach": "",
"compute_extension:v3:os-extended-volumes:detach": "",
"compute_extension:fixed_ips": "rule:admin_api",
"compute_extension:v3:os-fixed-ips:discoverable": "",
"compute_extension:v3:os-fixed-ips": "rule:admin_api",
"compute_extension:flavor_access": "",
"compute_extension:v3:os-flavor-access": "",
"compute_extension:flavor_disabled": "",
"compute_extension:v3:os-flavor-disabled": "",
"compute_extension:flavor_rxtx": "",
"compute_extension:v3:os-flavor-rxtx": "",
"compute_extension:flavor_swap": "",
"compute_extension:flavorextradata": "",
"compute_extension:flavorextraspecs:index": "",
"compute_extension:flavorextraspecs:show": "",
"compute_extension:flavorextraspecs:create": "rule:admin_api",
"compute_extension:flavorextraspecs:update": "rule:admin_api",
"compute_extension:flavorextraspecs:delete": "rule:admin_api",
"compute_extension:v3:flavor-extra-specs:index": "",
"compute_extension:v3:flavor-extra-specs:show": "",
"compute_extension:v3:flavor-extra-specs:create": "rule:admin_api",
"compute_extension:v3:flavor-extra-specs:update": "rule:admin_api",
"compute_extension:v3:flavor-extra-specs:delete": "rule:admin_api",
"compute_extension:flavormanage": "rule:admin_api",
"compute_extension:floating_ip_dns": "",
"compute_extension:floating_ip_pools": "",
"compute_extension:floating_ips": "",
"compute_extension:floating_ips_bulk": "rule:admin_api",
"compute_extension:fping": "",
"compute_extension:fping:all_tenants": "rule:admin_api",
"compute_extension:hide_server_addresses": "is_admin:False",
"compute_extension:v3:os-hide-server-addresses": "is_admin:False",
"compute_extension:hosts": "rule:admin_api",
"compute_extension:v3:os-hosts": "rule:admin_api",
"compute_extension:hypervisors": "rule:admin_api",
"compute_extension:v3:os-hypervisors": "rule:admin_api",
"compute_extension:image_size": "",
"compute_extension:v3:os-image-metadata": "",
"compute_extension:v3:os-images": "",
"compute_extension:instance_actions": "",
"compute_extension:v3:os-instance-actions": "",
"compute_extension:instance_actions:events": "rule:admin_api",
"compute_extension:v3:os-instance-actions:events": "rule:admin_api",
"compute_extension:instance_usage_audit_log": "rule:admin_api",
"compute_extension:v3:os-instance-usage-audit-log": "rule:admin_api",
"compute_extension:v3:ips:discoverable": "",
"compute_extension:keypairs": "",
"compute_extension:keypairs:index": "",
"compute_extension:keypairs:show": "",
"compute_extension:keypairs:create": "",
"compute_extension:keypairs:delete": "",
"compute_extension:v3:os-keypairs:discoverable": "",
"compute_extension:v3:os-keypairs": "",
"compute_extension:v3:os-keypairs:index": "",
"compute_extension:v3:os-keypairs:show": "",
"compute_extension:v3:os-keypairs:create": "",
"compute_extension:v3:os-keypairs:delete": "",
"compute_extension:multinic": "",
"compute_extension:v3:os-multinic": "",
"compute_extension:networks": "rule:admin_api",
"compute_extension:networks:view": "",
"compute_extension:networks_associate": "rule:admin_api",
"compute_extension:quotas:show": "",
"compute_extension:quotas:update": "rule:admin_api",
"compute_extension:quotas:delete": "rule:admin_api",
"compute_extension:v3:os-quota-sets:show": "",
"compute_extension:v3:os-quota-sets:update": "rule:admin_api",
"compute_extension:v3:os-quota-sets:delete": "rule:admin_api",
"compute_extension:quota_classes": "",
"compute_extension:v3:os-quota-class-sets": "",
"compute_extension:rescue": "",
"compute_extension:v3:os-rescue": "",
"compute_extension:security_group_default_rules": "rule:admin_api",
"compute_extension:security_groups": "",
"compute_extension:v3:os-security-groups": "",
"compute_extension:server_diagnostics": "rule:admin_api",
"compute_extension:v3:os-server-diagnostics": "rule:admin_api",
"compute_extension:server_password": "",
"compute_extension:v3:os-server-password": "",
"compute_extension:server_usage": "",
"compute_extension:v3:os-server-usage": "",
"compute_extension:services": "rule:admin_api",
"compute_extension:v3:os-services": "rule:admin_api",
"compute_extension:v3:servers:discoverable": "",
"compute_extension:shelve": "",
"compute_extension:shelveOffload": "rule:admin_api",
"compute_extension:v3:os-shelve:shelve": "",
"compute_extension:v3:os-shelve:shelve_offload": "rule:admin_api",
"compute_extension:simple_tenant_usage:show": "rule:admin_or_owner",
"compute_extension:v3:os-simple-tenant-usage:show": "rule:admin_or_owner",
"compute_extension:simple_tenant_usage:list": "rule:admin_api",
"compute_extension:v3:os-simple-tenant-usage:list": "rule:admin_api",
"compute_extension:unshelve": "",
"compute_extension:v3:os-shelve:unshelve": "",
"compute_extension:users": "rule:admin_api",
"compute_extension:virtual_interfaces": "",
"compute_extension:virtual_storage_arrays": "",
"compute_extension:volumes": "",
"compute_extension:volume_attachments:index": "",
"compute_extension:volume_attachments:show": "",
"compute_extension:volume_attachments:create": "",
"compute_extension:volume_attachments:update": "",
"compute_extension:volume_attachments:delete": "",
"compute_extension:volumetypes": "",
"compute_extension:availability_zone:list": "",
"compute_extension:v3:os-availability-zone:list": "",
"compute_extension:availability_zone:detail": "rule:admin_api",
"compute_extension:v3:os-availability-zone:detail": "rule:admin_api",
"compute_extension:used_limits_for_admin": "rule:admin_api",
"compute_extension:v3:os-used-limits": "",
"compute_extension:v3:os-used-limits:tenant": "rule:admin_api",
"compute_extension:migrations:index": "rule:admin_api",
"compute_extension:v3:os-migrations:index": "rule:admin_api",
"volume:create": "",
"volume:get_all": "",
"volume:get_volume_metadata": "",
"volume:get_snapshot": "",
"volume:get_all_snapshots": "",
"volume_extension:types_manage": "rule:admin_api",
"volume_extension:types_extra_specs": "rule:admin_api",
"volume_extension:volume_admin_actions:reset_status": "rule:admin_api",
"volume_extension:snapshot_admin_actions:reset_status": "rule:admin_api",
"volume_extension:volume_admin_actions:force_delete": "rule:admin_api",
"network:get_all": "",
"network:get": "",
"network:create": "",
"network:delete": "",
"network:associate": "",
"network:disassociate": "",
"network:get_vifs_by_instance": "",
"network:allocate_for_instance": "",
"network:deallocate_for_instance": "",
"network:validate_networks": "",
"network:get_instance_uuids_by_ip_filter": "",
"network:get_instance_id_by_floating_address": "",
"network:setup_networks_on_host": "",
"network:get_backdoor_port": "",
"network:get_floating_ip": "",
"network:get_floating_ip_pools": "",
"network:get_floating_ip_by_address": "",
"network:get_floating_ips_by_project": "",
"network:get_floating_ips_by_fixed_address": "",
"network:allocate_floating_ip": "",
"network:deallocate_floating_ip": "",
"network:associate_floating_ip": "",
"network:disassociate_floating_ip": "",
"network:release_floating_ip": "",
"network:migrate_instance_start": "",
"network:migrate_instance_finish": "",
"network:get_fixed_ip": "",
"network:get_fixed_ip_by_address": "",
"network:add_fixed_ip_to_instance": "",
"network:remove_fixed_ip_from_instance": "",
"network:add_network_to_project": "",
"network:get_instance_nw_info": "",
"network:get_dns_domains": "",
"network:add_dns_entry": "",
"network:modify_dns_entry": "",
"network:delete_dns_entry": "",
"network:get_dns_entries_by_address": "",
"network:get_dns_entries_by_name": "",
"network:create_private_dns_domain": "",
"network:create_public_dns_domain": "",
"network:delete_dns_domain": ""
}
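Since the dashboard copy and /etc/nova/policy.json must stay identical, a mechanical comparison can catch drift. A minimal stdlib-only sketch; the file paths in the usage comment are illustrative, not fixed locations:

```python
import json

def policies_match(dashboard_copy, nova_copy):
    """Return True when the two policy files define identical rules."""
    with open(dashboard_copy) as f_a, open(nova_copy) as f_b:
        return json.load(f_a) == json.load(f_b)

# e.g. policies_match("/path/to/dashboard/nova_policy.json",
#                     "/etc/nova/policy.json")
```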
Description
access_log
Logs all attempts to access the web server.
error_log
Logs all unsuccessful attempts to access the web server, along with the reason that each attempt failed.
4. Database Service
Table of Contents
Configure the database ............................................................................................... 303
Configure the RPC messaging system ........................................................................... 307
The Database Service provides scalable and reliable cloud Database-as-a-Service functionality for both relational and non-relational database engines.
The following tables provide a comprehensive list of the Database Service configuration options.
Description
[DEFAULT]
admin_roles = admin
api_paste_config = api-paste.ini
bind_host = 0.0.0.0
bind_port = 8779
black_list_regex = None
db_api_implementation = trove.db.sqlalchemy.api
hostname_require_valid_ip = True
http_delete_rate = 200
http_get_rate = 200
http_mgmt_post_rate = 200
http_post_rate = 200
http_put_rate = 200
instances_page_size = 20
max_header_line = 16384
(IntOpt) Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
os_region_name = None
region = LOCAL_DEV
tcp_keepidle = 600
trove_api_workers = None
(IntOpt) Number of workers for the API service. The default will be the number of CPUs available.
trove_auth_url = http://0.0.0.0:5000/v2.0
trove_conductor_workers = None
trove_security_group_name_prefix = SecGroup
trove_security_group_rule_cidr = 0.0.0.0/0
trove_security_groups_support = True
users_page_size = 20
Description
[keystone_authtoken]
admin_password = None
admin_tenant_name = admin
admin_token = None
admin_user = None
auth_admin_prefix =
auth_host = 127.0.0.1
auth_port = 35357
(IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https
auth_uri = None
auth_version = None
cache = None
cafile = None
certfile = None
check_revocations_for_cached = False
delay_auth_decision = False
enforce_token_bind = permissive
(StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. Finally the name of a binding method that must be present in tokens.
hash_algorithms = md5
(ListOpt) Hash algorithms to use for hashing PKI tokens. This may be a single algorithm or multiple. The algorithms are those supported by Python standard hashlib.new(). The hashes will be tried in the order given, so put the preferred one first for performance. The result of the first hash will be stored in the cache. This will typically be set to multiple values only while migrating from a less secure algorithm to a more secure one. Once all the old tokens are expired, this option should be set to a single value for better performance.
http_connect_timeout = None
http_request_max_retries = 3
identity_uri = None
include_service_catalog = True
(BoolOpt) (optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False
keyfile = None
memcache_secret_key = None
memcache_security_strategy = None
(StrOpt) (optional) if defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
revocation_cache_time = 10
signing_dir = None
token_cache_time = 300
(IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
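In trove.conf, the options above are set as plain key = value pairs under their section headers. A hypothetical fragment for illustration — the host addresses, service credentials, and tenant names here are examples, not defaults:

```ini
[DEFAULT]
bind_host = 0.0.0.0
bind_port = 8779
trove_auth_url = http://192.0.2.10:5000/v2.0

[keystone_authtoken]
auth_host = 192.0.2.10
auth_port = 35357
auth_protocol = https
admin_user = trove
admin_tenant_name = service
admin_password = TROVE_PASS
```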
Description
[DEFAULT]
backup_aes_cbc_key = default_aes_cbc_key
backup_chunk_size = 65536
(IntOpt) Chunk size (in bytes) to stream to the Swift container. This should be in multiples of 128 bytes, since this is the size of an md5 digest block allowing the process to update the file checksum during streaming. See: http://stackoverflow.com/questions/1131220/
backup_runner = trove.guestagent.backup.backup_types.InnoBackupEx
backup_runner_options = {}
backup_segment_max_size = 2147483648
backup_swift_container = database_backups
backup_use_gzip_compression = True
backup_use_openssl_encryption = True
backup_use_snet = False
backups_page_size = 20
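The streaming-checksum rationale behind backup_chunk_size — hashing chunk by chunk, as a backup streamed to Swift would, yields the same MD5 as hashing the data in one pass — can be illustrated with Python's hashlib. This sketch is not Trove's backup runner:

```python
import hashlib

def streamed_md5(data, chunk_size=65536):
    """Update an MD5 checksum incrementally, one chunk at a time."""
    digest = hashlib.md5()
    for offset in range(0, len(data), chunk_size):
        digest.update(data[offset:offset + chunk_size])
    return digest.hexdigest()

payload = b"x" * 200000
# Incremental hashing over chunks equals hashing the payload in one pass.
assert streamed_md5(payload) == hashlib.md5(payload).hexdigest()
```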
Description
[ssl]
ca_file = None
cert_file = None
key_file = None
(StrOpt) Private key file to use when starting the server securely
Description
[DEFAULT]
remote_cinder_client = trove.common.remote.cinder_client
remote_dns_client = trove.common.remote.dns_client
remote_neutron_client = trove.common.remote.neutron_client
remote_nova_client = trove.common.remote.nova_client
remote_swift_client = trove.common.remote.swift_client
Description
[DEFAULT]
cluster_delete_time_out = 180
cluster_usage_timeout = 675
clusters_page_size = 20
Description
[DEFAULT]
configurations_page_size = 20
databases_page_size = 20
default_datastore = None
default_neutron_networks =
default_notification_level = INFO
default_password_length = 36
expected_filetype_suffixes = json
host = 0.0.0.0
lock_path = None
memcached_servers = None
pybasedir = /usr/lib/python/site-packages/trove/trove
pydev_path = None
taskmanager_queue = taskmanager
template_path = /etc/trove/templates/
usage_timeout = 600
[keystone_authtoken]
memcached_servers = None
Description
[DEFAULT]
ip_regex = None
nova_compute_service_type = compute
nova_compute_url = None
root_grant = ALL
root_grant_option = True
Description
[DEFAULT]
backdoor_port = None
backlog = 4096
disable_process_locking = False
pydev_debug = disabled
(StrOpt) Enable or disable pydev remote debugging. If value is 'auto', tries to connect to remote debugger server, but in case of error continues running with debugging disabled.
pydev_debug_host = None
pydev_debug_port = None
Description
[DEFAULT]
dns_account_id =
dns_auth_url =
dns_domain_id =
dns_domain_name =
dns_driver = trove.dns.driver.DnsDriver
dns_endpoint_url = 0.0.0.0
dns_hostname =
dns_instance_entry_factory = trove.dns.driver.DnsInstanceEntryFactory
dns_management_base_url =
dns_passkey =
dns_region =
dns_service_type =
dns_time_out = 120
(IntOpt) Maximum time (in seconds) to wait for a DNS entry add.
dns_ttl = 300
dns_username =
trove_dns_support = False
(BoolOpt) Whether Trove should add DNS entries on create (using Designate DNSaaS).
Description
[DEFAULT]
agent_call_high_timeout = 60
agent_call_low_timeout = 5
agent_heartbeat_time = 10
guest_config = $pybasedir/etc/trove/troveguestagent.conf.sample
guest_id = None
mount_options = defaults,noatime
storage_namespace = trove.guestagent.strategies.storage.swift
storage_strategy = SwiftStorage
usage_sleep_time = 5
Description
[DEFAULT]
heat_service_type = orchestration
heat_time_out = 60
(IntOpt) Maximum time (in seconds) to wait for a Heat request to complete.
heat_url = None
Description
[DEFAULT]
debug = False
(BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
fatal_deprecations = False
format_options = -m 5
log_config_append = None
log_dir = None
(StrOpt) (Optional) The base directory used for relative --log-file paths.
log_file = None
(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format = None
(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and logging_default_format_string instead.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
(StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
network_label_regex = ^private$
publish_errors = False
syslog_log_facility = LOG_USER
use_stderr = True
use_syslog = False
use_syslog_rfc_format = False
verbose = False
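The format strings above are standard Python logging.Formatter patterns. A stdlib-only illustration (not oslo logging itself) of the context-free default; the %(instance)s placeholder is dropped because it is an oslo-injected attribute that a plain logging.LogRecord does not carry:

```python
import logging

# logging_default_format_string from the table, minus %(instance)s.
fmt = "%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(message)s"
formatter = logging.Formatter(fmt, datefmt="%Y-%m-%d %H:%M:%S")

# Build a record by hand to show what the pattern produces.
record = logging.LogRecord("trove.taskmanager", logging.WARNING, "example.py",
                           1, "instance resize timed out", (), None)
line = formatter.format(record)
# line ends with: "WARNING trove.taskmanager [-] instance resize timed out"
```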
Description
[DEFAULT]
network_driver = trove.network.nova.NovaNetwork
neutron_service_type = network
neutron_url = None
Description
[DEFAULT]
nova_proxy_admin_pass =
nova_proxy_admin_tenant_name =
nova_proxy_admin_user =
Description
[DEFAULT]
max_accepted_volume_size = 5
max_backups_per_user = 50
max_instances_per_user = 5
max_volumes_per_user = 20
(IntOpt) Default maximum volume capacity (in GB) spanning across all Trove volumes per tenant.
quota_driver = trove.quota.quota.DbQuotaDriver
Description
[matchmaker_redis]
host = 127.0.0.1
password = None
port = 6379
[matchmaker_ring]
ringfile = /etc/oslo/matchmaker_ring.json
Description
[DEFAULT]
fake_rabbit = False
Description
[DEFAULT]
swift_service_type = object-store
swift_url = None
Description
[DEFAULT]
cloudinit_location = /etc/trove/cloudinit
datastore_manager = None
datastore_registry_ext = {}
(DictOpt) Extension for default datastore managers. Allows the use of custom managers for each of the datastores supported by Trove.
exists_notification_ticks = 360
exists_notification_transformer = None
reboot_time_out = 120
resize_time_out = 600
revert_time_out = 600
server_delete_time_out = 60
state_change_wait_time = 180
update_status_on_fail = True
(BoolOpt) Set the service and instance task statuses to ERROR when an instance fails to become active within the configured usage_timeout.
usage_sleep_time = 5
use_heat = False
use_nova_server_config_drive = False
use_nova_server_volume = False
verify_swift_checksum_on_restore = True
Description
[DEFAULT]
block_device_mapping = vdb
cinder_service_type = volumev2
cinder_url = None
cinder_volume_type = None
device_path = /dev/vdb
trove_volume_support = True
volume_format_timeout = 120
volume_fstype = ext3
volume_time_out = 60
Description
[DEFAULT]
sql_connection = sqlite:///trove_test.sqlite
sql_idle_timeout = 3600
sql_query_log = False
sql_query_logging = False
Description
[cassandra]
backup_incremental_strategy = {}
(DictOpt) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = None
backup_strategy = None
device_path = /dev/vdb
mount_point = /var/lib/cassandra
replication_strategy = None
restore_namespace = None
udp_ports =
volume_support = True
Description
[couchbase]
backup_incremental_strategy = {}
(DictOpt) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.couchbase_impl
backup_strategy = CbBackup
device_path = /dev/vdb
mount_point = /var/lib/couchbase
replication_strategy = None
restore_namespace = trove.guestagent.strategies.restore.couchbase_impl
root_on_create = True
udp_ports =
volume_support = True
Description
[mongodb]
api_strategy = trove.common.strategies.mongodb.api.MongoDbAPIStrategy
(StrOpt) Class that implements datastore-specific API logic.
backup_incremental_strategy = {}
(DictOpt) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = None
backup_strategy = None
cluster_support = True
device_path = /dev/vdb
guestagent_strategy = trove.common.strategies.mongodb.guestagent.MongoDbGuestAgentStrategy
(StrOpt) Class that implements datastore-specific Guest Agent API logic.
mount_point = /var/lib/mongodb
num_config_servers_per_cluster = 3
num_query_routers_per_cluster = 1
replication_strategy = None
restore_namespace = None
taskmanager_strategy = trove.common.strategies.mongodb.taskmanager.MongoDbTaskManagerStrategy
(StrOpt) Class that implements datastore-specific task manager logic.
tcp_ports = 2500, 27017
udp_ports =
volume_support = True
Description
[mysql]
backup_incremental_strategy = {'InnoBackupEx': 'InnoBackupExIncremental'}
(DictOpt) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.mysql_impl
backup_strategy = InnoBackupEx
device_path = /dev/vdb
mount_point = /var/lib/mysql
replication_namespace = trove.guestagent.strategies.replication.mysql_binlog
replication_password = NETOU7897NNLOU
replication_strategy = MysqlBinlogReplication
replication_user = slave_user
restore_namespace = trove.guestagent.strategies.restore.mysql_impl
root_on_create = False
tcp_ports = 3306
udp_ports =
usage_timeout = 400
volume_support = True
Description
[percona]
backup_incremental_strategy = {'InnoBackupEx': 'InnoBackupExIncremental'}
(DictOpt) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.mysql_impl
backup_strategy = InnoBackupEx
device_path = /dev/vdb
mount_point = /var/lib/mysql
replication_namespace = trove.guestagent.strategies.replication.mysql_binlog
replication_password = NETOU7897NNLOU
replication_strategy = MysqlBinlogReplication
replication_user = slave_user
restore_namespace = trove.guestagent.strategies.restore.mysql_impl
root_on_create = False
tcp_ports = 3306
udp_ports =
usage_timeout = 450
volume_support = True
Description
[postgresql]
backup_incremental_strategy = {}
(DictOpt) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = trove.guestagent.strategies.backup.postgresql_impl
backup_strategy = PgDump
device_path = /dev/vdb
ignore_dbs = postgres
mount_point = /var/lib/postgresql
restore_namespace = trove.guestagent.strategies.restore.postgresql_impl
root_on_create = False
tcp_ports = 5432
udp_ports =
volume_support = True
Description
[redis]
backup_incremental_strategy = {}
(DictOpt) Incremental Backup Runner based on the default strategy. For strategies that do not implement an incremental backup, the runner will use the default full backup.
backup_namespace = None
backup_strategy = None
device_path = None
mount_point = /var/lib/redis
replication_strategy = None
restore_namespace = None
tcp_ports = 6379
udp_ports =
volume_support = False
Configure RabbitMQ
Use these options to configure the RabbitMQ messaging system:
Description
[DEFAULT]
kombu_ssl_ca_certs =
kombu_ssl_certfile =
kombu_ssl_keyfile =
kombu_ssl_version =
rabbit_ha_queues = False
rabbit_host = localhost
rabbit_hosts = $rabbit_host:$rabbit_port
rabbit_max_retries = 0
(IntOpt) Maximum number of retries when connecting to RabbitMQ (the default of 0 implies an infinite retry count).
rabbit_password = guest
rabbit_port = 5672
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_virtual_host = /
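As with the other tables, these options map directly to trove.conf entries. A hypothetical fragment enabling mirrored (HA) queues against a two-node cluster; the host names and credentials are examples only:

```ini
[DEFAULT]
rabbit_hosts = rabbit1:5672,rabbit2:5672
rabbit_ha_queues = True
rabbit_userid = trove
rabbit_password = RABBIT_PASS
rabbit_virtual_host = /
```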
Configure Qpid
Use these options to configure the Qpid messaging system:
Description
[DEFAULT]
qpid_heartbeat = 60
qpid_hostname = localhost
qpid_hosts = $qpid_hostname:$qpid_port
qpid_password =
qpid_port = 5672
qpid_protocol = tcp
qpid_sasl_mechanisms =
qpid_tcp_nodelay = True
qpid_username =
Configure ZeroMq
Use these options to configure the ZeroMq messaging system:
Description
[DEFAULT]
rpc_zmq_bind_address = *
rpc_zmq_contexts = 1
rpc_zmq_host = localhost
rpc_zmq_ipc_dir = /var/run/openstack
rpc_zmq_matchmaker = trove.openstack.common.rpc.matchmaker.MatchMakerLocalhost
(StrOpt) MatchMaker driver.
rpc_zmq_port = 9501
rpc_zmq_topic_backlog = None
Configure messaging
Use these common options to configure the RabbitMQ, Qpid, and ZeroMq messaging
drivers:
Description
[DEFAULT]
amqp_auto_delete = False
amqp_durable_queues = False
conductor_manager = trove.conductor.manager.Manager
(StrOpt) Qualified class name to use for conductor manager.
conductor_queue = trove-conductor
control_exchange = openstack
default_publisher_id = $host
notification_driver = []
notification_service_id = {'postgresql': 'ac277e0d-4f21-40aa-b347-1ea31e571720', 'couchbase': 'fa62fe68-74d9-4779-a24e-36f19602c415', 'mongodb': 'c8c907af-7375-456f-b929-b637ff9209ee', 'redis': 'b216ffc5-1947-456c-a4cf-70f94c05f7d0', 'mysql': '2f3ff068-2bfb-4f70-9a9d-a6bb65bc084b', 'cassandra': '459a230d-4e97-4344-9067-2a54a310b0ed'}
notification_topics = notifications
Description
[DEFAULT]
allowed_rpc_exception_modules = nova.exception, cinder.exception, exceptions
matchmaker_heartbeat_freq = 300
matchmaker_heartbeat_ttl = 600
num_tries = 3
report_interval = 10
rpc_backend = trove.openstack.common.rpc.impl_kombu
rpc_cast_timeout = 30
rpc_conn_pool_size = 30
rpc_response_timeout = 60
rpc_thread_pool_size = 64
[rpc_notifier2]
topics = notifications
[secure_messages]
enabled = True
encrypt = False
enforced = False
kds_endpoint = None
secret_key = None
secret_keys_file = None
(StrOpt) Path to the file containing the keys, takes precedence over secret_key
Description
[DEFAULT]
amqp_auto_delete = False
amqp_durable_queues = False
control_exchange = openstack
notification_driver = []
notification_level = INFO
notification_publisher_id = None
notification_topics = notifications
transport_url = None
Description
[keystone_authtoken]
admin_password = None
admin_tenant_name = admin
admin_token = None
admin_user = None
auth_admin_prefix =
auth_host = 127.0.0.1
auth_port = 35357
(IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https
auth_uri = None
auth_version = None
cache = None
cafile = None
certfile = None
check_revocations_for_cached = False
delay_auth_decision = False
enforce_token_bind = permissive
(StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding; "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not; "strict", like "permissive" but the token is rejected if the bind type is unknown; "required" to require that tokens carry some form of binding; or the name of a specific binding method that must be present in tokens.
hash_algorithms = md5
http_connect_timeout = None
http_request_max_retries = 3
identity_uri = None
include_service_catalog = True
(BoolOpt) (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for the service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False
keyfile = None
memcache_secret_key = None
memcache_security_strategy = None
(StrOpt) (optional) if defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the
cache. If the value is not one of these options or empty,
auth_token will raise an exception on initialization.
revocation_cache_time = 10
signing_dir = None
token_cache_time = 300
(IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens
for a configurable duration (in seconds). Set to -1 to disable caching completely.
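A typical [keystone_authtoken] section, which tells a service how to validate tokens against the Identity service, might look like the following (the controller host name and credentials are placeholders for your deployment):

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = USER_NAME
admin_password = SERVICE_PASS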
Description
[DEFAULT]
cluster_remote_threshold = 70
compute_topology_file = etc/sahara/compute.topology
(StrOpt) File with nova compute topology. It should contain mapping between nova computes and racks. File format: compute1 /rack1 compute2 /rack2 compute3 /rack2
detach_volume_timeout = 300
enable_data_locality = False
enable_hypervisor_awareness = True
enable_notifications = False
global_remote_threshold = 100
host =
infrastructure_engine = direct
(StrOpt) An engine which will be used to provision infrastructure for Hadoop cluster.
job_binary_max_KB = 5120
job_workflow_postfix =
lock_path = None
memcached_servers = None
min_transient_cluster_active_time = 30
(IntOpt) Minimal "lifetime" in seconds for a transient cluster. Cluster is guaranteed to be "alive" within this time period.
node_domain = novalocal
os_region_name = None
periodic_enable = True
periodic_fuzzy_delay = 60
(IntOpt) Range in seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0).
periodic_interval_max = 60
port = 8386
remote = ssh
run_external_periodic_tasks = True
swift_topology_file = etc/sahara/swift.topology
(StrOpt) File with Swift topology. It should contain mapping between Swift nodes and racks. File format: node1 /
rack1 node2 /rack2 node3 /rack2
use_floating_ips = True
use_identity_api_v3 = True
use_namespaces = False
use_neutron = False
[conductor]
use_local = True
[keystone_authtoken]
memcached_servers = None
Description
[DEFAULT]
db_driver = sahara.db
[database]
backend = sqlalchemy
connection = None
connection_debug = 0
connection_trace = False
db_inc_retry_interval = True
db_max_retries = 20
(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10
(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
db_retry_interval = 1
idle_timeout = 3600
max_overflow = None
max_pool_size = None
max_retries = 10
min_pool_size = 1
mysql_sql_mode = TRADITIONAL
pool_timeout = None
retry_interval = 10
slave_connection = None
(StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite
sqlite_synchronous = True
use_db_reconnect = False
Description
[DEFAULT]
proxy_user_domain_name = None
proxy_user_role_names = Member
use_domain_for_proxy_users = False
Description
[DEFAULT]
disable_process_locking = False
Description
[DEFAULT]
debug = False
(BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
default_log_levels = amqplib=WARN, qpid.messaging=INFO, stevedore=INFO, eventlet.wsgi.server=WARN, sqlalchemy=WARN, boto=WARN, suds=INFO, keystone=INFO, paramiko=WARN, requests=WARN, iso8601=WARN
fatal_deprecations = False
log_config_append = None
(StrOpt) The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation.
log_dir = None
(StrOpt) (Optional) The base directory used for relative -log-file paths.
log_exchange = False
log_file = None
(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format = None
(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available
logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and
logging_default_format_string instead.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
(StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
publish_errors = False
syslog_log_facility = LOG_USER
use_stderr = True
use_syslog = False
use_syslog_rfc_format = False
verbose = False
Description
[DEFAULT]
qpid_heartbeat = 60
qpid_hostname = localhost
qpid_hosts = $qpid_hostname:$qpid_port
qpid_password =
qpid_port = 5672
qpid_protocol = tcp
qpid_receiver_capacity = 1
qpid_sasl_mechanisms =
qpid_tcp_nodelay = True
qpid_topology_version = 1
qpid_username =
Description
[DEFAULT]
kombu_reconnect_delay = 1.0
(FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_ssl_ca_certs =
kombu_ssl_certfile =
kombu_ssl_keyfile =
kombu_ssl_version =
rabbit_ha_queues = False
rabbit_host = localhost
rabbit_hosts = $rabbit_host:$rabbit_port
rabbit_login_method = AMQPLAIN
rabbit_max_retries = 0
(IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest
rabbit_port = 5672
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_virtual_host = /
Description
[matchmaker_redis]
host = 127.0.0.1
password = None
port = 6379
[matchmaker_ring]
ringfile = /etc/oslo/matchmaker_ring.json
Description
[DEFAULT]
matchmaker_heartbeat_freq = 300
matchmaker_heartbeat_ttl = 600
rpc_backend = rabbit
rpc_cast_timeout = 30
rpc_conn_pool_size = 30
rpc_response_timeout = 60
rpc_thread_pool_size = 64
Description
[DEFAULT]
fake_rabbit = False
Description
[DEFAULT]
rpc_zmq_bind_address = *
rpc_zmq_contexts = 1
rpc_zmq_host = localhost
rpc_zmq_ipc_dir = /var/run/openstack
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
(StrOpt) MatchMaker driver.
rpc_zmq_port = 9501
rpc_zmq_topic_backlog = None
6. Identity service
Table of Contents
Caching layer ............................................................................................................... 319
Identity service configuration file ................................................................................. 321
Identity service sample configuration files .................................................................... 338
New, updated and deprecated options in Juno for OpenStack Identity ......................... 367
This chapter details the OpenStack Identity service configuration options. For installation prerequisites and step-by-step walkthroughs, see the OpenStack Installation Guide for your distribution (docs.openstack.org) and the Cloud Administrator Guide.
Caching layer
Identity supports a caching layer that sits above the configurable subsystems, such as the token or assignment subsystems. Most of the caching configuration options are set in the [cache] section; however, each section that can be cached usually has its own caching option that toggles caching for that specific section. By default, caching is globally disabled. Options are as follows:
Description
[cache]
backend = keystone.common.cache.noop
backend_argument = []
config_prefix = cache.keystone
debug_cache_backend = False
enabled = False
expiration_time = 600
memcache_dead_retry = 300
memcache_pool_connection_get_timeout = 10
memcache_pool_maxsize = 10
memcache_pool_unused_timeout = 60
(IntOpt) Number of seconds a connection to memcached is held unused in the pool before it is closed.
(keystone.cache.memcache_pool backend only)
memcache_servers = localhost:11211
memcache_socket_timeout = 3
proxies =
[memcache]
dead_retry = 300
(IntOpt) Number of seconds memcached server is considered dead before it is tried again. This is used by the key
value store system (e.g. token pooled memcached persistence backend).
pool_connection_get_timeout = 10
pool_maxsize = 10
pool_unused_timeout = 60
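For example, to enable the caching layer globally and back it with memcached, you could set something like the following (the backend name and server address are illustrative; check that the backend you choose is available in your deployment):

[cache]
enabled = True
backend = dogpile.cache.memcached
backend_argument = url:127.0.0.1:11211
expiration_time = 600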
Description
[DEFAULT]
admin_bind_host = 0.0.0.0
admin_endpoint = None
admin_port = 35357
admin_token = ADMIN
admin_workers = None
(IntOpt) The number of worker processes to serve the admin WSGI application. Defaults to number of CPUs (minimum of 2).
compute_port = 8774
domain_id_immutable = True
(BoolOpt) Set this to false if you want to enable the ability for user, group and project entities to be moved between domains by updating their domain_id. Allowing
such movement is not recommended if the scope of a domain admin is being restricted by use of an appropriate
policy file (see policy.v3cloudsample as an example).
list_limit = None
(IntOpt) The maximum number of entities that will be returned in a collection, with no limit set by default. This
global limit may be then overridden for a specific driver, by
specifying a list_limit in the appropriate section (e.g. [assignment]).
max_param_size = 64
max_request_body_size = 114688
max_token_size = 8192
member_role_id = 9fe2ff9ee4384b1894a90878d3e92bab
(StrOpt) During a SQL upgrade, member_role_id will be used to create a new role that will replace records in the assignment table with explicit role grants. After migration, the member_role_id will be used in the API add_user_to_project.
member_role_name = _member_
public_bind_host = 0.0.0.0
public_endpoint = None
public_port = 5000
public_workers = None
strict_password_check = False
tcp_keepalive = False
tcp_keepidle = 600
[endpoint_filter]
driver = keystone.contrib.endpoint_filter.backends.sql.EndpointFilter
(StrOpt) Endpoint Filter backend driver.
return_all_endpoints_if_no_filter = True
[endpoint_policy]
driver = keystone.contrib.endpoint_policy.backends.sql.EndpointPolicy
(StrOpt) Endpoint policy backend driver.
[paste_deploy]
config_file = keystone-paste.ini
Description
[assignment]
cache_time = None
caching = True
(BoolOpt) Toggle for assignment caching. This has no effect unless global caching is enabled.
driver = None
list_limit = None
Description
[auth]
external = keystone.auth.plugins.external.DefaultDomain
password = keystone.auth.plugins.password.Password
token = keystone.auth.plugins.token.Token
Description
[keystone_authtoken]
admin_password = None
admin_tenant_name = admin
admin_token = None
admin_user = None
auth_admin_prefix =
auth_host = 127.0.0.1
auth_port = 35357
(IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https
auth_uri = None
auth_version = None
cache = None
cafile = None
certfile = None
check_revocations_for_cached = False
delay_auth_decision = False
enforce_token_bind = permissive
(StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding; "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not; "strict", like "permissive" but the token is rejected if the bind type is unknown; "required" to require that tokens carry some form of binding; or the name of a specific binding method that must be present in tokens.
hash_algorithms = md5
(ListOpt) Hash algorithms to use for hashing PKI tokens. This may be a single algorithm or multiple. The hashes will be tried in the order given, so put the preferred one first for performance. The result of the first hash will be stored in the cache. This will typically be set to multiple values only while migrating from a less secure algorithm to a more secure one. Once all the old tokens are expired this option should be set to a single value for better performance.
http_connect_timeout = None
http_request_max_retries = 3
identity_uri = None
include_service_catalog = True
(BoolOpt) (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for the service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False
keyfile = None
memcache_secret_key = None
memcache_security_strategy = None
(StrOpt) (optional) if defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the
cache. If the value is not one of these options or empty,
auth_token will raise an exception on initialization.
revocation_cache_time = 10
signing_dir = None
token_cache_time = 300
(IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens
for a configurable duration (in seconds). Set to -1 to disable caching completely.
Description
[signing]
ca_certs = /etc/keystone/ssl/certs/ca.pem
ca_key = /etc/keystone/ssl/private/cakey.pem
cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com
certfile = /etc/keystone/ssl/certs/signing_cert.pem
(StrOpt) Path of the certfile for token signing. For non-production environments, you may be interested in using `keystone-manage pki_setup` to generate self-signed certificates.
key_size = 2048
(IntOpt) Key size (in bits) for token signing cert (auto generated certificate).
keyfile = /etc/keystone/ssl/private/signing_key.pem
token_format = None
valid_days = 3650
(IntOpt) Days the token signing cert is valid for (auto generated certificate).
[ssl]
ca_certs = /etc/keystone/ssl/certs/ca.pem
ca_key = /etc/keystone/ssl/private/cakey.pem
cert_required = False
cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=localhost
certfile = /etc/keystone/ssl/certs/keystone.pem
(StrOpt) Path of the certfile for SSL. For non-production environments, you may be interested in using `keystone-manage ssl_setup` to generate self-signed certificates.
enable = False
key_size = 1024
keyfile = /etc/keystone/ssl/private/keystonekey.pem
valid_days = 3650
Description
[catalog]
cache_time = None
caching = True
driver = keystone.catalog.backends.sql.Catalog
list_limit = None
template_file = default_catalog.templates
(StrOpt) Catalog template file name for use with the template catalog backend.
Description
[DEFAULT]
memcached_servers = None
[keystone_authtoken]
memcached_servers = None
Description
[credential]
driver = keystone.credential.backends.sql.Credential
Description
[database]
backend = sqlalchemy
connection = None
connection_debug = 0
connection_trace = False
db_inc_retry_interval = True
db_max_retries = 20
(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10
(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
db_retry_interval = 1
idle_timeout = 3600
max_overflow = None
max_pool_size = None
max_retries = 10
min_pool_size = 1
mysql_sql_mode = TRADITIONAL
pool_timeout = None
retry_interval = 10
slave_connection = None
(StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite
sqlite_synchronous = True
use_db_reconnect = False
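For example, to store Identity data in a MySQL database, set connection in the [database] section (the host, database name, and password are placeholders for your environment):

[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
idle_timeout = 3600
max_retries = 10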
Description
[DEFAULT]
backdoor_port = None
pydev_debug_host = None
pydev_debug_port = None
standard_threads = False
[audit]
namespace = openstack
Description
[ec2]
driver = keystone.contrib.ec2.backends.kvs.Ec2
[keystone_ec2_token]
cafile = None
certfile = None
insecure = False
keyfile = None
url = http://localhost:5000/v2.0/ec2tokens
Description
[federation]
assertion_prefix =
(StrOpt) Value to be used when filtering assertion parameters from the environment.
driver = keystone.contrib.federation.backends.sql.Federation
Description
[identity]
default_domain_id = default
domain_config_dir = /etc/keystone/domains
(StrOpt) Path for Keystone to locate the domain specific identity configuration files if
domain_specific_drivers_enabled is set to true.
domain_specific_drivers_enabled = False
(BoolOpt) A subset (or all) of domains can have their own identity driver, each with their own partial configuration file in a domain configuration directory. Only values specific to the domain need to be placed in the domain specific configuration file. This feature is disabled by default; set to true to enable.
driver = keystone.identity.backends.sql.Identity
list_limit = None
max_password_length = 4096
Description
[kvs]
backends =
config_prefix = keystone.kvs
default_lock_timeout = 5
enable_key_mangler = True
Description
[ldap]
alias_dereferencing = default
allow_subtree_delete = False
(BoolOpt) Delete subtrees using the subtree delete control. Only enable this option if your LDAP server supports
subtree deletion.
auth_pool_connection_lifetime = 60
auth_pool_size = 100
chase_referrals = None
debug_level = None
dumb_member = cn=dumb,dc=nonexistent
group_additional_attribute_mapping =
(ListOpt) Additional attribute mappings for groups. Attribute mapping format is <ldap_attr>:<user_attr>, where
ldap_attr is the attribute in the LDAP entry and user_attr is
the Identity API attribute.
group_allow_create = True
group_allow_delete = True
group_allow_update = True
group_attribute_ignore =
group_desc_attribute = description
group_filter = None
group_id_attribute = cn
group_member_attribute = member
group_name_attribute = ou
group_objectclass = groupOfNames
group_tree_dn = None
page_size = 0
password = None
pool_connection_lifetime = 600
pool_connection_timeout = -1
pool_retry_delay = 0.1
pool_retry_max = 3
pool_size = 10
project_additional_attribute_mapping =
(ListOpt) Additional attribute mappings for projects. Attribute mapping format is <ldap_attr>:<user_attr>, where
ldap_attr is the attribute in the LDAP entry and user_attr is
the Identity API attribute.
project_allow_create = True
project_allow_delete = True
project_allow_update = True
project_attribute_ignore =
project_desc_attribute = description
project_domain_id_attribute = businessCategory
project_enabled_attribute = enabled
project_enabled_emulation = False
project_enabled_emulation_dn = None
project_filter = None
project_id_attribute = cn
project_member_attribute = member
project_name_attribute = ou
project_objectclass = groupOfNames
project_tree_dn = None
query_scope = one
role_additional_attribute_mapping =
(ListOpt) Additional attribute mappings for roles. Attribute mapping format is <ldap_attr>:<user_attr>, where
ldap_attr is the attribute in the LDAP entry and user_attr is
the Identity API attribute.
role_allow_create = True
role_allow_delete = True
role_allow_update = True
role_attribute_ignore =
role_filter = None
role_id_attribute = cn
role_member_attribute = roleOccupant
role_name_attribute = ou
role_objectclass = organizationalRole
role_tree_dn = None
suffix = cn=example,cn=com
tls_cacertdir = None
tls_cacertfile = None
tls_req_cert = demand
url = ldap://localhost
use_auth_pool = False
use_dumb_member = False
use_pool = False
use_tls = False
user = None
user_additional_attribute_mapping =
user_allow_create = True
user_allow_delete = True
user_allow_update = True
user_default_project_id_attribute = None
user_enabled_attribute = enabled
user_enabled_default = True
user_enabled_emulation = False
user_enabled_emulation_dn = None
user_enabled_invert = False
user_enabled_mask = 0
(IntOpt) Bitmask integer to indicate the bit that the enabled value is stored in if the LDAP server represents "enabled" as a bit on an integer rather than a boolean. A value of "0" indicates the mask is not used. If this is not set
to "0" the typical value is "2". This is typically used when
"user_enabled_attribute = userAccountControl".
user_filter = None
user_id_attribute = cn
user_mail_attribute = mail
user_name_attribute = sn
user_objectclass = inetOrgPerson
user_pass_attribute = userPassword
user_tree_dn = None
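A minimal [ldap] configuration that points Identity at an external LDAP server might look like the following (the server URL, bind credentials, and DNs are placeholders that depend on your directory layout):

[ldap]
url = ldap://ldap.example.com
user = cn=Manager,dc=example,dc=org
password = LDAP_PASS
suffix = dc=example,dc=org
user_tree_dn = ou=Users,dc=example,dc=org
user_objectclass = inetOrgPerson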
Description
[DEFAULT]
debug = False
(BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
fatal_deprecations = False
log_config_append = None
log_dir = None
(StrOpt) (Optional) The base directory used for relative -log-file paths.
log_file = None
(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format = None
(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available
logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and
logging_default_format_string instead.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
(StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
publish_errors = False
syslog_log_facility = LOG_USER
use_stderr = True
use_syslog = False
use_syslog_rfc_format = False
verbose = False
Description
[identity_mapping]
backward_compatible_ids = True
Description
[memcache]
servers = localhost:11211
socket_timeout = 3
Description
[oauth1]
access_token_duration = 86400
driver = keystone.contrib.oauth1.backends.sql.OAuth1
request_token_duration = 28800
Description
[os_inherit]
enabled = False
Description
[DEFAULT]
policy_default_rule = default
policy_file = policy.json
[policy]
driver = keystone.policy.backends.sql.Policy
list_limit = None
Description
[revoke]
caching = True
driver = keystone.contrib.revoke.backends.kvs.Revoke
expiration_buffer = 1800
(IntOpt) This value (calculated in seconds) is added to token expiration before a revocation event may be removed
from the backend.
Description
[saml]
assertion_expiration_time = 3600
certfile = /etc/keystone/ssl/certs/signing_cert.pem
(StrOpt) Path of the certfile for SAML signing. For non-production environments, you may be interested in using `keystone-manage pki_setup` to generate self-signed certificates. Note, the path cannot contain a comma.
idp_contact_company = None
idp_contact_email = None
idp_contact_name = None
idp_contact_surname = None
idp_contact_telephone = None
idp_contact_type = other
(StrOpt) Contact type. Allowed values are: technical, support, administrative billing, and other
idp_entity_id = None
(StrOpt) Entity ID value for unique Identity Provider identification. Usually FQDN is set with a suffix. A value is required to generate IDP Metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/idp
idp_lang = en
idp_metadata_path = /etc/keystone/saml2_idp_metadata.xml
idp_organization_display_name = None
idp_organization_name = None
idp_organization_url = None
idp_sso_endpoint = None
(StrOpt) Identity Provider Single-Sign-On service value, required in the Identity Provider's metadata. A value is required to generate IDP Metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/sso
keyfile = /etc/keystone/ssl/private/signing_key.pem
xmlsec1_binary = xmlsec1
(StrOpt) Binary to be called for XML signing. Install the appropriate package, specify absolute path or adjust your
PATH environment variable if the binary cannot be found.
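For example, a minimal [saml] section for publishing IdP metadata might look like the following (the keystone.example.com endpoints are placeholders for your own Identity endpoint):

[saml]
idp_entity_id = https://keystone.example.com/v3/OS-FEDERATION/saml2/idp
idp_sso_endpoint = https://keystone.example.com/v3/OS-FEDERATION/saml2/sso
certfile = /etc/keystone/ssl/certs/signing_cert.pem
keyfile = /etc/keystone/ssl/private/signing_key.pem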
Description
[DEFAULT]
crypt_strength = 40000
Description
[stats]
driver = keystone.contrib.stats.backends.kvs.Stats
Description
[DEFAULT]
fake_rabbit = False
Description
[token]
bind =
cache_time = None
(IntOpt) Time to cache tokens (in seconds). This has no effect unless global and token caching are enabled.
caching = True
(BoolOpt) Toggle for token system caching. This has no effect unless global caching is enabled.
driver = keystone.token.persistence.backends.sql.Token
enforce_token_bind = permissive
(StrOpt) Enforcement policy on tokens presented to Keystone with bind information. One of disabled, permissive,
strict, required or a specifically required bind mode, e.g.,
kerberos or x509 to require binding to that authentication.
expiration = 3600
hash_algorithm = md5
(StrOpt) The hash algorithm to use for PKI tokens. This can
be set to any algorithm that hashlib supports. WARNING:
Before changing this value, the auth_token middleware
must be configured with the hash_algorithms, otherwise
token revocation will not be processed correctly.
provider = None
(StrOpt) Controls the token construction, validation, and revocation operations. Core providers are
"keystone.token.providers.[pkiz|pki|uuid].Provider". The
default provider is uuid.
revocation_cache_time = 3600
(IntOpt) Time to cache the revocation list and the revocation events if revoke extension is enabled (in seconds). This
has no effect unless global and token caching are enabled.
revoke_by_id = True
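For example, to switch from the default UUID tokens to PKI tokens with a four-hour lifetime (the values shown are illustrative, not recommendations):

[token]
provider = keystone.token.providers.pki.Provider
expiration = 14400
caching = True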
Description
[trust]
driver = keystone.trust.backends.sql.Trust
enabled = True
Description
[DEFAULT]
matchmaker_heartbeat_freq = 300
matchmaker_heartbeat_ttl = 600
rpc_backend = rabbit
rpc_cast_timeout = 30
rpc_conn_pool_size = 30
rpc_response_timeout = 60
rpc_thread_pool_size = 64
Description
[DEFAULT]
amqp_auto_delete = False
amqp_durable_queues = False
control_exchange = keystone
default_publisher_id = None
notification_driver = []
notification_topics = notifications
transport_url = None
Description
[DEFAULT]
qpid_heartbeat = 60
qpid_hostname = localhost
qpid_hosts = $qpid_hostname:$qpid_port
qpid_password =
qpid_port = 5672
qpid_protocol = tcp
qpid_receiver_capacity = 1
qpid_sasl_mechanisms =
qpid_tcp_nodelay = True
qpid_topology_version = 1
qpid_username =
Description
[DEFAULT]
kombu_reconnect_delay = 1.0
(FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
336
A F T - Ju no -D R A F T - Ju no -D R A F T - Ju no -D RA F T - Ju no -D RA F T - Ju no -D RA F T - Ju no -
October 7, 2014
juno
Description
kombu_ssl_ca_certs =
kombu_ssl_certfile =
kombu_ssl_keyfile =
kombu_ssl_version =
rabbit_ha_queues = False
rabbit_host = localhost
rabbit_hosts = $rabbit_host:$rabbit_port
rabbit_login_method = AMQPLAIN
rabbit_max_retries = 0
(IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest
rabbit_port = 5672
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_virtual_host = /
Description
[DEFAULT]
rpc_zmq_bind_address = *
rpc_zmq_contexts = 1
rpc_zmq_host = localhost
rpc_zmq_ipc_dir = /var/run/openstack
rpc_zmq_matchmaker =
(StrOpt) MatchMaker driver.
oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
rpc_zmq_port = 9501
rpc_zmq_topic_backlog = None
Description
[matchmaker_redis]
host = 127.0.0.1
password = None
337
A F T - Ju no -D R A F T - Ju no -D R A F T - Ju no -D RA F T - Ju no -D RA F T - Ju no -D RA F T - Ju no -
October 7, 2014
Description
port = 6379
[matchmaker_ring]
ringfile = /etc/oslo/matchmaker_ring.json
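The ringfile option points at a static JSON mapping of RPC topics to the hosts that serve them. A minimal sketch of writing and reading such a file; the topic and host names below are illustrative, not taken from this reference:

```python
import json
import tempfile

# Illustrative ring: each RPC topic maps to the list of hosts that
# consume it. Topic and host names here are made up.
ring = {
    "scheduler": ["node-1", "node-2"],
    "conductor": ["node-1"],
}

# In a real deployment this would be the path given by ringfile,
# e.g. /etc/oslo/matchmaker_ring.json.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(ring, f, indent=2)
    path = f.name

with open(path) as f:
    loaded = json.load(f)

print(loaded["scheduler"])  # hosts registered for the "scheduler" topic
```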
keystone.conf
Use the keystone.conf file to configure most Identity service options:
[DEFAULT]
#
# Options defined in keystone
#
# A "shared secret" that can be used to bootstrap Keystone.
# This "token" does not represent a user, and carries no
# explicit authorization. To disable in production (highly
# recommended), remove AdminTokenAuthMiddleware from your
# paste application pipelines (for example, in
# keystone-paste.ini). (string value)
#admin_token=ADMIN
# The IP address of the network interface for the public
# service to listen on. (string value)
# Deprecated group/name - [DEFAULT]/bind_host
#public_bind_host=0.0.0.0
# The IP address of the network interface for the admin
# service to listen on. (string value)
# Deprecated group/name - [DEFAULT]/bind_host
#admin_bind_host=0.0.0.0
# The port which the OpenStack Compute service listens on.
# (integer value)
#compute_port=8774
# The port number which the admin service listens on. (integer
# value)
#admin_port=35357
# The port number which the public service listens on.
# (integer value)
#public_port=5000
# The base public endpoint URL for keystone that is advertised
# to clients (NOTE: this does NOT affect how keystone listens
# for connections) (string value).
# Defaults to the base host URL of the request. E.g., a
# request to http://server:5000/v2.0/users will
# default to http://server:5000. You should only need
# to set this value if the base URL contains a path
# (e.g., /prefix/v2.0) or the endpoint should be found on
# a different server.
#public_endpoint=http://localhost:%(public_port)s/
# The base admin endpoint URL for keystone that is advertised
# to clients (NOTE: this does NOT affect how keystone listens
# for connections) (string value).
# Defaults to the base host URL of the request. E.g., a
# request to http://server:35357/v2.0/users will
# default to http://server:35357. You should only need
# to set this value if the base URL contains a path
# (e.g., /prefix/v2.0) or the endpoint should be found on
# a different server.
#admin_endpoint=http://localhost:%(admin_port)s/
# onready allows you to send a notification when the process
# is ready to serve. For example, to have it notify using
# systemd, one could set shell command: "onready = systemd-notify
# --ready" or a module with a notify() method: "onready =
# keystone.common.systemd". (string value)
#onready=<None>
# enforced by optional sizelimit middleware
# (keystone.middleware:RequestBodySizeLimiter). (integer
# value)
#max_request_body_size=114688
# limit the sizes of user & tenant ID/names. (integer value)
#max_param_size=64
# similar to max_param_size, but provides an exception for
# token values. (integer value)
#max_token_size=8192
# During a SQL upgrade member_role_id will be used to create a
# new role that will replace records in the
# user_tenant_membership table with explicit role grants.
# After migration, the member_role_id will be used in the API
# add_user_to_project. (string value)
#member_role_id=9fe2ff9ee4384b1894a90878d3e92bab
# During a SQL upgrade member_role_id will be used to create a
# new role that will replace records in the
# user_tenant_membership table with explicit role grants.
# After migration, member_role_name will be ignored. (string
# value)
#member_role_name=_member_
# The value passed as the keyword "rounds" to passlib encrypt
# method. (integer value)
#crypt_strength=40000
# Set this to True if you want to enable TCP_KEEPALIVE on
# server sockets i.e. sockets used by the keystone wsgi server
# for client connections. (boolean value)
#tcp_keepalive=false
# Sets the value of TCP_KEEPIDLE in seconds for each server
# socket. Only applies if tcp_keepalive is True. Not supported
# on OS X. (integer value)
#tcp_keepidle=600
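The tcp_keepalive and tcp_keepidle options map onto the standard socket options of the same names. A sketch of what enabling them amounts to, using Python's socket module (TCP_KEEPIDLE is Linux-specific, which is why the option text notes it is unsupported on OS X):

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# tcp_keepalive=true: enable keepalive probes on the server socket.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# tcp_keepidle=600: seconds the connection must sit idle before the
# first keepalive probe is sent. Only defined on Linux.
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 600)

keepalive_enabled = sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)
print(keepalive_enabled)
sock.close()
```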
#
# Options defined in oslo.messaging
#
# Use durable queues in amqp. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues=false
# Auto-delete queues in amqp. (boolean value)
#amqp_auto_delete=false
# Size of RPC connection pool. (integer value)
#rpc_conn_pool_size=30
# Modules of exceptions that are permitted to be recreated
# upon receiving exception data from an rpc call. (list value)
#allowed_rpc_exception_modules=oslo.messaging.exceptions,nova.exception,cinder.exception,exceptions
# Qpid broker hostname. (string value)
#qpid_hostname=localhost
# Qpid broker port. (integer value)
#qpid_port=5672
# Qpid HA cluster host:port pairs. (list value)
#qpid_hosts=$qpid_hostname:$qpid_port
# Username for Qpid connection. (string value)
#qpid_username=
# Password for Qpid connection. (string value)
#qpid_password=
# Space separated list of SASL mechanisms to use for auth.
# (string value)
#qpid_sasl_mechanisms=
# Seconds between connection keepalive heartbeats. (integer
# value)
#qpid_heartbeat=60
# Transport to use, either 'tcp' or 'ssl'. (string value)
#qpid_protocol=tcp
# Whether to disable the Nagle algorithm. (boolean value)
#qpid_tcp_nodelay=true
# The qpid topology version to use. Version 1 is what was
# originally used by impl_qpid. Version 2 includes some
# backwards-incompatible changes that allow broker federation
# to work. Users should update to version 2 when they are
# able to take everything down, as it requires a clean break.
# (integer value)
#qpid_topology_version=1
# SSL version to use (valid only if SSL enabled). valid values
# are TLSv1, SSLv23 and SSLv3. SSLv2 may be available on some
# distributions. (string value)
#kombu_ssl_version=
# SSL key file (valid only if SSL enabled). (string value)
#kombu_ssl_keyfile=
# SSL cert file (valid only if SSL enabled). (string value)
#kombu_ssl_certfile=
# SSL certification authority file (valid only if SSL
# enabled). (string value)
#kombu_ssl_ca_certs=
# How long to wait before reconnecting in response to an AMQP
# consumer cancel notification. (floating point value)
#kombu_reconnect_delay=1.0
# The RabbitMQ broker address where a single node is used.
# (string value)
#rabbit_host=localhost
# The RabbitMQ broker port where a single node is used.
# (integer value)
#rabbit_port=5672
# RabbitMQ HA cluster host:port pairs. (list value)
#rabbit_hosts=$rabbit_host:$rabbit_port
# Connect over SSL for RabbitMQ. (boolean value)
#rabbit_use_ssl=false
# The RabbitMQ userid. (string value)
#rabbit_userid=guest
# The RabbitMQ password. (string value)
#rabbit_password=guest
# The RabbitMQ login method. (string value)
#rabbit_login_method=AMQPLAIN
# The RabbitMQ virtual host. (string value)
#rabbit_virtual_host=/
# How frequently to retry connecting with RabbitMQ. (integer
# value)
#rabbit_retry_interval=1
# How long to backoff for between retries when connecting to
# RabbitMQ. (integer value)
#rabbit_retry_backoff=2
# Maximum number of RabbitMQ connection retries. Default is 0
# (infinite retry count). (integer value)
#rabbit_max_retries=0
# Use HA queues in RabbitMQ (x-ha-policy: all). If you change
# this option, you must wipe the RabbitMQ database. (boolean
# value)
#rabbit_ha_queues=false
# If passed, use a fake RabbitMQ provider. (boolean value)
#fake_rabbit=false
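With the defaults above, each failed RabbitMQ connection attempt waits rabbit_retry_interval seconds, growing by rabbit_retry_backoff per retry. A sketch of that schedule; the 30-second cap is an assumption about oslo.messaging's internal behavior, not something stated in this file:

```python
def reconnect_delays(attempts, interval=1, backoff=2, cap=30):
    """Delay before each successive reconnect attempt: start at
    rabbit_retry_interval, grow by rabbit_retry_backoff per attempt,
    and (assumed) never wait longer than `cap` seconds."""
    delays = []
    delay = interval
    for _ in range(attempts):
        delays.append(min(delay, cap))
        delay += backoff
    return delays

print(reconnect_delays(6))  # [1, 3, 5, 7, 9, 11]
```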
# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve
# to this address. (string value)
#rpc_zmq_bind_address=*
# MatchMaker driver. (string value)
#rpc_zmq_matchmaker=oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
# ZeroMQ receiver listening port. (integer value)
#rpc_zmq_port=9501
# Number of ZeroMQ contexts, defaults to 1. (integer value)
#rpc_zmq_contexts=1
# Maximum number of ingress messages to locally buffer per
# topic. Default is unlimited. (integer value)
#rpc_zmq_topic_backlog=<None>
# Directory for holding IPC sockets. (string value)
#rpc_zmq_ipc_dir=/var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP
# address. Must match "host" option, if running Nova. (string
# value)
#rpc_zmq_host=keystone
# Seconds to wait before a cast expires (TTL). Only supported
# by impl_zmq. (integer value)
#rpc_cast_timeout=30
# Heartbeat frequency. (integer value)
#matchmaker_heartbeat_freq=300
# Heartbeat time-to-live. (integer value)
#matchmaker_heartbeat_ttl=600
# Host to locate redis. (string value)
#host=127.0.0.1
# Use this port to connect to redis host. (integer value)
#port=6379
#
# Options defined in keystone.notifications
#
# Default publisher_id for outgoing notifications (string
# value)
#default_publisher_id=<None>
#
# Options defined in keystone.middleware.ec2_token
#
# URL to get token from ec2 request. (string value)
#keystone_ec2_url=http://localhost:5000/v2.0/ec2tokens
# Required if EC2 server requires client certificate. (string
# value)
#keystone_ec2_keyfile=<None>
# Client certificate key filename. Required if EC2 server
# requires client certificate. (string value)
#keystone_ec2_certfile=<None>
# A PEM encoded certificate authority to use when verifying
# HTTPS connections. Defaults to the system CAs. (string
# value)
#keystone_ec2_cafile=<None>
#
# Options defined in keystone.openstack.common.eventlet_backdoor
#
# Enable eventlet backdoor. Acceptable values are 0, <port>,
# and <start>:<end>, where 0 results in listening on a random
# tcp port number; <port> results in listening on the
# specified port number (and not enabling backdoor if that
# port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range
# of port numbers. The chosen port is displayed in the
# service's log file. (string value)
#backdoor_port=<None>
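The three accepted forms of backdoor_port described above can be parsed like this; the helper is an illustration of the documented rule, not keystone's actual code:

```python
def parse_backdoor_port(spec):
    """Interpret a backdoor_port value: '0' (listen on a random port),
    '<port>' (a fixed port), or '<start>:<end>' (the smallest unused
    port in the range)."""
    if ":" in spec:
        start, end = spec.split(":", 1)
        return ("range", int(start), int(end))
    port = int(spec)
    if port == 0:
        return ("random",)
    return ("fixed", port)

print(parse_backdoor_port("0"))          # ('random',)
print(parse_backdoor_port("4444"))       # ('fixed', 4444)
print(parse_backdoor_port("4444:4450"))  # ('range', 4444, 4450)
```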
#
# Options defined in keystone.openstack.common.lockutils
#
# Whether to disable inter-process locks (boolean value)
#disable_process_locking=false
# Directory to use for lock files. (string value)
#lock_path=<None>
#
# Options defined in keystone.openstack.common.log
#
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
#debug=false
# Print more verbose output (set logging level to INFO instead
# of default WARNING level). (boolean value)
#verbose=false
# Log output to standard error (boolean value)
#use_stderr=true
# Format string to use for log messages with context (string
# value)
#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages without context
# (string value)
#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Data to append to log format when level is DEBUG (string
# value)
#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
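The effect of logging_default_format_string can be previewed with the standard logging module. Note that %(instance)s is not a built-in LogRecord attribute; OpenStack's log adapter supplies it, so this sketch passes it explicitly via extra:

```python
import io
import logging

# The documented default format for messages without context.
fmt = ("%(asctime)s.%(msecs)03d %(process)d %(levelname)s "
       "%(name)s [-] %(instance)s%(message)s")

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter(fmt, datefmt="%Y-%m-%d %H:%M:%S"))

log = logging.getLogger("keystone.demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Supply the non-standard %(instance)s field by hand.
log.info("sample message", extra={"instance": ""})

print(stream.getvalue().strip())
```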
#
# Options defined in keystone.openstack.common.policy
#
# JSON file containing policy (string value)
#policy_file=policy.json
# Rule enforced when requested rule is not found (string
# value)
#policy_default_rule=default
[assignment]
#
# Options defined in keystone
#
# Keystone Assignment backend driver. (string value)
#driver=<None>
# Toggle for assignment caching. This has no effect unless
# global caching is enabled. (boolean value)
#caching=true
# TTL (in seconds) to cache assignment data. This has no
# effect unless global caching is enabled. (integer value)
#cache_time=<None>
# Maximum number of entities that will be returned in an
# assignment collection. (integer value)
#list_limit=<None>
[auth]
#
# Options defined in keystone
#
# Default auth methods. (list value)
#methods=external,password,token
# The password auth plugin module. (string value)
#password=keystone.auth.plugins.password.Password
# The token auth plugin module. (string value)
#token=keystone.auth.plugins.token.Token
# The external (REMOTE_USER) auth plugin module. (string
# value)
#external=keystone.auth.plugins.external.DefaultDomain
[cache]
#
# Options defined in keystone
#
# Prefix for building the configuration dictionary for the
# cache region. This should not need to be changed unless
# there is another dogpile.cache region with the same
# configuration name. (string value)
#config_prefix=cache.keystone
# Default TTL, in seconds, for any cached item in the
# dogpile.cache region. This applies to any cached method that
# doesn't have an explicit cache expiration time defined for
# it. (integer value)
#expiration_time=600
# Dogpile.cache backend module. It is recommended that
# Memcache (dogpile.cache.memcached) or Redis
# (dogpile.cache.redis) be used in production deployments.
# Small workloads (single process) like devstack can use the
# dogpile.cache.memory backend. (string value)
#backend=keystone.common.cache.noop
# Use a key-mangling function (sha1) to ensure fixed length
# cache-keys. This is toggle-able for debugging purposes, it
# is highly recommended to always leave this set to True.
# (boolean value)
#use_key_mangler=true
# Arguments supplied to the backend module. Specify this
# option once per argument to be passed to the dogpile.cache
# backend. Example format: "<argname>:<value>". (multi valued)
#backend_argument=
# Proxy Classes to import that will affect the way the
# dogpile.cache backend functions. See the dogpile.cache
# documentation on changing-backend-behavior. Comma delimited
# list e.g. my.dogpile.proxy.Class, my.dogpile.proxyClass2.
# (list value)
#proxies=
# Global toggle for all caching using the should_cache_fn
# mechanism. (boolean value)
#enabled=false
# Extra debugging from the cache backend (cache keys,
# get/set/delete/etc calls) This is only really useful if you
# need to see the specific cache-backend get/set/delete calls
# with the keys/values. Typically this should be left set to
# False. (boolean value)
#debug_cache_backend=false
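The use_key_mangler option maps arbitrary-length cache keys to fixed-length ones. dogpile.cache's standard mangler is essentially a SHA-1 digest of the key; this sketch mirrors that behavior (an illustration inferred from the option text, not keystone source):

```python
import hashlib

def sha1_mangle_key(key):
    """Map an arbitrary-length cache key to a fixed-length hex digest,
    as a sha1-based key mangler does for dogpile.cache regions."""
    return hashlib.sha1(key.encode("utf-8")).hexdigest()

short_key = sha1_mangle_key("token")
long_key = sha1_mangle_key("token" * 1000)

# Both come out at the same fixed length (40 hex characters), which is
# why enabling the mangler keeps memcached key-length limits satisfied.
print(len(short_key), len(long_key))
```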
[catalog]
#
[credential]
#
# Options defined in keystone
#
# Keystone Credential backend driver. (string value)
#driver=keystone.credential.backends.sql.Credential
[database]
#
# Options defined in keystone.openstack.common.db.options
#
# The file name to use with SQLite (string value)
#sqlite_db=keystone.sqlite
# If True, SQLite uses synchronous mode (boolean value)
#sqlite_synchronous=true
# The backend to use for db (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend=sqlalchemy
# The SQLAlchemy connection string used to connect to the
# database (string value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection=<None>
# The SQL mode to be used for MySQL sessions. This option,
# including the default, overrides any server-set SQL mode. To
# use whatever SQL mode is set by the server configuration,
# set this to no value. Example: mysql_sql_mode= (string
# value)
#mysql_sql_mode=TRADITIONAL
# Timeout before idle SQL connections are reaped (integer
# value)
#idle_timeout=3600
# Minimum number of SQL connections to keep open in a pool
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size=1
# Maximum number of SQL connections to keep open in a pool
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size=<None>
# Maximum db connection retries during startup. (setting -1
# implies an infinite retry count) (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries=10
# Interval between retries of opening a sql connection
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval=10
# If set, use this value for max_overflow with sqlalchemy
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow=<None>
# Verbosity of SQL debugging information. 0=None,
# 100=Everything (integer value)
# Deprecated group/name - [DEFAULT]/sql_connection_debug
#connection_debug=0
# Add python stack traces to SQL as comment strings (boolean
# value)
# Deprecated group/name - [DEFAULT]/sql_connection_trace
#connection_trace=false
# If set, use this value for pool_timeout with sqlalchemy
# (integer value)
# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
#pool_timeout=<None>
# Enable the experimental use of database reconnect on
# connection lost (boolean value)
#use_db_reconnect=false
# seconds between db connection retries (integer value)
#db_retry_interval=1
# Whether to increase interval between db connection retries,
# up to db_max_retry_interval (boolean value)
#db_inc_retry_interval=true
# max seconds between db connection retries, if
# db_inc_retry_interval is enabled (integer value)
#db_max_retry_interval=10
# maximum db connection retries before error is raised.
# (setting -1 implies an infinite retry count) (integer value)
#db_max_retries=20
[ec2]
#
# Options defined in keystone
#
# Keystone EC2Credential backend driver. (string value)
#driver=keystone.contrib.ec2.backends.kvs.Ec2
[endpoint_filter]
#
# Options defined in keystone
#
# Keystone Endpoint Filter backend driver (string value)
#driver=keystone.contrib.endpoint_filter.backends.sql.EndpointFilter
# Toggle to return all active endpoints if no filter exists.
# (boolean value)
#return_all_endpoints_if_no_filter=true
[federation]
#
# Options defined in keystone
#
# Keystone Federation backend driver. (string value)
#driver=keystone.contrib.federation.backends.sql.Federation
# Value to be used when filtering assertion parameters from
# the environment. (string value)
#assertion_prefix=
[identity]
#
# Options defined in keystone
#
# This references the domain to use for all Identity API v2
# requests (which are not aware of domains). A domain with
# this ID will be created for you by keystone-manage db_sync
# in migration 008. The domain referenced by this ID cannot
# be deleted on the v3 API, to prevent accidentally breaking
# the v2 API. There is nothing special about this domain,
# other than the fact that it must exist in order to maintain
# support for your v2 clients. (string value)
#default_domain_id=default
[kvs]
#
# Options defined in keystone
#
# Extra dogpile.cache backend modules to register with the
# dogpile.cache library. (list value)
#backends=
# Prefix for building the configuration dictionary for the KVS
# region. This should not need to be changed unless there is
# another dogpile.cache region with the same configuration
# name. (string value)
#config_prefix=keystone.kvs
# Toggle to disable using a key-mangling function to ensure
# fixed length keys. This is toggle-able for debugging
# purposes, it is highly recommended to always leave this set
# to True. (boolean value)
#enable_key_mangler=true
# Default lock timeout for distributed locking. (integer
# value)
#default_lock_timeout=5
[ldap]
#
# Options defined in keystone
#
# URL for connecting to the LDAP server. (string value)
#url=ldap://localhost
# User BindDN to query the LDAP server. (string value)
#user=<None>
# Password for the BindDN to query the LDAP server. (string
# value)
#password=<None>
# LDAP server suffix (string value)
#suffix=cn=example,cn=com
# If true, will add a dummy member to groups. This is required
# if the objectclass for groups requires the "member"
# attribute. (boolean value)
#use_dumb_member=false
# DN of the "dummy member" to use when "use_dumb_member" is
# enabled. (string value)
#dumb_member=cn=dumb,dc=nonexistent
# allow deleting subtrees. (boolean value)
#allow_subtree_delete=false
# The LDAP scope for queries, this can be either "one"
# (onelevel/singleLevel) or "sub" (subtree/wholeSubtree).
# (string value)
#query_scope=one
# Maximum results per page; a value of zero ("0") disables
# paging. (integer value)
#page_size=0
# The LDAP dereferencing option for queries. This can be
# either "never", "searching", "always", "finding" or
# "default". The "default" option falls back to using default
# dereferencing configured by your ldap.conf. (string value)
#alias_dereferencing=default
# Override the system's default referral chasing behavior for
# queries. (boolean value)
#chase_referrals=<None>
# Search base for users. (string value)
#user_tree_dn=<None>
# LDAP search filter for users. (string value)
#user_filter=<None>
# LDAP objectClass for users. (string value)
#user_objectclass=inetOrgPerson
# LDAP attribute mapped to user id. (string value)
#user_id_attribute=cn
# LDAP attribute mapped to user name. (string value)
#user_name_attribute=sn
# LDAP attribute mapped to user email. (string value)
#user_mail_attribute=email
# LDAP objectClass for roles. (string value)
#role_objectclass=organizationalRole
# LDAP attribute mapped to role id. (string value)
#role_id_attribute=cn
# LDAP attribute mapped to role name. (string value)
#role_name_attribute=ou
# LDAP attribute mapped to role membership. (string value)
#role_member_attribute=roleOccupant
# List of attributes stripped off the role on update. (list
# value)
#role_attribute_ignore=
# Allow role creation in LDAP backend. (boolean value)
#role_allow_create=true
# Allow role update in LDAP backend. (boolean value)
#role_allow_update=true
# Allow role deletion in LDAP backend. (boolean value)
#role_allow_delete=true
# Additional attribute mappings for roles. Attribute mapping
# format is <ldap_attr>:<user_attr>, where ldap_attr is the
# attribute in the LDAP entry and user_attr is the Identity
# API attribute. (list value)
#role_additional_attribute_mapping=
# Search base for groups. (string value)
#group_tree_dn=<None>
# LDAP search filter for groups. (string value)
#group_filter=<None>
# LDAP objectClass for groups. (string value)
#group_objectclass=groupOfNames
# LDAP attribute mapped to group id. (string value)
#group_id_attribute=cn
# LDAP attribute mapped to group name. (string value)
#group_name_attribute=ou
# LDAP attribute mapped to show group membership. (string
# value)
#group_member_attribute=member
# LDAP attribute mapped to group description. (string value)
#group_desc_attribute=description
# List of attributes stripped off the group on update. (list
# value)
#group_attribute_ignore=
# Allow group creation in LDAP backend. (boolean value)
#group_allow_create=true
# Allow group update in LDAP backend. (boolean value)
#group_allow_update=true
# Allow group deletion in LDAP backend. (boolean value)
#group_allow_delete=true
# Additional attribute mappings for groups. Attribute mapping
# format is <ldap_attr>:<user_attr>, where ldap_attr is the
# attribute in the LDAP entry and user_attr is the Identity
# API attribute. (list value)
#group_additional_attribute_mapping=
# CA certificate file path for communicating with LDAP
# servers. (string value)
#tls_cacertfile=<None>
# CA certificate directory path for communicating with LDAP
# servers. (string value)
#tls_cacertdir=<None>
# Enable TLS for communicating with LDAP servers. (boolean
# value)
#use_tls=false
# Valid options for tls_req_cert are demand, never, and allow.
# (string value)
#tls_req_cert=demand
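When both user_objectclass and user_filter are set, the identity backend searches for entries matching both constraints. The combined RFC 4515 filter looks roughly like this; the helper and example filter are illustrative, not keystone's own code:

```python
def build_user_filter(user_objectclass, user_filter=None):
    """Combine the objectClass constraint with an optional extra
    filter into a single LDAP AND filter (RFC 4515 syntax)."""
    base = "(objectClass=%s)" % user_objectclass
    if user_filter:
        return "(&%s%s)" % (user_filter, base)
    return base

# With only user_objectclass set (the default inetOrgPerson):
print(build_user_filter("inetOrgPerson"))

# With an extra user_filter restricting membership (made-up DN):
print(build_user_filter("inetOrgPerson",
                        "(memberOf=cn=openstack,dc=example,dc=com)"))
```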
[matchmaker_ring]
#
# Options defined in oslo.messaging
#
# Matchmaker ring file (JSON). (string value)
# Deprecated group/name - [DEFAULT]/matchmaker_ringfile
#ringfile=/etc/oslo/matchmaker_ring.json
[memcache]
#
# Options defined in keystone
#
# Memcache servers in the format of "host:port" (list value)
#servers=localhost:11211
# Number of compare-and-set attempts to make when using
# compare-and-set in the token memcache back end. (integer
# value)
#max_compare_and_set_retry=16
[oauth1]
#
# Options defined in keystone
#
[os_inherit]
#
# Options defined in keystone
#
# role-assignment inheritance to projects from owning domain
# can be optionally enabled. (boolean value)
#enabled=false
[paste_deploy]
#
# Options defined in keystone
#
# Name of the paste configuration file that defines the
# available pipelines. (string value)
#config_file=keystone-paste.ini
[policy]
#
# Options defined in keystone
#
# Keystone Policy backend driver. (string value)
#driver=keystone.policy.backends.sql.Policy
# Maximum number of entities that will be returned in a policy
# collection. (integer value)
#list_limit=<None>
[revoke]
#
# Options defined in keystone
#
# An implementation of the backend for persisting revocation
# events. (string value)
#driver=keystone.contrib.revoke.backends.kvs.Revoke
# This value (calculated in seconds) is added to token
# expiration before a revocation event may be removed from the
# backend. (integer value)
#expiration_buffer=1800
[signing]
#
# Options defined in keystone
#
# Deprecated in favor of provider in the [token] section.
# (string value)
#token_format=<None>
# Path of the certfile for token signing. (string value)
#certfile=/etc/keystone/ssl/certs/signing_cert.pem
# Path of the keyfile for token signing. (string value)
#keyfile=/etc/keystone/ssl/private/signing_key.pem
# Path of the CA for token signing. (string value)
#ca_certs=/etc/keystone/ssl/certs/ca.pem
# Path of the CA Key for token signing. (string value)
#ca_key=/etc/keystone/ssl/private/cakey.pem
# Key Size (in bits) for token signing cert (auto generated
# certificate). (integer value)
#key_size=2048
# Day the token signing cert is valid for (auto generated
# certificate). (integer value)
#valid_days=3650
# Certificate Subject (auto generated certificate) for token
# signing. (string value)
#cert_subject=/C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com
[ssl]
#
# Options defined in keystone
#
# Toggle for SSL support on the keystone eventlet servers.
# (boolean value)
#enable=false
# Path of the certfile for SSL. (string value)
#certfile=/etc/keystone/ssl/certs/keystone.pem
# Path of the keyfile for SSL. (string value)
#keyfile=/etc/keystone/ssl/private/keystonekey.pem
# Path of the ca cert file for SSL. (string value)
#ca_certs=/etc/keystone/ssl/certs/ca.pem
# Path of the CA key file for SSL. (string value)
#ca_key=/etc/keystone/ssl/private/cakey.pem
# Require client certificate. (boolean value)
#cert_required=false
# SSL Key Length (in bits) (auto generated certificate).
# (integer value)
#key_size=1024
# Days the certificate is valid for once signed (auto
# generated certificate). (integer value)
#valid_days=3650
# SSL Certificate Subject (auto generated certificate).
# (string value)
#cert_subject=/C=US/ST=Unset/L=Unset/O=Unset/CN=localhost
[stats]
#
# Options defined in keystone
#
# Keystone stats backend driver. (string value)
#driver=keystone.contrib.stats.backends.kvs.Stats
[token]
#
# Options defined in keystone
#
# External auth mechanisms that should add bind information to
# token e.g. kerberos, x509. (list value)
#bind=
# Enforcement policy on tokens presented to keystone with bind
# information. One of disabled, permissive, strict, required
# or a specifically required bind mode e.g. kerberos or x509
# to require binding to that authentication. (string value)
#enforce_token_bind=permissive
# Amount of time a token should remain valid (in seconds).
# (integer value)
#expiration=3600
# Controls the token construction, validation, and revocation
# operations. Core providers are
# "keystone.token.providers.[pki|uuid].Provider". (string
# value)
#provider=<None>
# Keystone Token persistence backend driver. (string value)
#driver=keystone.token.backends.sql.Token
[trust]
#
# Options defined in keystone
#
# delegation and impersonation features can be optionally
# disabled. (boolean value)
#enabled=true
# Keystone Trust backend driver. (string value)
#driver=keystone.trust.backends.sql.Trust
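Keystone itself parses this file with oslo.config, but for a quick look at a deployed file the standard library's ConfigParser handles the same INI layout (without oslo's option registration or type validation). The values in this snippet are the documented defaults:

```python
import configparser

# A tiny stand-in for /etc/keystone/keystone.conf, using defaults
# from the sample above.
sample = """
[DEFAULT]
admin_port = 35357
public_port = 5000

[token]
expiration = 3600
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

print(cfg.getint("DEFAULT", "admin_port"))
print(cfg.getint("token", "expiration"))
```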
keystone-paste.ini
Use the keystone-paste.ini file to configure the Web Service Gateway Interface (WSGI) middleware pipeline for the Identity service.
# Keystone PasteDeploy configuration file.
[filter:debug]
paste.filter_factory = keystone.common.wsgi:Debug.factory
[filter:build_auth_context]
paste.filter_factory = keystone.middleware:AuthContextMiddleware.factory
[filter:token_auth]
paste.filter_factory = keystone.middleware:TokenAuthMiddleware.factory
[filter:admin_token_auth]
paste.filter_factory = keystone.middleware:AdminTokenAuthMiddleware.factory
[filter:xml_body]
paste.filter_factory = keystone.middleware:XmlBodyMiddleware.factory
[filter:xml_body_v2]
paste.filter_factory = keystone.middleware:XmlBodyMiddlewareV2.factory
[filter:xml_body_v3]
paste.filter_factory = keystone.middleware:XmlBodyMiddlewareV3.factory
[filter:json_body]
paste.filter_factory = keystone.middleware:JsonBodyMiddleware.factory
[filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory
[filter:crud_extension]
paste.filter_factory = keystone.contrib.admin_crud:CrudExtension.factory
[filter:ec2_extension]
paste.filter_factory = keystone.contrib.ec2:Ec2Extension.factory
[filter:ec2_extension_v3]
paste.filter_factory = keystone.contrib.ec2:Ec2ExtensionV3.factory
[filter:federation_extension]
paste.filter_factory = keystone.contrib.federation.routers:FederationExtension.factory
[filter:oauth1_extension]
paste.filter_factory = keystone.contrib.oauth1.routers:OAuth1Extension.factory
[filter:s3_extension]
paste.filter_factory = keystone.contrib.s3:S3Extension.factory
[filter:endpoint_filter_extension]
paste.filter_factory = keystone.contrib.endpoint_filter.routers:EndpointFilterExtension.factory
[filter:simple_cert_extension]
paste.filter_factory = keystone.contrib.simple_cert:SimpleCertExtension.factory
[filter:revoke_extension]
paste.filter_factory = keystone.contrib.revoke.routers:RevokeExtension.factory
[filter:url_normalize]
paste.filter_factory = keystone.middleware:NormalizingFilter.factory
[filter:sizelimit]
paste.filter_factory = keystone.middleware:RequestBodySizeLimiter.factory
[filter:stats_monitoring]
paste.filter_factory = keystone.contrib.stats:StatsMiddleware.factory
[filter:stats_reporting]
paste.filter_factory = keystone.contrib.stats:StatsExtension.factory
[filter:access_log]
paste.filter_factory = keystone.contrib.access:AccessLogMiddleware.factory
[app:public_service]
paste.app_factory = keystone.service:public_app_factory
[app:service_v3]
paste.app_factory = keystone.service:v3_app_factory
[app:admin_service]
paste.app_factory = keystone.service:admin_app_factory
[pipeline:public_api]
pipeline = sizelimit url_normalize build_auth_context token_auth admin_token_auth xml_body_v2 json_body ec2_extension user_crud_extension public_service
[pipeline:admin_api]
pipeline = sizelimit url_normalize build_auth_context token_auth admin_token_auth xml_body_v2 json_body ec2_extension s3_extension crud_extension admin_service
[pipeline:api_v3]
pipeline = sizelimit url_normalize build_auth_context token_auth admin_token_auth xml_body_v3 json_body ec2_extension_v3 s3_extension simple_cert_extension service_v3
[app:public_version_service]
paste.app_factory = keystone.service:public_version_app_factory
[app:admin_version_service]
paste.app_factory = keystone.service:admin_version_app_factory
[pipeline:public_version_api]
pipeline = sizelimit url_normalize xml_body public_version_service
[pipeline:admin_version_api]
pipeline = sizelimit url_normalize xml_body admin_version_service
[composite:main]
use = egg:Paste#urlmap
/v2.0 = public_api
/v3 = api_v3
/ = public_version_api
[composite:admin]
use = egg:Paste#urlmap
/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api
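Each filter above is resolved through a paste.filter_factory entry point: a callable that receives the global configuration plus the section's local options and returns a function that wraps a WSGI application. A minimal sketch of a custom middleware following the same pattern (the class name, header name, and module path here are illustrative, not part of keystone):

```python
# Usable from keystone-paste.ini as, e.g.:
#   [filter:request_id]
#   paste.filter_factory = mymodule:RequestIdMiddleware.factory
import uuid


class RequestIdMiddleware(object):
    """Attach a unique request ID header to every response."""

    def __init__(self, app, header='X-Request-Id'):
        self.app = app
        self.header = header

    def __call__(self, environ, start_response):
        req_id = uuid.uuid4().hex

        def _start_response(status, headers, exc_info=None):
            # Add our header alongside whatever the wrapped app set.
            headers.append((self.header, req_id))
            return start_response(status, headers, exc_info)

        return self.app(environ, _start_response)

    @classmethod
    def factory(cls, global_conf, **local_conf):
        # PasteDeploy calls this with the [DEFAULT] globals and the
        # filter section's own options, and expects back a callable
        # that wraps the next application in the pipeline.
        def _filter(app):
            return cls(app, **local_conf)
        return _filter
```

The pipeline definitions then simply name such filters left to right, with the terminating app last.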
logging.conf
You can specify a special logging configuration file in the keystone.conf configuration
file. For example, /etc/keystone/logging.conf.
For details, see the Python logging module documentation.
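A file in this format is loaded through Python's standard logging.config.fileConfig mechanism. A minimal, self-contained sketch using a trimmed-down configuration (the file content and logger names here are illustrative):

```python
import logging
import logging.config
import tempfile
import textwrap

# A stripped-down logging.conf in the same ini format keystone uses.
conf = textwrap.dedent("""
    [loggers]
    keys=root

    [handlers]
    keys=stream

    [formatters]
    keys=minimal

    [logger_root]
    level=INFO
    handlers=stream

    [handler_stream]
    class=StreamHandler
    level=NOTSET
    formatter=minimal
    args=(sys.stdout,)

    [formatter_minimal]
    format=%(levelname)s %(message)s
""")

with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
    f.write(conf)
    path = f.name

# The same call the service makes when a logging config file is set.
logging.config.fileConfig(path)
logging.getLogger().info('identity service started')
```

The `args=(sys.stdout,)` values are evaluated inside the logging module's namespace, which is why `sys` and handler classes can be referenced directly, as in the full configuration below.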
[loggers]
keys=root,access
[handlers]
keys=production,file,access_file,devel
[formatters]
keys=minimal,normal,debug
###########
# Loggers #
###########
[logger_root]
level=WARNING
handlers=file
[logger_access]
level=INFO
qualname=access
handlers=access_file
################
# Log Handlers #
################
[handler_production]
class=handlers.SysLogHandler
level=ERROR
formatter=normal
args=(('localhost', handlers.SYSLOG_UDP_PORT), handlers.SysLogHandler.LOG_USER)
[handler_file]
class=handlers.WatchedFileHandler
level=WARNING
formatter=normal
args=('error.log',)
[handler_access_file]
class=handlers.WatchedFileHandler
level=INFO
formatter=minimal
args=('access.log',)
[handler_devel]
class=StreamHandler
level=NOTSET
formatter=debug
args=(sys.stdout,)
##################
# Log Formatters #
##################
[formatter_minimal]
format=%(message)s
[formatter_normal]
format=(%(name)s): %(asctime)s %(levelname)s %(message)s
[formatter_debug]
format=(%(name)s): %(asctime)s %(levelname)s %(module)s %(funcName)s %(message)s
policy.json
Use the policy.json file to define additional access controls that apply to the Identity
service.
{
"admin_required": "role:admin or is_admin:1",
"service_role": "role:service",
"service_or_admin": "rule:admin_required or rule:service_role",
"owner" : "user_id:%(user_id)s",
"admin_or_owner": "rule:admin_required or rule:owner",
"default": "rule:admin_required",
"identity:get_region": "",
"identity:list_regions": "",
"identity:create_region": "rule:admin_required",
"identity:update_region": "rule:admin_required",
"identity:delete_region": "rule:admin_required",
"identity:get_service": "rule:admin_required",
"identity:list_services": "rule:admin_required",
"identity:create_service": "rule:admin_required",
"identity:update_service": "rule:admin_required",
"identity:delete_service": "rule:admin_required",
"identity:get_endpoint": "rule:admin_required",
"identity:list_endpoints": "rule:admin_required",
"identity:create_endpoint": "rule:admin_required",
"identity:update_endpoint": "rule:admin_required",
"identity:delete_endpoint": "rule:admin_required",
"identity:get_domain": "rule:admin_required",
"identity:list_domains": "rule:admin_required",
"identity:create_domain": "rule:admin_required",
"identity:update_domain": "rule:admin_required",
"identity:delete_domain": "rule:admin_required",
"identity:get_project": "rule:admin_required",
"identity:list_projects": "rule:admin_required",
"identity:list_user_projects": "rule:admin_or_owner",
"identity:create_project": "rule:admin_required",
"identity:update_project": "rule:admin_required",
"identity:delete_project": "rule:admin_required",
"identity:get_user": "rule:admin_required",
"identity:list_users": "rule:admin_required",
"identity:create_user": "rule:admin_required",
"identity:update_user": "rule:admin_required",
"identity:delete_user": "rule:admin_required",
"identity:change_password": "rule:admin_or_owner",
"identity:get_group": "rule:admin_required",
"identity:list_groups": "rule:admin_required",
"identity:list_groups_for_user": "rule:admin_or_owner",
"identity:create_group": "rule:admin_required",
"identity:update_group": "rule:admin_required",
"identity:delete_group": "rule:admin_required",
"identity:list_users_in_group": "rule:admin_required",
"identity:remove_user_from_group": "rule:admin_required",
"identity:check_user_in_group": "rule:admin_required",
"identity:add_user_to_group": "rule:admin_required",
"identity:get_credential": "rule:admin_required",
"identity:list_credentials": "rule:admin_required",
"identity:create_credential": "rule:admin_required",
"identity:update_credential": "rule:admin_required",
"identity:delete_credential": "rule:admin_required",
"identity:ec2_get_credential": "rule:admin_or_owner",
"identity:ec2_list_credentials": "rule:admin_or_owner",
"identity:ec2_create_credential": "rule:admin_or_owner",
    "identity:ec2_delete_credential": "rule:admin_required or (rule:owner and user_id:%(target.credential.user_id)s)",
"identity:get_role": "rule:admin_required",
"identity:list_roles": "rule:admin_required",
"identity:create_role": "rule:admin_required",
"identity:update_role": "rule:admin_required",
"identity:delete_role": "rule:admin_required",
"identity:check_grant": "rule:admin_required",
"identity:list_grants": "rule:admin_required",
"identity:create_grant": "rule:admin_required",
"identity:revoke_grant": "rule:admin_required",
"identity:list_role_assignments": "rule:admin_required",
"identity:get_policy": "rule:admin_required",
"identity:list_policies": "rule:admin_required",
"identity:create_policy": "rule:admin_required",
"identity:update_policy": "rule:admin_required",
"identity:delete_policy": "rule:admin_required",
"identity:check_token": "rule:admin_required",
"identity:validate_token": "rule:service_or_admin",
"identity:validate_token_head": "rule:service_or_admin",
"identity:revocation_list": "rule:service_or_admin",
"identity:revoke_token": "rule:admin_or_owner",
"identity:create_trust": "user_id:%(trust.trustor_user_id)s",
"identity:get_trust": "rule:admin_or_owner",
"identity:list_trusts": "",
"identity:list_roles_for_trust": "",
"identity:check_role_for_trust": "",
"identity:get_role_for_trust": "",
"identity:delete_trust": "",
"identity:create_consumer": "rule:admin_required",
"identity:get_consumer": "rule:admin_required",
"identity:list_consumers": "rule:admin_required",
"identity:delete_consumer": "rule:admin_required",
"identity:update_consumer": "rule:admin_required",
"identity:authorize_request_token": "rule:admin_required",
"identity:list_access_token_roles": "rule:admin_required",
"identity:get_access_token_role": "rule:admin_required",
"identity:list_access_tokens": "rule:admin_required",
"identity:get_access_token": "rule:admin_required",
"identity:delete_access_token": "rule:admin_required",
"identity:list_projects_for_endpoint": "rule:admin_required",
"identity:add_endpoint_to_project": "rule:admin_required",
"identity:check_endpoint_in_project": "rule:admin_required",
"identity:list_endpoints_for_project": "rule:admin_required",
"identity:remove_endpoint_from_project": "rule:admin_required",
"identity:create_identity_provider": "rule:admin_required",
"identity:list_identity_providers": "rule:admin_required",
"identity:get_identity_providers": "rule:admin_required",
"identity:update_identity_provider": "rule:admin_required",
"identity:delete_identity_provider": "rule:admin_required",
"identity:create_protocol": "rule:admin_required",
"identity:update_protocol": "rule:admin_required",
"identity:get_protocol": "rule:admin_required",
"identity:list_protocols": "rule:admin_required",
"identity:delete_protocol": "rule:admin_required",
"identity:create_mapping": "rule:admin_required",
"identity:get_mapping": "rule:admin_required",
"identity:list_mappings": "rule:admin_required",
"identity:delete_mapping": "rule:admin_required",
"identity:update_mapping": "rule:admin_required",
"identity:list_projects_for_groups": "",
"identity:list_domains_for_groups": "",
"identity:list_revoke_events": ""
}
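The rule language above composes three kinds of atoms: role checks (role:admin), literal checks against token credentials (is_admin:1), and attribute matches against the request target (user_id:%(user_id)s), combined with "and"/"or" and rule: references. Keystone evaluates these with the oslo policy engine; the toy evaluator below is written purely for illustration and handles only the simple, unparenthesized subset used by most rules in this file:

```python
def check(rules, name, creds, target=None):
    """Evaluate a policy rule string against credentials.

    Toy subset only: 'or' binds looser than 'and', no parentheses,
    atoms are role:/rule:/literal checks/attr matches like
    user_id:%(user_id)s. Not the real oslo.policy parser.
    """
    target = target or {}

    def atom(tok):
        if not tok:                       # empty rule string: always allowed
            return True
        kind, _, value = tok.partition(':')
        if kind == 'rule':                # reference to another named rule
            return check(rules, value, creds, target)
        if kind == 'role':                # role held by the caller
            return value in creds.get('roles', [])
        if value.startswith('%(') and value.endswith(')s'):
            # attribute match against the request target, e.g. owner check
            return creds.get(kind) == target.get(value[2:-2])
        return str(creds.get(kind)) == value   # literal, e.g. is_admin:1

    # Unknown actions fall back to the "default" rule, as in the file above.
    expr = rules.get(name, rules.get('default', ''))
    return any(all(atom(t) for t in part.split(' and '))
               for part in expr.split(' or '))
```

For example, "identity:change_password" resolves through admin_or_owner, so it passes for an admin, or for the user whose own user_id matches the target.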
Domain-specific configuration
Identity enables you to configure domain-specific authentication drivers. For example, you
can configure a domain to have its own LDAP or SQL server.
By default, the option to configure domain-specific drivers is disabled.
To enable domain-specific drivers, set these options in the [identity] section of the
keystone.conf file:
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains
Any options that you define in the domain-specific configuration file override options in the
primary configuration file for the specified domain. Any domain without a domain-specific
configuration file uses only the options in the primary configuration file.
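Each file in domain_config_dir is named keystone.DOMAIN_NAME.conf. As a sketch, a hypothetical domain that authenticates against its own LDAP server (all host names, DNs, and the domain name below are placeholders) might use /etc/keystone/domains/keystone.example_domain.conf:

```ini
[ldap]
url = ldap://ldap.example.com
user = cn=admin,dc=example,dc=com
password = SECRET
suffix = dc=example,dc=com

[identity]
driver = keystone.identity.backends.ldap.Identity
```

Only the options set here differ from keystone.conf; everything else is inherited from the primary file.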
(IntOpt) The number of worker processes to serve the admin WSGI application. Defaults to number of CPUs (minimum of 2).
[cache] memcache_pool_connection_get_timeout = 10
[cache] memcache_pool_maxsize = 10
[cache] memcache_pool_unused_timeout = 60
(IntOpt) Number of seconds a connection to memcached is held unused in the pool before it is closed. (keystone.cache.memcache_pool backend only)
[cache] memcache_socket_timeout = 3
(StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
[endpoint_policy] driver = keystone.contrib.endpoint_policy.backends.sql.EndpointPolicy
(StrOpt) Endpoint policy backend driver.
[identity_mapping] backward_compatible_ids = True
[identity_mapping] driver = keystone.identity.mapping_backends.sql.Mapping
[identity_mapping] generator = keystone.identity.id_generators.sha256.Generator
[keystone_authtoken] check_revocations_for_cached = False
[ldap] auth_pool_connection_lifetime = 60
[ldap] pool_connection_timeout = -1
[ldap] pool_retry_max = 3
[ldap] pool_size = 10
[ldap] project_additional_attribute_mapping = []
(ListOpt) Additional attribute mappings for projects. Attribute mapping format is <ldap_attr>:<user_attr>, where ldap_attr is the attribute in the LDAP entry and user_attr is the Identity API attribute.
[ldap] project_attribute_ignore = []
[ldap] project_id_attribute = cn
[ldap] project_name_attribute = ou
(IntOpt) Number of seconds memcached server is considered dead before it is tried again. This is used by the key value store system (e.g. token pooled memcached persistence backend).
[memcache] pool_connection_get_timeout = 10
[memcache] pool_maxsize = 10
[memcache] pool_unused_timeout = 60
[memcache] socket_timeout = 3
(StrOpt) Path of the certfile for SAML signing. For non-production environments, you may be interested in using `keystone-manage pki_setup` to generate self-signed certificates. Note, the path cannot contain a comma.
(StrOpt) Contact type. Allowed values are: technical, support, administrative, billing, and other.
(StrOpt) Entity ID value for unique Identity Provider identification. Usually FQDN is set with a suffix. A value is required to generate IDP Metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/idp
[saml] idp_lang = en
(StrOpt) Identity Provider Single-Sign-On service value, required in the Identity Provider's metadata. A value is required to generate IDP Metadata. For example: https://keystone.example.com/v3/OS-FEDERATION/saml2/sso
[saml] keyfile = /etc/keystone/ssl/private/signing_key.pem
(StrOpt) Path of the keyfile for SAML signing. Note, the path cannot contain a comma.
[saml] xmlsec1_binary = xmlsec1
(StrOpt) Binary to be called for XML signing. Install the appropriate package, specify absolute path, or adjust your PATH environment variable if the binary cannot be found.
(StrOpt) The hash algorithm to use for PKI tokens. This can be set to any algorithm that hashlib supports. WARNING: Before changing this value, the auth_token middleware must be configured with the same hash_algorithms, otherwise token revocation will not be processed correctly.
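The option accepts any digest name hashlib recognizes; a long PKI token is reduced to a short identifier by hashing its raw bytes, so server and middleware must agree on the algorithm. A small sketch of that hashing step (the token string and helper name are made up for illustration):

```python
import hashlib


def hash_token(token_body, algorithm='md5'):
    # Reduce a long signed token to a short cache/revocation key by
    # digesting its raw bytes with the configured hash algorithm.
    return hashlib.new(algorithm, token_body.encode('utf-8')).hexdigest()


token = 'MIIC...example-signed-token...'
md5_id = hash_token(token)                # matches the md5 default
sha256_id = hash_token(token, 'sha256')   # stronger choice; middleware must match
```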
Option: previous default value | new default value
[DEFAULT] control_exchange: openstack | keystone
[DEFAULT] default_log_levels: amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN | amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN
[database] sqlite_db: keystone.sqlite | oslo.sqlite
[keystone_authtoken] revocation_cache_time: 300 | 10
[ldap] user_mail_attribute
[token] driver: keystone.token.backends.sql.Token | keystone.token.persistence.backends.sql.Token
Table 6.38. Deprecated options
Deprecated option | New option
[ldap] tenant_allow_delete | [ldap] project_allow_delete
[ldap] tenant_allow_create | [ldap] project_allow_create
[ldap] tenant_objectclass | [ldap] project_objectclass
[ldap] tenant_filter | [ldap] project_filter
[ldap] tenant_member_attribute | [ldap] project_member_attribute
[ldap] tenant_additional_attribute_mapping | [ldap] project_additional_attribute_mapping
[ldap] tenant_allow_update | [ldap] project_allow_update
[ldap] tenant_desc_attribute | [ldap] project_desc_attribute
[ldap] tenant_enabled_emulation | [ldap] project_enabled_emulation
[ldap] tenant_name_attribute | [ldap] project_name_attribute
[ldap] tenant_attribute_ignore | [ldap] project_attribute_ignore
[ldap] tenant_enabled_attribute | [ldap] project_enabled_attribute
[ldap] tenant_id_attribute | [ldap] project_id_attribute
[ldap] tenant_domain_id_attribute | [ldap] project_domain_id_attribute
[ldap] tenant_tree_dn | [ldap] project_tree_dn
[ldap] tenant_enabled_emulation_dn | [ldap] project_enabled_emulation_dn
7. Image Service
Table of Contents
Configure the API ...................................................................................... 379
Configure the RPC messaging system .......................................................... 380
Support for ISO images ................................................................................ 383
Configure back ends .................................................................................... 383
Image Service sample configuration files ...................................................... 389
New, updated and deprecated options in Juno for OpenStack Image Service ... 408
Compute relies on an external image service to store virtual machine images and maintain a
catalog of available images. By default, Compute is configured to use the OpenStack Image
Service (Glance), which is currently the only supported image service.
If your installation requires euca2ools to register new images, you must run the nova-objectstore service. This service provides an Amazon S3 front-end for Glance, which is required by euca2ools.
To customize the Compute service, use the configuration option settings documented in
Table 2.30, Description of glance configuration options [242] and Table 2.50, Description of S3 configuration options [254].
You can modify many options in the OpenStack Image Service. The following tables provide
a comprehensive list.
Description
[keystone_authtoken]
admin_password = None
admin_tenant_name = admin
admin_token = None
admin_user = None
auth_admin_prefix =
auth_host = 127.0.0.1
auth_port = 35357
(IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https
auth_uri = None
auth_version = None
cache = None
cafile = None
certfile = None
check_revocations_for_cached = False
delay_auth_decision = False
enforce_token_bind = permissive
(StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding; "permissive" (default) to validate binding information if the bind type is of a form known to the server, and ignore it if not; "strict", like "permissive" but the token is rejected if the bind type is unknown; "required", where any form of token binding is needed to be allowed; or the name of a binding method that must be present in tokens.
hash_algorithms = md5
http_connect_timeout = None
http_request_max_retries = 3
identity_uri = None
include_service_catalog = True
(BoolOpt) (optional) indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False
keyfile = None
memcache_secret_key = None
memcache_security_strategy = None
(StrOpt) (optional) if defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
revocation_cache_time = 10
signing_dir = None
token_cache_time = 300
(IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens
for a configurable duration (in seconds). Set to -1 to disable caching completely.
Description
[DEFAULT]
allow_additional_image_properties = True
(BoolOpt) Whether to allow users to specify image properties beyond what the image schema provides
api_limit_max = 1000
backlog = 4096
(IntOpt) The backlog value that will be used when creating the TCP listener socket.
bind_host = 0.0.0.0
bind_port = None
data_api = glance.db.sqlalchemy.api
image_location_quota = 10
(IntOpt) Maximum number of locations allowed on an image. Negative values evaluate to unlimited.
image_member_quota = 128
image_property_quota = 128
image_tag_quota = 128
limit_param_default = 25
lock_path = None
memcached_servers = None
metadata_encryption_key = None
metadata_source_path = /etc/glance/metadefs/
property_protection_file = None
property_protection_rule_format = roles
show_image_direct_url = False
user_storage_quota = 0
(StrOpt) Set a system wide quota for every user. This value is the total capacity that a user can use across all storage systems. A value of 0 means unlimited. An optional unit can be specified for the value. Accepted units are B, KB, MB, GB and TB representing Bytes, KiloBytes, MegaBytes, GigaBytes and TeraBytes respectively. If no unit is specified then Bytes is assumed. Note that there should not be any space between value and unit, and units are case sensitive.
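The accepted format, a number immediately followed by an optional case-sensitive unit, can be parsed as follows. This is a sketch for illustration only (glance's actual parser and its choice of 1024 vs 1000 multiples are not taken from this document):

```python
# Binary multiples assumed here; KB/MB/GB/TB as powers of 1024.
UNITS = {'B': 1, 'KB': 1024, 'MB': 1024 ** 2,
         'GB': 1024 ** 3, 'TB': 1024 ** 4}


def parse_quota(value):
    """Return a byte count for strings like '0', '512MB' or '10GB'.

    No space is allowed between value and unit; a bare number is bytes.
    """
    # Try longer suffixes first so '10KB' is not read as '10K' + 'B'.
    for unit in sorted(UNITS, key=len, reverse=True):
        if value.endswith(unit):
            return int(value[:-len(unit)]) * UNITS[unit]
    return int(value)  # no unit: bytes
```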
workers = 4
[glance_store]
os_region_name = None
[image_format]
container_formats = ami, ari, aki, bare, ovf, ova
[keystone_authtoken]
memcached_servers = None
[task]
eventlet_executor_pool_size = 1000
task_executor = eventlet
task_time_to_live = 48
Description
[DEFAULT]
db_enforce_mysql_charset = True
[database]
backend = sqlalchemy
connection = None
connection_debug = 0
connection_trace = False
db_inc_retry_interval = True
db_max_retries = 20
(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10
(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
db_retry_interval = 1
idle_timeout = 3600
max_overflow = None
max_pool_size = None
max_retries = 10
min_pool_size = 1
mysql_sql_mode = TRADITIONAL
pool_timeout = None
retry_interval = 10
slave_connection = None
(StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite
sqlite_synchronous = True
use_db_reconnect = False
Description
[DEFAULT]
cleanup_scrubber = False
cleanup_scrubber_time = 86400
delayed_delete = False
image_cache_dir = None
image_cache_driver = sqlite
image_cache_max_size = 10737418240
image_cache_sqlite_db = cache.db
image_cache_stall_time = 86400
scrub_time = 0
scrubber_datadir = /var/lib/glance/scrubber
(StrOpt) Directory that the scrubber will use to track information about what to delete. Make sure this is set in
glance-api.conf and glance-scrubber.conf.
Description
[DEFAULT]
debug = False
(BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
fatal_deprecations = False
log_config_append = None
log_dir = None
(StrOpt) (Optional) The base directory used for relative --log-file paths.
log_file = None
(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format = None
(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available
logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and
logging_default_format_string instead.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
(StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d
%(process)d TRACE %(name)s %(instance)s
publish_errors = False
syslog_log_facility = LOG_USER
use_stderr = True
use_syslog = False
use_syslog_rfc_format = False
verbose = False
Description
[DEFAULT]
policy_default_rule = default
policy_file = policy.json
Description
[profiler]
enabled = True
trace_sqlalchemy = True
Description
[matchmaker_redis]
host = 127.0.0.1
password = None
port = 6379
[matchmaker_ring]
ringfile = /etc/oslo/matchmaker_ring.json
Description
[DEFAULT]
admin_password = None
admin_tenant_name = None
admin_user = None
auth_region = None
auth_strategy = noauth
auth_url = None
registry_client_ca_file = None
registry_client_cert_file = None
registry_client_insecure = False
registry_client_key_file = None
registry_client_protocol = http
registry_client_timeout = 600
(IntOpt) The period of time, in seconds, that the API server will wait for a registry request to complete. A value of 0
implies no timeout.
registry_host = 0.0.0.0
registry_port = 9191
Description
[DEFAULT]
fake_rabbit = False
pydev_worker_debug_host = None
pydev_worker_debug_port = 5678
Description
[DEFAULT]
admin_role = admin
allow_anonymous_access = False
enable_v1_api = True
enable_v1_registry = True
enable_v2_api = True
enable_v2_registry = True
eventlet_hub = poll
image_size_cap = 1099511627776
location_strategy = location_order
(StrOpt) This value sets what strategy will be used to determine the image location order. Currently two strategies are packaged with Glance: 'location_order' and 'store_type'.
max_header_line = 16384
(IntOpt) Maximum line size of message headers to be accepted. max_header_line may need to be increased when
using large tokens (typically those generated by the Keystone v3 API with big service catalogs
owner_is_tenant = True
(BoolOpt) When true, this option sets the owner of an image to be the tenant. Otherwise, the owner of the image
will be the authenticated user issuing the request.
send_identity_headers = False
(BoolOpt) Whether to pass through headers containing user and tenant information when making requests to the registry. This allows the registry to use the context middleware without keystonemiddleware's auth_token middleware, removing calls to the keystone auth service. It is recommended that when using this option, secure communication between glance api and glance registry is ensured by means other than auth_token middleware.
show_multiple_locations = False
(BoolOpt) Whether to include the backend image locations in image properties. Revealing storage location can be a security risk, so use this setting with caution! This overrides show_image_direct_url.
tcp_keepidle = 600
use_user_token = True
[glance_store]
default_store = file
[paste_deploy]
config_file = None
flavor = None
(StrOpt) Partial name of a pipeline in your paste configuration file with the service name removed. For example, if
your paste section name is [pipeline:glance-api-keystone]
use the value "keystone"
[store_type_location_strategy]
store_type_preference =
Description
[DEFAULT]
ca_file = None
cert_file = None
key_file = None
(StrOpt) Private key file to use when starting API server securely.
Description
[DEFAULT]
kombu_reconnect_delay = 1.0
(FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_ssl_ca_certs =
kombu_ssl_certfile =
kombu_ssl_keyfile =
kombu_ssl_version =
rabbit_ha_queues = False
rabbit_host = localhost
rabbit_hosts = $rabbit_host:$rabbit_port
rabbit_login_method = AMQPLAIN
rabbit_max_retries = 0
(IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest
rabbit_port = 5672
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_virtual_host = /
Description
[DEFAULT]
qpid_heartbeat = 60
qpid_hostname = localhost
qpid_hosts = $qpid_hostname:$qpid_port
qpid_password =
qpid_port = 5672
qpid_protocol = tcp
qpid_receiver_capacity = 1
qpid_sasl_mechanisms =
qpid_tcp_nodelay = True
qpid_topology_version = 1
(IntOpt) The qpid topology version to use. Version 2 includes backward-incompatible changes that allow broker federation to work. Users should update to version 2 when they are able to take everything down, as it requires a clean break.
qpid_username =
Description
[DEFAULT]
rpc_zmq_bind_address = *
rpc_zmq_contexts = 1
rpc_zmq_host = localhost
rpc_zmq_ipc_dir = /var/run/openstack
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
(StrOpt) MatchMaker driver.
rpc_zmq_port = 9501
rpc_zmq_topic_backlog = None
Description
[DEFAULT]
amqp_auto_delete = False
amqp_durable_queues = False
control_exchange = openstack
default_publisher_id = image.localhost
notification_driver = []
notification_topics = notifications
transport_url = None
Description
[DEFAULT]
allowed_rpc_exception_modules = openstack.common.exception, glance.common.exception, exceptions
(ListOpt) Modules of exceptions that are permitted to be recreated upon receiving exception data from an rpc call.
matchmaker_heartbeat_freq = 300
matchmaker_heartbeat_ttl = 600
rpc_backend = rabbit
rpc_cast_timeout = 30
rpc_conn_pool_size = 30
rpc_response_timeout = 60
rpc_thread_pool_size = 64
2.
In this command, ubuntu.iso is the name for the ISO image after it is loaded to the
Image Service, and ubuntu-13.04-server-amd64.iso is the name of the source
ISO image.
3.
In this command, ubuntu.iso is the ISO image, and instance_name is the name of
the new instance.
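The commands these steps refer to are not reproduced here; as a sketch, with the CLI clients of that era they would look roughly like the following (the flavor choice is illustrative; image and instance names match the text):

```shell
# Step 2: load the ISO into the Image Service
glance image-create --name ubuntu.iso \
  --container-format bare --disk-format iso \
  --file ubuntu-13.04-server-amd64.iso

# Step 3: boot an instance from the ISO image
nova boot --image ubuntu.iso --flavor m1.small instance_name
```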
Description
[glance_store]
cinder_api_insecure = False
cinder_ca_certificates_file = None
cinder_catalog_info = volume:cinder:publicURL
(StrOpt) Info to match when looking for cinder in the service catalog. Format is : separated values of the form:
<service_type>:<service_name>:<endpoint_type>
cinder_endpoint_template = None
(StrOpt) Override service catalog lookup with template for cinder endpoint e.g. http://localhost:8776/v1/%(project_id)s
cinder_http_retries = 3
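The %(project_id)s placeholder in cinder_endpoint_template is a standard Python mapping-interpolation token that the service fills with the request's project (tenant) ID. A minimal sketch of the expansion (the project ID shown is a made-up example):

```python
# cinder_endpoint_template uses Python %-interpolation with a mapping;
# the service substitutes the request's project (tenant) ID.
template = "http://localhost:8776/v1/%(project_id)s"
endpoint = template % {"project_id": "b31ea1a9ab354fd6bcc5e9fbcb0f3960"}
print(endpoint)  # → http://localhost:8776/v1/b31ea1a9ab354fd6bcc5e9fbcb0f3960
```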
Description
[glance_store]
filesystem_store_datadir = None
filesystem_store_datadirs = None
filesystem_store_file_perm = 0
filesystem_store_metadata_file = None
Description
[glance_store]
mongodb_store_db = None
mongodb_store_uri = None
(StrOpt) Hostname or IP address of the instance to connect to, or a mongodb URI, or a list of hostnames / mongodb URIs. If host is an IPv6 literal it must be enclosed in '[' and ']' characters following the RFC2732 URL syntax (e.g. '[::1]' for localhost)
Description
[glance_store]
rbd_store_ceph_conf = /etc/ceph/ceph.conf
(StrOpt) Ceph configuration file path. If <None>, librados will locate the default config. If using cephx authentication, this file should include a reference to the right keyring in a client.<USER> section
rbd_store_chunk_size = 8
rbd_store_pool = images
rbd_store_user = None
Description
[glance_store]
s3_store_access_key = None
s3_store_bucket = None
s3_store_bucket_url_format = subdomain
(StrOpt) The S3 calling format used to determine the bucket. Either subdomain or path can be used.
s3_store_create_bucket_on_put = False
s3_store_host = None
s3_store_object_buffer_dir = None
s3_store_secret_key = None
Description
[glance_store]
sheepdog_store_address = localhost
sheepdog_store_chunk_size = 64
sheepdog_store_port = 7000
Description
[DEFAULT]
default_swift_reference = ref1
(StrOpt) The reference to the default swift account/backing store parameters to use for adding new images.
swift_store_auth_address = None
swift_store_config_file = None
swift_store_key = None
swift_store_user = None
(StrOpt) The user to authenticate against the Swift authentication service (deprecated)
[glance_store]
default_swift_reference = ref1
(StrOpt) The reference to the default swift account/backing store parameters to use for adding new images.
swift_enable_snet = False
swift_store_admin_tenants =
swift_store_auth_address = None
swift_store_auth_insecure = False
swift_store_auth_version = 2
swift_store_config_file = None
swift_store_container = glance
swift_store_create_container_on_put = False
swift_store_endpoint_type = publicURL
(StrOpt) A string giving the endpoint type of the swift service to use (publicURL, adminURL or internalURL). This setting is only used if swift_store_auth_version is 2.
swift_store_key = None
swift_store_large_object_chunk_size = 200
swift_store_large_object_size = 5120
swift_store_multi_tenant = False
swift_store_region = None
swift_store_retry_get_count = 0
(IntOpt) The number of times a Swift download will be retried before the request fails.
swift_store_service_type = object-store
swift_store_ssl_compression = True
swift_store_user = None
(StrOpt) The user to authenticate against the Swift authentication service (deprecated)
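The two large-object options above interact: an image larger than swift_store_large_object_size MB is uploaded as a manifest of swift_store_large_object_chunk_size MB segments. A small sketch of that arithmetic (the helper name and the 6000 MB image size are illustrative, not Glance's actual code):

```python
import math

# Defaults from the table above (both in MB).
large_object_size = 5120
chunk_size = 200

def segment_count(image_mb):
    # Images at or below the threshold are stored as a single object;
    # larger ones are split into fixed-size chunks plus a manifest.
    if image_mb <= large_object_size:
        return 1
    return math.ceil(image_mb / chunk_size)

print(segment_count(1024))  # → 1
print(segment_count(6000))  # → 30
```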
Note
Any data stores that are configured for the Image Service must also be configured for the Compute service.
You can specify vCenter data stores directly by using the data store name or Storage Policy
Based Management (SPBM), which requires vCenter Server 5.5 or later. For details, see the
section called Configure vCenter data stores for the back end [388].
Note
If you intend to use multiple data stores for the back end, use the SPBM feature.
In the DEFAULT section, set the default_store parameter to vsphere, as shown in this
code sample:
[DEFAULT]
# Which back end scheme should Glance use by default if not specified
# in a request to add a new image to Glance? Known schemes are determined
# by the known_stores option below.
# Default: 'file'
default_store = vsphere
The following table describes the parameters in the VMware Datastore Store Options section:
Description
[glance_store]
vmware_api_insecure = False
vmware_api_retry_count = 10
vmware_datacenter_path = ha-datacenter
vmware_datastore_name = None
vmware_server_host = None
vmware_server_password = None
vmware_server_username = None
vmware_store_image_dir = /openstack_glance
(StrOpt) The name of the directory where the glance images will be stored in the VMware datastore.
vmware_task_poll_interval = 5
Note
SPBM requires vCenter Server 5.5 or later.
2.
3. In vCenter, use tagging to identify the data stores and define a storage policy:
a.
b. Apply the tag to the data stores to be used by the SPBM policy.
c. Create a tag-based storage policy that uses one or more tags to identify a set of data stores.
Note
For details about creating tags in vSphere, see the vSphere documentation.
For details about storage policies in vSphere, see the vSphere documentation.
2.
3.
4.
5.
Note
If you do not set this parameter, the storage policy cannot be used to place
images in the data store.
Complete the other vCenter configuration parameters as appropriate.
glance-api.conf
The configuration file for the Image Service API is found in the glance-api.conf file.
This file must be modified after installation.
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
#verbose = False
# Maximum image size (in bytes) that may be uploaded through the
# Glance API server. Defaults to 1 TB.
# WARNING: this value should only be increased after careful consideration
# and must be set to a value under 8 EB (9223372036854775808).
#image_size_cap = 1099511627776
# Address to bind the API server
bind_host = 0.0.0.0
# Port to bind the API server to
bind_port = 9292
# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
log_file = /var/log/glance/api.log
# Backlog requests when creating socket
backlog = 4096
# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
#tcp_keepidle = 600
# API to use for accessing data. Default value points to sqlalchemy
# package, it is also possible to use: glance.db.registry.api
# data_api = glance.db.sqlalchemy.api
# Number of Glance API worker processes to start.
# On machines with more than one CPU increasing this value
# may improve performance (especially if using SSL with
# compression turned on). It is typically recommended to set
# this value to the number of CPUs present on your machine.
workers = 1
# Configuration options if sending notifications via rabbit (these are
# the defaults)
rabbit_host = localhost
rabbit_port = 5672
rabbit_use_ssl = false
rabbit_userid = guest
rabbit_password = guest
rabbit_virtual_host = /
rabbit_notification_exchange = glance
rabbit_notification_topic = notifications
rabbit_durable_queues = False
# Configuration options if sending notifications via Qpid (these are
# the defaults)
qpid_notification_exchange = glance
qpid_notification_topic = notifications
qpid_hostname = localhost
qpid_port = 5672
qpid_username =
qpid_password =
qpid_sasl_mechanisms =
qpid_reconnect_timeout = 0
qpid_reconnect_limit = 0
qpid_reconnect_interval_min = 0
qpid_reconnect_interval_max = 0
qpid_reconnect_interval = 0
qpid_heartbeat = 5
# Set to 'ssl' to enable SSL
qpid_protocol = tcp
qpid_tcp_nodelay = True
# ============ Filesystem Store Options ========================
# Directory that the Filesystem backend store
# writes image data to
filesystem_store_datadir = /var/lib/glance/images/
# ============ Swift Store Options =============================
# Valid versions are '2' for keystone and '1' for swauth and rackspace
swift_store_auth_version = 2
# Address where the Swift authentication service lives
# Valid schemes are 'http://' and 'https://'
# If no scheme specified, default to 'https://'
# For swauth, use something like '127.0.0.1:8080/v1.0/'
swift_store_auth_address = 127.0.0.1:5000/v2.0/
# User to authenticate against the Swift authentication service
# If you use Swift authentication service, set it to 'account':'user'
# where 'account' is a Swift storage account and 'user'
# is a user in that account
swift_store_user = jdoe:jdoe
# Auth key for the user authenticating against the
# Swift authentication service
swift_store_key = a86850deb2742ec3cb41518e26aa2d89
# Container within the account that the account should use
# for storing images in Swift
swift_store_container = glance
# Do we create the container if it does not exist?
swift_store_create_container_on_put = False
# What size, in MB, should Glance start chunking image files
# and do a large object manifest in Swift? By default, this is
# the maximum object size in Swift, which is 5GB
swift_store_large_object_size = 5120
# When doing a large object manifest, what size, in MB, should
# Glance write chunks to Swift? This amount of data is written
# to a temporary disk buffer during the process of chunking
# the image file, and the default is 200MB
swift_store_large_object_chunk_size = 200
# Whether to use ServiceNET to communicate with the Swift storage servers.
# (If you aren't RACKSPACE, leave this False!)
#
# To use ServiceNET for authentication, prefix hostname of
# `swift_store_auth_address` with 'snet-'.
# Ex. https://example.com/v1.0/ -> https://snet-example.com/v1.0/
swift_enable_snet = False
# If set to True enables multi-tenant storage mode which causes Glance images
# to be stored in tenant specific Swift accounts.
#swift_store_multi_tenant = False
# A list of swift ACL strings that will be applied as both read and
# write ACLs to the containers created by Glance in multi-tenant
# mode. This grants the specified tenants/users read and write access
# to all newly created image objects. The standard swift ACL string
# formats are allowed, including:
# <tenant_id>:<username>
# <tenant_name>:<username>
# *:<username>
# Multiple ACLs can be combined using a comma separated list, for
# example: swift_store_admin_tenants = service:glance,*:admin
#swift_store_admin_tenants =
# The region of the swift endpoint to be used for single tenant. This setting
# is only necessary if the tenant has multiple swift endpoints.
#swift_store_region =
# If set to False, disables SSL layer compression of https swift requests.
# Setting to 'False' may improve performance for images which are already
# in a compressed format, eg qcow2. If set to True, enables SSL layer
# compression (provided it is supported by the target swift proxy).
#swift_store_ssl_compression = True
# The number of times a Swift download will be retried before the
# request fails
#swift_store_retry_get_count = 0
# ============ S3 Store Options =============================
# Address where the S3 authentication service lives
# Valid schemes are 'http://' and 'https://'
# If no scheme specified, default to 'http://'
s3_store_host = 127.0.0.1:8080/v1.0/
# User to authenticate against the S3 authentication service
s3_store_access_key = <20-char AWS access key>
# Auth key for the user authenticating against the
# S3 authentication service
s3_store_secret_key = <40-char AWS secret key>
# Container within the account that the account should use
# for storing images in S3. Note that S3 has a flat namespace,
# so you need a unique bucket name for your glance images. An
# easy way to do this is append your AWS access key to "glance".
# S3 buckets in AWS *must* be lowercased, so remember to lowercase
# your AWS access key if you use it in your bucket name below!
s3_store_bucket = <lowercased 20-char aws access key>glance
# Do we create the bucket if it does not exist?
s3_store_create_bucket_on_put = False
# When sending images to S3, the data will first be written to a
# temporary buffer on disk. By default the platform's temporary directory
# will be used. If required, an alternative directory can be specified here.
#s3_store_object_buffer_dir = /path/to/dir
# When forming a bucket url, boto will either set the bucket name as the
# subdomain or as the first token of the path. Amazon's S3 service will
# accept it as the subdomain, but Swift's S3 middleware requires it be
# in the path. Set this to 'path' or 'subdomain' - defaults to 'subdomain'.
#s3_store_bucket_url_format = subdomain
# ============ RBD Store Options =============================
# Ceph configuration file path
# If using cephx authentication, this file should
# include a reference to the right keyring
# in a client.<USER> section
#rbd_store_ceph_conf = /etc/ceph/ceph.conf
# RADOS user to authenticate as (only applicable if using cephx)
# should be `ha-datacenter`.
#vmware_datacenter_path = <None>
# Datastore associated with the datacenter (string value)
#vmware_datastore_name = <None>
# The number of times we retry on failures
# e.g., socket error, etc (integer value)
#vmware_api_retry_count = 10
# The interval used for polling remote tasks
# invoked on VMware ESX/VC server in seconds (integer value)
#vmware_task_poll_interval = 5
# Absolute path of the folder containing the images in the datastore
# (string value)
#vmware_store_image_dir = /openstack_glance
# Allow to perform insecure SSL requests to the target system (boolean value)
#vmware_api_insecure = False
# ============ Delayed Delete Options =============================
# Turn on/off delayed delete
delayed_delete = False
# Delayed delete time in seconds
scrub_time = 43200
# Directory that the scrubber will use to remind itself of what to delete
# Make sure this is also set in glance-scrubber.conf
scrubber_datadir = /var/lib/glance/scrubber
# =============== Quota Options ==================================
# The maximum number of image members allowed per image
#image_member_quota = 128
# The maximum number of image properties allowed per image
#image_property_quota = 128
# The maximum number of tags allowed per image
#image_tag_quota = 128
# The maximum number of locations allowed per image
#image_location_quota = 10
# Set a system wide quota for every user. This value is the total number
# of bytes that a user can use across all storage systems. A value of
# 0 means unlimited.
#user_storage_quota = 0
# =============== Image Cache Options =============================
# Base directory that the Image Cache uses
image_cache_dir = /var/lib/glance/image-cache/
# =============== Manager Options =================================
# DEPRECATED. TO BE REMOVED IN THE JUNO RELEASE.
#flavor=
[store_type_location_strategy]
# The scheme list to use to get store preference order. The scheme must be
# registered by one of the stores defined by the 'known_stores' config option.
# This option will be applied when you using 'store_type' option as image
# location strategy defined by the 'location_strategy' config option.
#store_type_preference =
glance-registry.conf
Configuration for the Image Service's registry, which stores the metadata about images, is
found in the glance-registry.conf file.
This file must be modified after installation.
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
#verbose = False
# Show debugging output in logs (sets DEBUG log level output)
#debug = False
# Address to bind the registry server
bind_host = 0.0.0.0
# Port to bind the registry server to
bind_port = 9191
# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
log_file = /var/log/glance/registry.log
# Backlog requests when creating socket
backlog = 4096
# TCP_KEEPIDLE value in seconds when creating socket.
# Not supported on OS X.
#tcp_keepidle = 600
# API to use for accessing data. Default value points to sqlalchemy
# package.
#data_api = glance.db.sqlalchemy.api
# Enable Registry API versions individually or simultaneously
#enable_v1_registry = True
#enable_v2_registry = True
# Limit the api to return `param_limit_max` items in a call to a container. If
# a larger `limit` query param is provided, it will be reduced to this value.
api_limit_max = 1000
# If a `limit` query param is not provided in an api request, it will
# default to `limit_param_default`
limit_param_default = 25
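The interaction between api_limit_max and limit_param_default described above can be sketched as follows (the helper name is illustrative, not Glance's actual code):

```python
API_LIMIT_MAX = 1000        # hard cap applied to any requested limit
LIMIT_PARAM_DEFAULT = 25    # used when no `limit` query param is given

def effective_limit(requested=None):
    # A missing limit falls back to the default; an oversized one is
    # reduced to the configured maximum.
    if requested is None:
        return LIMIT_PARAM_DEFAULT
    return min(requested, API_LIMIT_MAX)

print(effective_limit())      # → 25
print(effective_limit(5000))  # → 1000
```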
# Timeout before idle sql connections are reaped (integer value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
# Deprecated group/name - [DATABASE]/sql_min_pool_size
#min_pool_size = 1
# Maximum number of SQL connections to keep open in a pool
# (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = <None>
# Maximum db connection retries during startup.
# (setting -1 implies an infinite retry count) (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
#db_inc_retry_interval = True
# max seconds between db connection retries, if
# db_inc_retry_interval is enabled (integer value)
#db_max_retry_interval = 10
# maximum db connection retries before error is raised.
# (setting -1 implies an infinite retry count) (integer value)
#db_max_retries = 20
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
[paste_deploy]
# Name of the paste configuration file that defines the available pipelines
#config_file = glance-registry-paste.ini
# Partial name of a pipeline in your paste configuration file with the
# service name removed. For example, if your paste section name is
# [pipeline:glance-registry-keystone], you would configure the flavor below
# as 'keystone'.
#flavor=
glance-api-paste.ini
Configuration for the Image Service's API middleware pipeline is found in the glance-api-paste.ini file.
You should not need to modify this file.
# Use this pipeline for no auth or image caching - DEFAULT
[pipeline:glance-api]
pipeline = versionnegotiation unauthenticated-context rootapp
# Use this pipeline for image caching and no auth
[pipeline:glance-api-caching]
pipeline = versionnegotiation unauthenticated-context cache rootapp
# Use this pipeline for caching w/ management interface but no auth
[pipeline:glance-api-cachemanagement]
pipeline = versionnegotiation unauthenticated-context cache cachemanage rootapp
# Use this pipeline for keystone auth
[pipeline:glance-api-keystone]
pipeline = versionnegotiation authtoken context rootapp
# Use this pipeline for keystone auth with image caching
[pipeline:glance-api-keystone+caching]
pipeline = versionnegotiation authtoken context cache rootapp
# Use this pipeline for keystone auth with caching and cache management
[pipeline:glance-api-keystone+cachemanagement]
pipeline = versionnegotiation authtoken context cache cachemanage rootapp
# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-api-trusted-auth]
pipeline = versionnegotiation context rootapp
# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user and uses cache management
[pipeline:glance-api-trusted-auth+cachemanagement]
pipeline = versionnegotiation context cache cachemanage rootapp
[composite:rootapp]
paste.composite_factory = glance.api:root_app_factory
/: apiversions
/v1: apiv1app
/v2: apiv2app
[app:apiversions]
paste.app_factory = glance.api.versions:create_resource
[app:apiv1app]
paste.app_factory = glance.api.v1.router:API.factory
[app:apiv2app]
paste.app_factory = glance.api.v2.router:API.factory
[filter:versionnegotiation]
paste.filter_factory = glance.api.middleware.version_negotiation:VersionNegotiationFilter.factory
[filter:cache]
paste.filter_factory = glance.api.middleware.cache:CacheFilter.factory
[filter:cachemanage]
paste.filter_factory = glance.api.middleware.cache_manage:CacheManageFilter.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
delay_auth_decision = true
[filter:gzip]
paste.filter_factory = glance.api.middleware.gzip:GzipMiddleware.factory
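Conceptually, each pipeline above is a chain of WSGI filters wrapped around the root app, with the outermost filter listed first. A toy sketch of that composition (illustrative only, not Glance's actual middleware code):

```python
# Toy sketch of how a paste pipeline composes filters around the root
# app. The names mirror "versionnegotiation authtoken context rootapp".
def rootapp(request):
    return "handled " + request

def make_filter(name, nxt):
    # Each filter tags the request and passes it to the next callable.
    def call(request):
        return nxt(name + "(" + request + ")")
    return call

# Wrap the app right-to-left so the first-listed filter runs first.
pipeline = rootapp
for name in reversed(["versionnegotiation", "authtoken", "context"]):
    pipeline = make_filter(name, pipeline)

print(pipeline("GET /v2/images"))
# → handled context(authtoken(versionnegotiation(GET /v2/images)))
```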
glance-registry-paste.ini
The Image Service's middleware pipeline for its registry is found in the glance-registry-paste.ini file.
# Use this pipeline for no auth - DEFAULT
[pipeline:glance-registry]
pipeline = unauthenticated-context registryapp
# Use this pipeline for keystone auth
[pipeline:glance-registry-keystone]
pipeline = authtoken context registryapp
# Use this pipeline for authZ only. This means that the registry will treat a
# user as authenticated without making requests to keystone to reauthenticate
# the user.
[pipeline:glance-registry-trusted-auth]
pipeline = context registryapp
[app:registryapp]
paste.app_factory = glance.registry.api:API.factory
[filter:context]
paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
[filter:unauthenticated-context]
paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
glance-scrubber.conf
glance-scrubber is a utility for the Image Service that cleans up images that have been
deleted; its configuration is stored in the glance-scrubber.conf file.
Multiple instances of glance-scrubber can be run in a single deployment, but only one
of them can be designated as the cleanup_scrubber in the glance-scrubber.conf
file. The cleanup_scrubber coordinates other glance-scrubber instances by maintaining the master queue of images that need to be removed.
[DEFAULT]
# Show more verbose log output (sets INFO log level output)
#verbose = False
# Show debugging output in logs (sets DEBUG log level output)
#debug = False
# Log to this file. Make sure you do not set the same log file for both the API
# and registry servers!
#
# If `log_file` is omitted and `use_syslog` is false, then log messages are
# sent to stdout as a fallback.
log_file = /var/log/glance/scrubber.log
# Send logs to syslog (/dev/log) instead of to file specified by `log_file`
#use_syslog = False
# Should we run our own loop or rely on cron/scheduler to run us
daemon = False
# Loop time between checking for new items to schedule for delete
wakeup_time = 300
# Directory that the scrubber will use to remind itself of what to delete
# Make sure this is also set in glance-api.conf
scrubber_datadir = /var/lib/glance/scrubber
# Only one server in your deployment should be designated the cleanup host
cleanup_scrubber = False
# pending_delete items older than this time are candidates for cleanup
cleanup_scrubber_time = 86400
# Address to find the registry server for cleanups
registry_host = 0.0.0.0
# Port the registry server is listening on
registry_port = 9191
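The cleanup rule described above — pending_delete items older than cleanup_scrubber_time become candidates — can be sketched as follows (the helper name is illustrative):

```python
# Images queued as pending_delete become cleanup candidates once they
# have waited longer than cleanup_scrubber_time seconds.
CLEANUP_SCRUBBER_TIME = 86400  # from the sample configuration above

def is_cleanup_candidate(deleted_at, now):
    return (now - deleted_at) > CLEANUP_SCRUBBER_TIME

now = 1_000_000
print(is_cleanup_candidate(now - 90_000, now))  # → True
print(is_cleanup_candidate(now - 3_600, now))   # → False
```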
policy.json
The /etc/glance/policy.json file defines additional access controls that apply to the
Image Service.
{
    "context_is_admin": "role:admin",
    "default": "",
"add_image": "",
"delete_image": "",
"get_image": "",
"get_images": "",
"modify_image": "",
"publicize_image": "",
"copy_from": "",
"download_image": "",
"upload_image": "",
"delete_image_location": "",
"get_image_location": "",
"set_image_location": "",
"add_member": "",
"delete_member": "",
"get_member": "",
"get_members": "",
"modify_member": "",
"manage_image_cache": "role:admin",
"get_task": "",
"get_tasks": "",
"add_task": "",
"modify_task": ""
}
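A minimal sketch of how such a policy file can be read: an empty rule string means any authenticated user may perform the action, while "role:admin" restricts it to admins. The fragment below reuses rules from the sample above.

```python
import json

# Parse a fragment of the policy shown above and list admin-only actions.
policy = json.loads("""{
    "context_is_admin": "role:admin",
    "default": "",
    "download_image": "",
    "manage_image_cache": "role:admin"
}""")

admin_only = [action for action, rule in policy.items() if rule == "role:admin"]
print(admin_only)  # → ['context_is_admin', 'manage_image_cache']
```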
Table 7.26. New options
Option = default value
[keystone_authtoken] check_revocations_for_cached = False

Table 7.27. New default values
Option                                       Previous default value                          New default value
[DEFAULT] default_log_levels                 amqp=WARN, amqplib=WARN, boto=WARN,             amqp=WARN, amqplib=WARN, boto=WARN,
                                             qpid=WARN, sqlalchemy=WARN, suds=INFO,          qpid=WARN, sqlalchemy=WARN, suds=INFO,
                                             iso8601=WARN,                                   oslo.messaging=INFO, iso8601=WARN,
                                             requests.packages.urllib3.connectionpool=WARN  requests.packages.urllib3.connectionpool=WARN
[DEFAULT] user_storage_quota
[DEFAULT] workers
[database] sqlite_db                         glance.sqlite                                   oslo.sqlite
[keystone_authtoken] revocation_cache_time   300                                             10
Table 7.28. Deprecated options
Deprecated option                            New option
[DEFAULT] swift_store_auth_address           [glance_store] swift_store_auth_address
[DEFAULT] filesystem_store_metadata_file     [glance_store] filesystem_store_metadata_file
[DEFAULT] swift_store_key                    [glance_store] swift_store_key
[DEFAULT] filesystem_store_datadir           [glance_store] filesystem_store_datadir
[DEFAULT] known_stores                       [glance_store] stores
[DEFAULT] default_store                      [glance_store] default_store
[DEFAULT] swift_store_user                   [glance_store] swift_store_user
[DEFAULT] filesystem_store_datadirs          [glance_store] filesystem_store_datadirs
8. Networking
Table of Contents
Networking configuration options ............................................................................. 410
Log files used by Networking .................................................................................... 449
Networking sample configuration files ........................................................................ 449
New, updated and deprecated options in Juno for OpenStack Networking ................. 466
This chapter explains the OpenStack Networking configuration options. For installation prerequisites, steps, and use cases, see the OpenStack Installation Guide for your distribution
(docs.openstack.org) and Cloud Administrator Guide.
Description
[DEFAULT]
admin_password = None
admin_tenant_name = None
admin_user = None
agent_down_time = 75
api_workers = 0
auth_ca_cert = None
auth_insecure = False
auth_region = None
auth_strategy = keystone
auth_url = None
base_mac = fa:16:3e:00:00:00
(StrOpt) The base MAC address Neutron will use for VIFs
bind_host = 0.0.0.0
bind_port = 9696
ca_certs = None
(StrOpt) CA certificates
check_child_processes = False
check_child_processes_action = respawn
check_child_processes_interval = 60
core_plugin = None
ctl_cert = None
Description
ctl_privkey = None
dhcp_agent_notification = True
dhcp_agents_per_network = 1
dhcp_confs = $state_path/dhcp
dhcp_delete_namespaces = False
dhcp_domain = openstacklocal
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_lease_duration = 86400
endpoint_type = publicURL
force_gateway_on_subnet = True
host = localhost
interface_driver = None
ip_lib_force_root = False
lock_path = None
mac_generation_retries = 16
max_allowed_address_pair = 10
max_dns_nameservers = 5
max_fixed_ips_per_port = 5
max_subnet_host_routes = 20
memcached_servers = None
periodic_fuzzy_delay = 5
(IntOpt) Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0)
periodic_interval = 40
report_interval = 300
root_helper = sudo
state_path = /var/lib/neutron
[AGENT]
root_helper = sudo
[PROXY]
admin_password = None
admin_tenant_name = None
admin_user = None
auth_region = None
auth_strategy = keystone
auth_url = None
[heleos]
admin_password = None
Description
[keystone_authtoken]
memcached_servers = None
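The base_mac option in the [DEFAULT] table above seeds VIF MAC addresses, and mac_generation_retries bounds retries on collision. A hedged sketch of the derivation scheme — keep the non-zero prefix octets, randomize the zeroed tail (the helper is illustrative, not Neutron's actual implementation):

```python
import random

def generate_mac(base_mac="fa:16:3e:00:00:00", rng=None):
    # Keep the fixed (non-"00") prefix octets; randomize the zeroed tail.
    rng = rng or random.Random()
    octets = base_mac.split(":")
    prefix = [o for o in octets if o != "00"]
    tail = ["%02x" % rng.randint(0, 255) for _ in range(6 - len(prefix))]
    return ":".join(prefix + tail)

mac = generate_mac(rng=random.Random(42))
print(mac.startswith("fa:16:3e:"), len(mac))  # → True 17
```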
Networking plug-ins
OpenStack Networking introduces the concept of a plug-in, which is a back-end implementation of the OpenStack Networking API. A plug-in can use a variety of technologies to implement the logical API requests. Some OpenStack Networking plug-ins might use basic Linux VLANs and IP tables, while others might use more advanced technologies, such as L2-inL3 tunneling or OpenFlow. These sections detail the configuration options for the various
plug-ins.
Description
[NOVA]
node_override_vif_802.1qbg =
node_override_vif_802.1qbh =
node_override_vif_binding_failed =
node_override_vif_bridge =
node_override_vif_distributed =
node_override_vif_dvs =
node_override_vif_hostdev =
node_override_vif_hw_veb =
node_override_vif_hyperv =
node_override_vif_ivs =
node_override_vif_midonet =
node_override_vif_mlnx_direct =
node_override_vif_other =
node_override_vif_ovs =
node_override_vif_unbound =
node_override_vif_vrouter =
vif_type = ovs
[RESTPROXY]
add_meta_server_route = True
auto_sync_on_failure = True
cache_connections = True
consistency_interval = 60
(IntOpt) Time between verifications that the backend controller database is consistent with Neutron. (0 to disable)
neutron_id = neutron-shock
no_ssl_validation = False
server_auth = None
server_ssl = True
server_timeout = 10
servers = localhost:8800
(ListOpt) A comma-separated list of Big Switch or Floodlight servers and port numbers. The plug-in proxies the requests to the Big Switch/Floodlight server, which performs the networking configuration. Only one server is needed per deployment, but you may wish to deploy multiple servers to support failover.
ssl_cert_directory = /etc/neutron/plugins/bigswitch/ssl
ssl_sticky = True
sync_data = False
thread_pool_size = 4
[RESTPROXYAGENT]
integration_bridge = br-int
polling_interval = 5
virtual_switch_type = ovs
[ROUTER]
max_router_rules = 200
tenant_default_router_rule = ['*:any:any:permit']
(MultiStrOpt) The default router rules installed in new tenant routers. Repeat the config option for each rule. Format is <tenant>:<source>:<destination>:<action>. Use an * to specify the default for all tenants.
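As an illustration of the rule format, a deployment could install two default rules by repeating the option; the tenant name and subnet below are hypothetical:

```ini
[ROUTER]
# '*' applies the rule to all tenants.
tenant_default_router_rule = *:any:any:permit
# Hypothetical tenant-specific rule: deny traffic sourced from 10.10.0.0/24.
tenant_default_router_rule = demo:10.10.0.0/24:any:deny
```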
Description
[PHYSICAL_INTERFACE]
physical_interface = eth0
[SWITCH]
address =
ostype = NOS
password =
username =
Description
[CISCO]
model_class = neutron.plugins.cisco.models.virt_phy_sw_v2.VirtualPhysicalSwitchModelV2
(StrOpt) Model class.
nexus_l3_enable = False
provider_vlan_auto_create = True
provider_vlan_auto_trunk = True
provider_vlan_name_prefix = p-
svi_round_robin = False
vlan_name_prefix = q-
[CISCO_N1K]
bridge_mappings =
default_network_profile = default_network_profile
default_policy_profile = service_profile
enable_tunneling = True
http_pool_size = 4
http_timeout = 15
integration_bridge = br-int
network_node_policy_profile = dhcp_pp
network_vlan_ranges = vlan:1:4095
poll_duration = 60
restrict_policy_profiles = False
tenant_network_type = local
tunnel_bridge = br-tun
vxlan_id_ranges = 5000:10000
[cisco_csr_ipsec]
status_check_interval = 60
[general]
backlog_processing_interval = 10
cfg_agent_down_time = 60
default_security_group = mgmt_sec_grp
ensure_nova_running = True
l3_admin_tenant = L3AdminTenant
management_network = osn_mgmt_nw
(StrOpt) Name of management network for device configuration. Default value is osn_mgmt_nw
service_vm_config_path = /opt/stack/data/neutron/cisco/config_drive
templates_path = /opt/stack/data/neutron/cisco/templates
[hosting_devices]
csr1kv_booting_time = 420
csr1kv_cfgagent_router_driver = neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver.CSR1kvRoutingDriver
(StrOpt) Config agent driver for CSR1kv.
csr1kv_configdrive_template = csr1kv_cfg_template
csr1kv_device_driver = neutron.plugins.cisco.l3.hosting_device_drivers.csr1kv_hd_driver.CSR1kvHostingDeviceDriver
(StrOpt) Hosting device driver for CSR1kv.
csr1kv_flavor = 621
csr1kv_image = csr1kv_openstack_img
csr1kv_password = cisco
csr1kv_plugging_driver = neutron.plugins.cisco.l3.plugging_drivers.n1kv_trunking_driver.N1kvTrunkingPlugDriver
(StrOpt) Plugging driver for CSR1kv.
csr1kv_username = stack
[ml2_cisco]
svi_round_robin = False
vlan_name_prefix = q-
[n1kv]
management_port_profile = osn_mgmt_pp
t1_network_profile = osn_t1_np
t1_port_profile = osn_t1_pp
t2_network_profile = osn_t2_np
t2_port_profile = osn_t2_pp
Description
[cfg_agent]
device_connection_timeout = 30
hosting_device_dead_timeout = 300
routing_svc_helper_class = neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper.RoutingServiceHelper
(StrOpt) Path of the routing service helper class.
rpc_loop_interval = 10
(IntOpt) Interval, in seconds, at which the process_services() loop executes. This is when the config agent lets each service helper process its Neutron resources.
Description
[AGENT]
enable_metrics_collection = False
local_network_vswitch = private
metrics_max_retries = 100
(IntOpt) Specifies the maximum number of retries to enable Hyper-V's port metrics collection. The agent will try to enable the feature once every polling_interval period, for at most metrics_max_retries, or until it succeeds.
physical_network_vswitch_mappings =
polling_interval = 2
(IntOpt) The number of seconds the agent will wait between polling for local device changes.
[HYPERV]
network_vlan_ranges =
(ListOpt) List of
<physical_network>:<vlan_min>:<vlan_max> or
<physical_network>
tenant_network_type = local
[hyperv]
force_hyperv_utils_v1 = False
Description
[heleos]
admin_username = admin
async_requests = True
dummy_utif_id = None
esm_mgmt = None
inband_id = None
mgmt_id = None
oob_id = None
resource_pool_id = default
router_image = None
Description
[SDNVE]
base_url = /one/nb/v2/
controller_ips = 127.0.0.1
default_tenant_type = OVERLAY
format = json
info = sdnve_info_string
integration_bridge = None
interface_mappings =
(ListOpt) List of
<physical_network_name>:<interface_name> mappings.
of_signature = SDNVE-OF
out_of_band = True
overlay_signature = SDNVE-OVERLAY
password = admin
port = 8443
reset_bridge = True
use_fake_controller = False
userid = admin
[SDNVE_AGENT]
polling_interval = 2
root_helper = sudo
rpc = True
Description
[LINUX_BRIDGE]
physical_interface_mappings =
[VLANS]
network_vlan_ranges =
(ListOpt) List of
<physical_network>:<vlan_min>:<vlan_max> or
<physical_network>
tenant_network_type = local
[VXLAN]
enable_vxlan = False
l2_population = False
local_ip =
tos = None
ttl = None
vxlan_group = 224.0.0.1
Description
[ESWITCH]
backoff_rate = 2
(IntOpt) backoff rate multiplier for waiting period between retries for request to daemon, i.e. value of 2 will
double the request timeout each retry
daemon_endpoint = tcp://127.0.0.1:60001
physical_interface_mappings =
request_timeout = 3000
[MLNX]
network_vlan_ranges = default:1:1000
(ListOpt) List of
<physical_network>:<vlan_min>:<vlan_max> or
<physical_network>
physical_network_type = eth
physical_network_type_mappings =
(ListOpt) List of
<physical_network>:<physical_network_type> with
physical_network_type is either eth or ib
tenant_network_type = vlan
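For example, a minimal [MLNX] section combining these options might look like the following sketch; the physical network names are placeholders:

```ini
[MLNX]
network_vlan_ranges = default:1:1000
# Map each physical network to its type, either eth or ib.
physical_network_type_mappings = default:eth,physnet-ib:ib
tenant_network_type = vlan
```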
Description
[META]
default_flavor =
default_l3_flavor =
extension_map =
l3_plugin_list =
plugin_list =
rpc_flavor =
supported_extension_aliases =
Description
[ml2]
extension_drivers =
mechanism_drivers =
(ListOpt) An ordered list of networking mechanism driver entrypoints to be loaded from the
neutron.ml2.mechanism_drivers namespace.
tenant_network_types = local
Description
[ml2_type_flat]
flat_networks =
Description
[ml2_type_gre]
tunnel_id_ranges =
Description
[ml2_type_vlan]
network_vlan_ranges =
(ListOpt) List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network>, specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks.
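For instance, the following sketch makes VLANs 1000-2999 on physnet1 available for tenant allocation, while physnet2 carries only provider networks; both names are illustrative:

```ini
[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999,physnet2
```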
Description
[ml2_type_vxlan]
vni_ranges =
vxlan_group = None
Description
[ml2_arista]
eapi_host =
eapi_password =
eapi_username =
region_name = RegionOne
(StrOpt) Authentication with Keystone is performed by EOS. This is optional. If not set, a value of "RegionOne" is assumed.
sync_interval = 180
use_fqdn = True
Description
[l3_arista]
l3_sync_interval = 180
(IntOpt) Sync interval in seconds between the L3 Service plug-in and EOS. This interval defines how often the synchronization is performed. This is an optional field. If not set, a value of 180 seconds is assumed.
mlag_config = False
primary_l3_host =
primary_l3_host_password =
primary_l3_host_username =
secondary_l3_host =
use_vrf = False
Description
[NOVA]
node_override_vif_802.1qbg =
node_override_vif_802.1qbh =
node_override_vif_binding_failed =
node_override_vif_bridge =
node_override_vif_distributed =
node_override_vif_dvs =
node_override_vif_hostdev =
node_override_vif_hw_veb =
node_override_vif_hyperv =
node_override_vif_ivs =
node_override_vif_midonet =
node_override_vif_mlnx_direct =
node_override_vif_other =
node_override_vif_ovs =
node_override_vif_unbound =
node_override_vif_vrouter =
vif_type = ovs
[RESTPROXY]
add_meta_server_route = True
auto_sync_on_failure = True
cache_connections = True
consistency_interval = 60
(IntOpt) Time between verifications that the backend controller database is consistent with Neutron. (0 to disable)
neutron_id = neutron-shock
no_ssl_validation = False
server_auth = None
server_ssl = True
server_timeout = 10
servers = localhost:8800
(ListOpt) A comma-separated list of Big Switch or Floodlight servers and port numbers. The plug-in proxies the requests to the Big Switch/Floodlight server, which performs the networking configuration. Only one server is needed per deployment, but you may wish to deploy multiple servers to support failover.
ssl_cert_directory = /etc/neutron/plugins/bigswitch/ssl
ssl_sticky = True
sync_data = False
thread_pool_size = 4
[RESTPROXYAGENT]
integration_bridge = br-int
polling_interval = 5
virtual_switch_type = ovs
[ROUTER]
max_router_rules = 200
tenant_default_router_rule = ['*:any:any:permit']
(MultiStrOpt) The default router rules installed in new tenant routers. Repeat the config option for each rule. Format is <tenant>:<source>:<destination>:<action>. Use an * to specify the default for all tenants.
Description
[ml2_brocade]
address =
ostype = NOS
osversion = 4.0.0
password = password
physical_networks =
rbridge_id = 1
username = admin
Description
[DEFAULT]
apic_system_id = openstack
[ml2_cisco]
managed_physical_network = None
[ml2_cisco_apic]
apic_agent_poll_interval = 2
apic_agent_report_interval = 30
apic_app_profile_name = ${apic_system_id}_app
apic_domain_name = ${apic_system_id}
apic_entity_profile = ${apic_system_id}_entity_profile
apic_function_profile = ${apic_system_id}_function_profile
apic_host_uplink_ports =
apic_hosts =
apic_lacp_profile = ${apic_system_id}_lacp_profile
apic_name_mapping = use_name
apic_node_profile = ${apic_system_id}_node_profile
apic_password = None
apic_sync_interval = 0
apic_use_ssl = True
apic_username = None
apic_vlan_ns_name = ${apic_system_id}_vlan_ns
apic_vlan_range = 2:4093
apic_vpc_pairs =
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
(StrOpt) Set up the root helper as rootwrap or sudo.
Description
[ml2_fslsdn]
crd_api_insecure = False
crd_auth_strategy = keystone
crd_auth_url = http://127.0.0.1:5000/v2.0/
crd_ca_certificates_file = None
crd_password = password
crd_region_name = RegionOne
crd_tenant_name = service
crd_url = http://127.0.0.1:9797
crd_url_timeout = 30
crd_user_name = crd
Description
[ESWITCH]
vnic_type = mlnx_direct
Description
[ml2_odl]
password = None
session_timeout = 30
timeout = 10
url = None
username = None
Description
[DEFAULT]
ofp_listen_host =
ofp_ssl_listen_port = 6633
ofp_tcp_listen_port = 6633
[AGENT]
dont_fragment = True
(BoolOpt) Set or un-set the don't fragment (DF) bit on outgoing IP packets carrying GRE/VXLAN tunnels.
get_datapath_retry_times = 60
physical_interface_mappings =
Description
[l2pop]
agent_boot_time = 180
Description
[ml2_ncs]
password = None
timeout = 10
url = None
username = None
Description
[ml2_sriov]
agent_required = False
Description
[MIDONET]
midonet_host_uuid_path = /etc/midolman/host_uuid.properties
midonet_uri = http://localhost:8080/midonet-api
mode = dev
password = passw0rd
project_id = 77777777-7777-7777-7777-777777777777
provider_router_id = None
username = admin
Description
[OFC]
api_max_attempts = 3
cert_file = None
driver = trema
enable_packet_filter = True
host = 127.0.0.1
insecure_ssl = False
key_file = None
path_prefix =
port = 8888
use_ssl = False
[PROVIDER]
default_router_provider = l3-agent
[fwaas]
driver =
Description
[RESTPROXY]
auth_resource =
base_uri = /
default_floatingip_quota = 254
default_net_partition_name = OpenStackDefaultNetPartition
(StrOpt) Default network partition in which VSD will orchestrate network resources using OpenStack.
organization = system
server = localhost:8800
serverauth = username:password
serverssl = False
[SYNCMANAGER]
enable_sync = False
(BoolOpt) The Nuage plug-in will sync resources between OpenStack and VSD.
sync_interval = 0
Description
[AGENT]
integration_bridge = br-int
[nvsd]
nvsd_ip = 127.0.0.1
nvsd_passwd = oc123
nvsd_port = 8082
nvsd_retries = 0
nvsd_user = ocplugin
request_timeout = 30
Description
[CONTRAIL]
api_server_ip = 127.0.0.1
api_server_port = 8082
Description
[DEFAULT]
ovs_integration_bridge = br-int
ovs_use_veth = False
ovs_vsctl_timeout = 10
[AGENT]
arp_responder = False
dont_fragment = True
(BoolOpt) Set or un-set the don't fragment (DF) bit on outgoing IP packets carrying GRE/VXLAN tunnels.
enable_distributed_routing = False
l2_population = False
minimize_polling = True
ovsdb_monitor_respawn_interval = 30
(IntOpt) The number of seconds to wait before respawning the ovsdb monitor after losing communication with it.
tunnel_types =
veth_mtu = None
vxlan_udp_port = 4789
[CISCO_N1K]
local_ip = 10.0.0.3
[OVS]
bridge_mappings =
enable_tunneling = False
int_peer_patch_port = patch-tun
integration_bridge = br-int
local_ip =
network_vlan_ranges =
(ListOpt) List of
<physical_network>:<vlan_min>:<vlan_max> or
<physical_network>.
tenant_network_type = local
tun_peer_patch_port = patch-int
tunnel_bridge = br-tun
tunnel_id_ranges =
tunnel_type =
use_veth_interconnection = False
Description
[plumgriddirector]
director_server = localhost
director_server_port = 8080
driver = neutron.plugins.plumgrid.drivers.plumlib.Plumlib
password = password
servertimeout = 5
username = username
Description
[DEFAULT]
wsapi_host =
wsapi_port = 8080
[OVS]
openflow_rest_api = 127.0.0.1:8080
ovsdb_interface = None
ovsdb_ip = None
ovsdb_port = 6634
tunnel_interface = None
tunnel_ip = None
tunnel_key_max = 16777215
tunnel_key_min = 1
Description
[SRIOV_NIC]
exclude_devices =
(ListOpt) Semicolon-separated list of virtual functions (BDF format) to exclude from network_device. The network_device in the mapping should appear in the physical_device_mappings list.
physical_device_mappings =
Description
[DEFAULT]
default_interface_name = breth0
default_l2_gw_service_uuid = None
default_l3_gw_service_uuid = None
default_service_cluster_uuid = None
default_tz_uuid = None
http_timeout = 75
nsx_controllers = None
nsx_password = admin
nsx_user = admin
redirects = 2
retries = 2
[ESWITCH]
retries = 3
[NSX]
agent_mode = agent
concurrent_connections = 10
default_transport_type = stt
max_lp_per_bridged_ls = 5000
max_lp_per_overlay_ls = 256
metadata_mode = access_network
This mode is only useful if running on a host that does not support namespaces; otherwise, access_network should be used.
nsx_gen_timeout = -1
replication_mode = service
(StrOpt) The default option leverages service nodes to perform packet replication, though one could set this to 'source' to perform replication locally. This is useful if one does not want to deploy service node(s). It must be set to 'service' to leverage distributed routers.
[NSX_DHCP]
default_lease_time = 43200
domain_name = openstacklocal
extra_domain_name_servers =
[NSX_LSN]
sync_on_missing_data = False
(BoolOpt) Pull LSN information from NSX in case it is missing from the local data store. This is useful to rebuild the
local store in case of server recovery.
[NSX_METADATA]
metadata_server_address = 127.0.0.1
metadata_server_port = 8775
metadata_shared_secret =
[NSX_SYNC]
always_read_status = False
(BoolOpt) Always read operational status from backend on show operations. Enabling this option might slow
down the system.
max_random_sync_delay = 0
min_chunk_size = 500
min_sync_req_delay = 1
state_sync_interval = 10
(IntOpt) Interval in seconds between runs of the state synchronization task. Set it to 0 to disable it
[vcns]
datacenter_moid = None
datastore_id = None
deployment_container_id = None
external_network = None
manager_uri = None
password = default
resource_pool_id = None
task_status_check_interval = 2000
user = admin
Configure RabbitMQ
OpenStack Oslo RPC uses RabbitMQ by default. Use these options to configure the RabbitMQ message system. The rpc_backend option is optional as long as RabbitMQ is the default messaging system. However, if it is included in the configuration, you must set it to
neutron.openstack.common.rpc.impl_kombu.
rpc_backend=neutron.openstack.common.rpc.impl_kombu
Use these options to configure the RabbitMQ messaging system. You can configure messaging communication for different installation scenarios, tune retries
for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the notification_driver option to
neutron.openstack.common.notifier.rpc_notifier in the neutron.conf file:
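Putting these pieces together, a minimal neutron.conf fragment for RabbitMQ might look like the following sketch; the host name and credentials are placeholders:

```ini
[DEFAULT]
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = RABBIT_PASS
# Required only to monitor notifications through RabbitMQ.
notification_driver = neutron.openstack.common.notifier.rpc_notifier
```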
Description
[DEFAULT]
kombu_reconnect_delay = 1.0
(FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_ssl_ca_certs =
kombu_ssl_certfile =
kombu_ssl_keyfile =
kombu_ssl_version =
rabbit_ha_queues = False
rabbit_host = localhost
rabbit_hosts = $rabbit_host:$rabbit_port
rabbit_login_method = AMQPLAIN
rabbit_max_retries = 0
(IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest
rabbit_port = 5672
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_virtual_host = /
Configure Qpid
Use these options to configure the Qpid messaging system for OpenStack Oslo RPC. Qpid is
not the default messaging system, so you must enable it by setting the rpc_backend option in the neutron.conf file:
rpc_backend=neutron.openstack.common.rpc.impl_qpid
This critical option points the compute nodes to the Qpid broker (server). Set the
qpid_hostname option to the host name where the broker runs in the neutron.conf
file.
Note
The --qpid_hostname option accepts a host name or IP address value.
qpid_hostname=hostname.example.com
If the Qpid broker listens on a port other than the AMQP default of 5672, you must set the
qpid_port option to that value:
qpid_port=12345
If you configure the Qpid broker to require authentication, you must add a user name and
password to the configuration:
qpid_username=username
qpid_password=password
By default, TCP is used as the transport. To enable SSL, set the qpid_protocol option:
qpid_protocol=ssl
Use these additional options to configure the Qpid messaging driver for OpenStack Oslo
RPC. These options are used infrequently.
Description
[DEFAULT]
qpid_heartbeat = 60
qpid_hostname = localhost
qpid_hosts = $qpid_hostname:$qpid_port
qpid_password =
qpid_port = 5672
qpid_protocol = tcp
qpid_receiver_capacity = 1
qpid_sasl_mechanisms =
qpid_tcp_nodelay = True
qpid_topology_version = 1
qpid_username =
Configure ZeroMQ
Use these options to configure the ZeroMQ messaging system for OpenStack Oslo RPC. ZeroMQ is not the default messaging system, so you must enable it by setting the rpc_backend option in the neutron.conf file:
rpc_backend=neutron.openstack.common.rpc.impl_zmq
Description
[DEFAULT]
rpc_zmq_bind_address = *
rpc_zmq_contexts = 1
rpc_zmq_host = localhost
rpc_zmq_ipc_dir = /var/run/openstack
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
(StrOpt) MatchMaker driver.
rpc_zmq_port = 9501
rpc_zmq_topic_backlog = None
Configure messaging
Use these common options to configure the RabbitMQ, Qpid, and ZeroMQ messaging drivers:
Description
[DEFAULT]
matchmaker_heartbeat_freq = 300
matchmaker_heartbeat_ttl = 600
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rpc_cast_timeout = 30
rpc_conn_pool_size = 30
rpc_response_timeout = 60
rpc_thread_pool_size = 64
rpc_workers = 0
[AGENT]
rpc_support_old_agents = False
Description
[matchmaker_redis]
host = 127.0.0.1
password = None
port = 6379
[matchmaker_ring]
ringfile = /etc/oslo/matchmaker_ring.json
Description
[DEFAULT]
amqp_auto_delete = False
amqp_durable_queues = False
control_exchange = openstack
notification_driver = []
notification_topics = notifications
transport_url = None
Agent
Use the following options to alter agent-related settings.
Description
[DEFAULT]
external_pids = $state_path/external/pids
network_device_mtu = None
API
Use the following options to alter API-related settings.
Description
[DEFAULT]
allow_bulk = True
allow_pagination = False
allow_sorting = False
api_extensions_path =
api_paste_config = api-paste.ini
backlog = 4096
max_header_line = 16384
max_request_body_size = 114688
pagination_max_limit = -1
(StrOpt) The maximum number of items returned in a single response. A value of 'infinite' or a negative integer means no limit.
retry_until_window = 30
run_external_periodic_tasks = True
service_plugins =
tcp_keepidle = 600
[service_providers]
service_provider = []
(MultiStrOpt) Defines providers for advanced services using the format: <service_type>:<name>:<driver>[:default]
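As an example of this format, the LBaaS reference driver is commonly registered as follows; verify the driver path against your installed tree before use:

```ini
[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
```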
Token authentication
Use the following options to alter token authentication settings.
Description
[keystone_authtoken]
admin_password = None
admin_tenant_name = admin
admin_token = None
admin_user = None
auth_admin_prefix =
auth_host = 127.0.0.1
auth_port = 35357
(IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https
auth_uri = None
auth_version = None
cache = None
cafile = None
certfile = None
check_revocations_for_cached = False
delay_auth_decision = False
enforce_token_bind = permissive
(StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding; "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not; "strict", like "permissive" but the token is rejected if the bind type is unknown; "required", meaning some form of token binding is needed to be allowed; or, finally, the name of a binding method that must be present in tokens.
hash_algorithms = md5
http_connect_timeout = None
http_request_max_retries = 3
identity_uri = None
include_service_catalog = True
(BoolOpt) (optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for the service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False
keyfile = None
memcache_secret_key = None
memcache_security_strategy = None
(StrOpt) (optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization.
revocation_cache_time = 10
signing_dir = None
token_cache_time = 300
(IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens
for a configurable duration (in seconds). Set to -1 to disable caching completely.
Compute
Use the following options to alter Compute-related settings.
Description
[DEFAULT]
notify_nova_on_port_data_changes = True
notify_nova_on_port_status_changes = True
nova_admin_auth_url = http://localhost:5000/v2.0
nova_admin_password = None
nova_admin_tenant_id = None
nova_admin_username = None
nova_api_insecure = False
nova_ca_certificates_file = None
nova_client_cert =
nova_client_priv_key =
nova_region_name = None
nova_url = http://127.0.0.1:8774/v2
send_events_interval = 2
Database
Use the following options to alter Database-related settings.
Description
[database]
backend = sqlalchemy
connection = None
connection_debug = 0
connection_trace = False
db_inc_retry_interval = True
db_max_retries = 20
(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10
(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
db_retry_interval = 1
idle_timeout = 3600
max_overflow = None
max_pool_size = None
max_retries = 10
min_pool_size = 1
mysql_sql_mode = TRADITIONAL
pool_timeout = None
retry_interval = 10
slave_connection = None
(StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite
sqlite_synchronous = True
use_db_reconnect = False
Logging
Use the following options to alter debug settings.
Description
[DEFAULT]
backdoor_port = None
disable_process_locking = False
DHCP agent
Use the following options to alter DHCP agent-related settings.
Description
[DEFAULT]
dnsmasq_config_file =
dnsmasq_dns_servers = None
dnsmasq_lease_max = 16777216
enable_isolated_metadata = False
enable_metadata_network = False
num_sync_threads = 4
resync_interval = 5
use_namespaces = True
Description
[DEFAULT]
dvr_base_mac = fa:16:3f:00:00:00
(StrOpt) The base mac address used for unique DVR instances by Neutron
router_distributed = False
Description
[heleoslb]
admin_password = None
admin_username = None
async_requests = None
dummy_utif_id = None
esm_mgmt = None
inband_id = None
lb_flavor = small
lb_image = None
mgmt_id = None
oob_id = None
resource_pool_id = None
sync_interval = 60
Firewall-as-a-Service driver
Use the following options in the fwaas_driver.ini file for the FWaaS driver.
Description
[fwaas]
enabled = False
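To turn the feature on, a typical fwaas_driver.ini sketch enables the service and points it at a driver; the iptables driver path shown here is the usual reference driver, but confirm it against your installed tree:

```ini
[fwaas]
enabled = True
driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver
```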
Description
[DEFAULT]
ra_confs = $state_path/ra
L3 agent
Use the following options in the l3_agent.ini file for the L3 agent.
Description
[DEFAULT]
agent_mode = legacy
allow_automatic_l3agent_failover = False
enable_metadata_proxy = True
external_network_bridge = br-ex
gateway_external_network_id =
ha_confs_path = $state_path/ha_confs
ha_vrrp_advert_int = 2
ha_vrrp_auth_password = None
ha_vrrp_auth_type = PASS
handle_internal_only_routers = True
l3_ha = False
l3_ha_net_cidr = 169.254.192.0/18
max_l3_agents_per_router = 3
min_l3_agents_per_router = 2
router_id =
send_arp_for_ha = 3
Load-Balancer-as-a-Service agent
Use the following options in the lbaas_agent.ini file for the LBaaS agent.
Description
[DEFAULT]
device_driver = ['neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver']
(MultiStrOpt) Drivers used to manage load-balancing devices.
loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler
(StrOpt) Driver to use for scheduling a pool to a default loadbalancer agent.
Description
[haproxy]
loadbalancer_state_path = $state_path/lbaas
send_gratuitous_arp = 3
user_group = nogroup
Description
[netscaler_driver]
netscaler_ncc_password = None
netscaler_ncc_uri = None
netscaler_ncc_username = None
Description
[radware]
actions_to_skip = setup_l2_l3
(ListOpt) List of actions that are not pushed to the completion queue.
ha_secondary_address = None
l4_action_name = BaseCreate
l4_workflow_name = openstack_l4
service_adc_type = VA
service_adc_version =
service_cache = 20
service_compression_throughput = 100
service_ha_pair = False
service_isl_vlan = -1
service_resource_pool_ids =
service_session_mirroring_enabled = False
service_ssl_throughput = 100
service_throughput = 1000
vdirect_address = None
vdirect_password = radware
vdirect_user = vDirect
Logging
Use the following options to alter logging settings.
Description
[DEFAULT]
debug = False
(BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
fatal_deprecations = False
log_config_append = None
log_dir = None
(StrOpt) (Optional) The base directory used for relative --log-file paths.
log_file = None
(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format = None
    (StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and logging_default_format_string instead.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
    (StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
publish_errors = False
syslog_log_facility = LOG_USER
use_ssl = False
use_stderr = True
use_syslog = False
use_syslog_rfc_format = False
verbose = False
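For example, a neutron.conf logging stanza used while troubleshooting might look like this sketch (the log directory is illustrative):

```ini
[DEFAULT]
# Log at DEBUG rather than the default WARNING level
debug = True
verbose = True
# Each binary writes to /var/log/neutron/{binary_name}.log
log_dir = /var/log/neutron
use_syslog = False
```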
Metadata Agent
Use the following options in the metadata_agent.ini file for the Metadata agent.
[DEFAULT]
meta_flavor_driver_mappings = None
    (StrOpt) Mapping between flavor and LinuxInterfaceDriver. It is specific to MetaInterfaceDriver used with admin_user, admin_password, admin_tenant_name, admin_url, auth_strategy, auth_region and endpoint_type.
metadata_backlog = 4096
metadata_port = 9697
metadata_proxy_shared_secret =
metadata_proxy_socket = $state_path/metadata_proxy
metadata_workers = 2
nova_metadata_insecure = False
nova_metadata_ip = 127.0.0.1
nova_metadata_port = 8775
nova_metadata_protocol = http
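The proxy shared secret must agree with the one the Compute metadata service is configured with. A sketch, using an assumed Nova API address and a placeholder secret:

```ini
[DEFAULT]
# Assumed address of the host running the Nova metadata API (example value)
nova_metadata_ip = 192.0.2.10
nova_metadata_port = 8775
# Must match neutron_metadata_proxy_shared_secret in nova.conf
metadata_proxy_shared_secret = CHANGE_ME
```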
Metering Agent
Use the following options in the metering_agent.ini file for the Metering agent.
[DEFAULT]
driver = neutron.services.metering.drivers.noop.noop_driver.NoopMeteringDriver
    (StrOpt) Metering driver.
measure_interval = 30
[AGENT]
report_interval = 30
    (FloatOpt) Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time.
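The default driver above discards all measurements; to actually meter traffic with iptables, a metering_agent.ini along these lines can be used (driver path as shipped in the Juno Neutron tree):

```ini
[DEFAULT]
# Replace the no-op driver with the iptables-based one
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
# Sample traffic counters every 30 seconds
measure_interval = 30

[AGENT]
report_interval = 30
```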
Policy
Use the following options in the neutron.conf file to change policy settings.
[DEFAULT]
allow_overlapping_ips = False
policy_file = policy.json
Quotas
Use the following options in the neutron.conf file for the quota system.
[DEFAULT]
max_routes = 30
[QUOTAS]
default_quota = -1
quota_driver = neutron.db.quota_db.DbQuotaDriver
quota_firewall = 1
    (IntOpt) Number of firewalls allowed per tenant. A negative value means unlimited.
quota_firewall_policy = 1
quota_firewall_rule = 100
quota_floatingip = 50
quota_health_monitor = -1
quota_member = -1
quota_network = 10
    (IntOpt) Number of networks allowed per tenant. A negative value means unlimited.
quota_network_gateway = 5
quota_packet_filter = 100
quota_pool = 10
quota_port = 50
quota_router = 10
    (IntOpt) Number of routers allowed per tenant. A negative value means unlimited.
quota_security_group = 10
quota_security_group_rule = 100
quota_subnet = 10
    (IntOpt) Number of subnets allowed per tenant. A negative value means unlimited.
quota_vip = 10
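As an illustration, raising the per-tenant network and port quotas while leaving routers unlimited might look like this sketch (section name capitalized as in the table above; negative values mean unlimited):

```ini
[QUOTAS]
quota_driver = neutron.db.quota_db.DbQuotaDriver
quota_network = 20
quota_port = 100
# Negative value: no limit on routers
quota_router = -1
```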
Rootwrap
Use the following options in the rootwrap.conf file for the rootwrap settings.
[DEFAULT]
filters_path = /etc/neutron/rootwrap.d,/usr/share/neutron/rootwrap
    List of directories to load filter definitions from (separated by ','). These directories MUST all be only writable by root!
exec_dirs = /sbin,/usr/sbin,/bin,/usr/bin
use_syslog = False
syslog_log_facility = syslog
    Which syslog facility to use. Valid values include auth, authpriv, syslog, local0, local1... Default value is 'syslog'.
syslog_log_level = ERROR
[xenapi]
xenapi_connection_url = <None>
xenapi_connection_username = root
xenapi_connection_password = <None>
Scheduler
Use the following options in the neutron.conf file to change scheduler settings.
[DEFAULT]
network_auto_schedule = True
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
    (StrOpt) Driver to use for scheduling a network to a DHCP agent.
router_auto_schedule = True
router_delete_namespaces = False
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
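For example, to spread routers across L3 agents by load instead of at random, the least-routers scheduler that ships alongside ChanceScheduler in the Juno tree can be configured; a sketch:

```ini
[DEFAULT]
network_auto_schedule = True
router_auto_schedule = True
# Place each new router on the L3 agent currently hosting the fewest routers
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
```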
Security Groups
Use the following options in the configuration file for your driver to change security group
settings.
[SECURITYGROUP]
enable_ipset = True
enable_security_group = True
firewall_driver = None
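firewall_driver defaults to None, which leaves security groups unenforced; a sketch for the common Open vSwitch hybrid setup (driver path assumed from the standard Juno tree):

```ini
[SECURITYGROUP]
enable_security_group = True
enable_ipset = True
# iptables rules applied on the intermediate Linux bridge leg between the VM and OVS
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```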
[DEFAULT]
ssl_ca_file = None
ssl_cert_file = None
ssl_key_file = None
    (StrOpt) Private key file to use when starting the server securely.
[ssl]
ca_file = None
cert_file = None
key_file = None
    (StrOpt) Private key file to use when starting the server securely.
Testing
Use the following options to alter testing-related features.
[DEFAULT]
fake_rabbit = False
[vArmour]
director = localhost
director_port = 443
password = varmour
username = varmour
VPN
Use the following options in the vpn_agent.ini file for the VPN agent.
[ipsec]
config_base_dir = $state_path/ipsec
ipsec_status_check_interval = 60
[openswan]
ipsec_config_template = /usr/lib/python/site-packages/neutron/services/vpn/device_drivers/template/openswan/ipsec.conf.template
ipsec_secret_template = /usr/lib/python/site-packages/neutron/services/vpn/device_drivers/template/openswan/ipsec.secret.template
[vpnagent]
vpn_device_driver = ['neutron.services.vpn.device_drivers.ipsec.OpenSwanDriver']
    (MultiStrOpt) The VPN device drivers Neutron will use.
Log file                 Service/interface
dhcp-agent.log           neutron-dhcp-agent
l3-agent.log             neutron-l3-agent
lbaas-agent.log          neutron-lbaas-agent
linuxbridge-agent.log    neutron-linuxbridge-agent
metadata-agent.log       neutron-metadata-agent
metering-agent.log       neutron-metering-agent
openvswitch-agent.log    neutron-openvswitch-agent
server.log               neutron-server
neutron.conf
Use the neutron.conf file to configure the majority of the OpenStack Networking options.
[DEFAULT]
# Print more verbose output (set logging level to INFO instead of default
# WARNING level).
# verbose = False
# Print debugging output (set logging level to DEBUG instead of default
# WARNING level).
# debug = False
# Where to store Neutron state files. This directory must be writable by the
# user executing the agent.
# state_path = /var/lib/neutron
# Where to store lock files
lock_path = $state_path/lock
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog                           -> syslog
# log_file and log_dir                 -> log_dir/log_file
# (not log_file) and log_dir           -> log_dir/{binary_name}.log
# use_stderr                           -> stderr
# (not user_stderr) and (not log_file) -> stdout
# publish_errors                       -> notification system
# use_syslog = False
# syslog_log_facility = LOG_USER
# use_stderr = True
# log_file =
# log_dir =
# publish_errors = False
# Address to bind the API server to
# bind_host = 0.0.0.0
# Port to bind the API server to
# bind_port = 9696
# Path to the extensions. Note that this can be a colon-separated list of
# paths. For example:
# api_extensions_path = extensions:/path/to/more/extensions:/even/more/extensions
# The __path__ of neutron.extensions is appended to this, so if your
# extensions are in there you don't need to specify them here
# api_extensions_path =
# (StrOpt) Neutron core plugin entrypoint to be loaded from the
# neutron.core_plugins namespace. See setup.cfg for the entrypoint names of the
# plugins included in the neutron source distribution. For compatibility with
# previous versions, the class name of a plugin can be specified instead of its
# entrypoint name.
#
# core_plugin =
# Example: core_plugin = ml2
# The base MAC address Neutron will use for VIFs. The first 3 octets will
# remain unchanged. If the 4th octet is not 00, it will also be used.
# The others will be randomly generated.
# 3 octet
# base_mac = fa:16:3e:00:00:00
# 4 octet
# base_mac = fa:16:3e:4f:00:00
# kombu_ssl_version =
# SSL key file (valid only if SSL enabled)
# kombu_ssl_keyfile =
# SSL cert file (valid only if SSL enabled)
# kombu_ssl_certfile =
# SSL certification authority file (valid only if SSL enabled)
# kombu_ssl_ca_certs =
# IP address of the RabbitMQ installation
# rabbit_host = localhost
# Password of the RabbitMQ server
# rabbit_password = guest
# Port where RabbitMQ server is running/listening
# rabbit_port = 5672
# RabbitMQ single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# rabbit_hosts is defaulted to '$rabbit_host:$rabbit_port'
# rabbit_hosts = localhost:5672
# User ID used for RabbitMQ connections
# rabbit_userid = guest
# Location of a virtual RabbitMQ installation.
# rabbit_virtual_host = /
# Maximum retries with trying to connect to RabbitMQ
# (the default of 0 implies an infinite retry count)
# rabbit_max_retries = 0
# RabbitMQ connection retry interval
# rabbit_retry_interval = 1
# Use HA queues in RabbitMQ (x-ha-policy: all). You need to
# wipe RabbitMQ database when changing this option. (boolean value)
# rabbit_ha_queues = false
# QPID
# rpc_backend=neutron.openstack.common.rpc.impl_qpid
# Qpid broker hostname
# qpid_hostname = localhost
# Qpid broker port
# qpid_port = 5672
# Qpid single or HA cluster (host:port pairs i.e: host1:5672, host2:5672)
# qpid_hosts is defaulted to '$qpid_hostname:$qpid_port'
# qpid_hosts = localhost:5672
# Username for qpid connection
# qpid_username = ''
# Password for qpid connection
# qpid_password = ''
# Space separated list of SASL mechanisms to use for auth
# qpid_sasl_mechanisms = ''
# Seconds between connection keepalive heartbeats
# qpid_heartbeat = 60
# Transport to use, either 'tcp' or 'ssl'
# qpid_protocol = tcp
# Disable Nagle algorithm
# qpid_tcp_nodelay = True
# ZMQ
# rpc_backend=neutron.openstack.common.rpc.impl_zmq
# ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP.
# The "host" option should point or resolve to this address.
# rpc_zmq_bind_address = *
# quota_vip = 10
# Number of pools allowed per tenant. A negative value means unlimited.
# quota_pool = 10
# Number of pool members allowed per tenant. A negative value means unlimited.
# The default is unlimited because a member is not a real resource consumer
# on OpenStack. However, on back-end, a member is a resource consumer
# and that is the reason why quota is possible.
# quota_member = -1
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
signing_dir = $state_path/keystone-signing
[database]
# This line MUST be changed to actually run the plugin.
# Example:
# connection = mysql://root:pass@127.0.0.1:3306/neutron
# Replace 127.0.0.1 above with the IP address of the database used by the
# main neutron server. (Leave it as is if the database runs on this host.)
# connection = sqlite://
# The SQLAlchemy connection string used to connect to the slave database
# slave_connection =
# Database reconnection retry times - in event connectivity is lost
# set to -1 implies an infinite retry count
# max_retries = 10
# service_provider=LOADBALANCER:Embrane:neutron.services.loadbalancer.drivers.embrane.driver.EmbraneLbaas:default
api-paste.ini
Use the api-paste.ini to configure the OpenStack Networking API.
[composite:neutron]
use = egg:Paste#urlmap
/: neutronversions
/v2.0: neutronapi_v2_0
[composite:neutronapi_v2_0]
use = call:neutron.auth:pipeline_factory
noauth = request_id catch_errors extensions neutronapiapp_v2_0
keystone = request_id catch_errors authtoken keystonecontext extensions neutronapiapp_v2_0
[filter:request_id]
paste.filter_factory = neutron.openstack.common.middleware.request_id:RequestIdMiddleware.factory
[filter:catch_errors]
paste.filter_factory = neutron.openstack.common.middleware.catch_errors:CatchErrorsMiddleware.factory
[filter:keystonecontext]
paste.filter_factory = neutron.auth:NeutronKeystoneContext.factory
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
[filter:extensions]
paste.filter_factory = neutron.api.extensions:plugin_aware_extension_middleware_factory
[app:neutronversions]
paste.app_factory = neutron.api.versions:Versions.factory
[app:neutronapiapp_v2_0]
paste.app_factory = neutron.api.v2.router:APIRouter.factory
policy.json
Use the policy.json file to define additional access controls that apply to the OpenStack
Networking service.
{
"context_is_admin": "role:admin",
"admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s",
"admin_or_network_owner": "rule:context_is_admin or tenant_id:%(network:tenant_id)s",
"admin_only": "rule:context_is_admin",
"regular_user": "",
"shared": "field:networks:shared=True",
"shared_firewalls": "field:firewalls:shared=True",
"external": "field:networks:router:external=True",
"default": "rule:admin_or_owner",
"subnets:private:read": "rule:admin_or_owner",
"subnets:private:write": "rule:admin_or_owner",
"subnets:shared:read": "rule:regular_user",
"subnets:shared:write": "rule:admin_only",
"create_subnet": "rule:admin_or_network_owner",
"get_subnet": "rule:admin_or_owner or rule:shared",
"update_subnet": "rule:admin_or_network_owner",
"delete_subnet": "rule:admin_or_network_owner",
"create_network": "",
"get_network": "rule:admin_or_owner or rule:shared or rule:external",
"get_network:router:external": "rule:regular_user",
"get_network:segments": "rule:admin_only",
"get_network:provider:network_type": "rule:admin_only",
"get_network:provider:physical_network": "rule:admin_only",
"get_network:provider:segmentation_id": "rule:admin_only",
"get_network:queue_id": "rule:admin_only",
"create_network:shared": "rule:admin_only",
"create_network:router:external": "rule:admin_only",
"create_network:segments": "rule:admin_only",
"create_network:provider:network_type": "rule:admin_only",
"create_network:provider:physical_network": "rule:admin_only",
"create_network:provider:segmentation_id": "rule:admin_only",
"update_network": "rule:admin_or_owner",
"update_network:segments": "rule:admin_only",
"update_network:shared": "rule:admin_only",
"update_network:provider:network_type": "rule:admin_only",
"update_network:provider:physical_network": "rule:admin_only",
"update_network:provider:segmentation_id": "rule:admin_only",
"delete_network": "rule:admin_or_owner",
"create_port": "",
"create_port:mac_address": "rule:admin_or_network_owner",
"create_port:fixed_ips": "rule:admin_or_network_owner",
"create_port:port_security_enabled": "rule:admin_or_network_owner",
"create_port:binding:host_id": "rule:admin_only",
"create_port:binding:profile": "rule:admin_only",
"create_port:mac_learning_enabled": "rule:admin_or_network_owner",
"get_port": "rule:admin_or_owner",
"get_port:queue_id": "rule:admin_only",
"get_port:binding:vif_type": "rule:admin_only",
"get_port:binding:vif_details": "rule:admin_only",
"get_port:binding:host_id": "rule:admin_only",
"get_port:binding:profile": "rule:admin_only",
"update_port": "rule:admin_or_owner",
"update_port:fixed_ips": "rule:admin_or_network_owner",
"update_port:port_security_enabled": "rule:admin_or_network_owner",
"update_port:binding:host_id": "rule:admin_only",
"update_port:binding:profile": "rule:admin_only",
"update_port:mac_learning_enabled": "rule:admin_or_network_owner",
"delete_port": "rule:admin_or_owner",
"create_router:external_gateway_info:enable_snat": "rule:admin_only",
"update_router:external_gateway_info:enable_snat": "rule:admin_only",
"create_firewall": "",
"get_firewall": "rule:admin_or_owner",
"create_firewall:shared": "rule:admin_only",
"get_firewall:shared": "rule:admin_only",
"update_firewall": "rule:admin_or_owner",
"delete_firewall": "rule:admin_or_owner",
"create_firewall_policy": "",
"get_firewall_policy": "rule:admin_or_owner or rule:shared_firewalls",
"create_firewall_policy:shared": "rule:admin_or_owner",
"update_firewall_policy": "rule:admin_or_owner",
"delete_firewall_policy": "rule:admin_or_owner",
"create_firewall_rule": "",
"get_firewall_rule": "rule:admin_or_owner or rule:shared_firewalls",
"update_firewall_rule": "rule:admin_or_owner",
"delete_firewall_rule": "rule:admin_or_owner",
"create_qos_queue": "rule:admin_only",
"get_qos_queue": "rule:admin_only",
"update_agent": "rule:admin_only",
"delete_agent": "rule:admin_only",
"get_agent": "rule:admin_only",
"create_dhcp-network": "rule:admin_only",
"delete_dhcp-network": "rule:admin_only",
"get_dhcp-networks": "rule:admin_only",
"create_l3-router": "rule:admin_only",
"delete_l3-router": "rule:admin_only",
"get_l3-routers": "rule:admin_only",
"get_dhcp-agents": "rule:admin_only",
"get_l3-agents": "rule:admin_only",
"get_loadbalancer-agent": "rule:admin_only",
"get_loadbalancer-pools": "rule:admin_only",
"create_router": "rule:regular_user",
"get_router": "rule:admin_or_owner",
"update_router:add_router_interface": "rule:admin_or_owner",
"update_router:remove_router_interface": "rule:admin_or_owner",
"delete_router": "rule:admin_or_owner",
"create_floatingip": "rule:regular_user",
"update_floatingip": "rule:admin_or_owner",
"delete_floatingip": "rule:admin_or_owner",
"get_floatingip": "rule:admin_or_owner",
"create_network_profile": "rule:admin_only",
"update_network_profile": "rule:admin_only",
"delete_network_profile": "rule:admin_only",
"get_network_profiles": "",
"get_network_profile": "",
"update_policy_profiles": "rule:admin_only",
"get_policy_profiles": "",
"get_policy_profile": "",
"create_metering_label": "rule:admin_only",
"delete_metering_label": "rule:admin_only",
"get_metering_label": "rule:admin_only",
"create_metering_label_rule": "rule:admin_only",
"delete_metering_label_rule": "rule:admin_only",
"get_metering_label_rule": "rule:admin_only",
"get_service_provider": "rule:regular_user",
"get_lsn": "rule:admin_only",
"create_lsn": "rule:admin_only"
}
rootwrap.conf
Use the rootwrap.conf file to define configuration values used by the rootwrap script
when the OpenStack Networking service must escalate its privileges to those of the root user.
# Configuration for neutron-rootwrap
# This file should be owned by (and only writable by) the root user
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writable by root!
filters_path=/etc/neutron/rootwrap.d,/usr/share/neutron/rootwrap,/etc/quantum/rootwrap.d,/usr/share/quantum/rootwrap
# List of directories to search executables in, in case filters do not
# explicitly specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writable by root!
exec_dirs=/sbin,/usr/sbin,/bin,/usr/bin
# Enable logging to syslog
# Default value is False
use_syslog=False
# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility=syslog
# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level=ERROR
[xenapi]
# XenAPI configuration is only required by the L2 agent if it is to
# target a XenServer/XCP compute host's dom0.
xenapi_connection_url=<None>
xenapi_connection_username=root
xenapi_connection_password=<None>
dhcp_agent.ini
Use the dhcp_agent.ini file to configure the DHCP agent.
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = False
# The DHCP agent will resync its state with Neutron to recover from any
# transient notification or rpc errors. The interval is number of
# seconds between attempts.
# resync_interval = 5
# The DHCP agent requires an interface driver be set. Choose the one that best
# matches your plugin.
# interface_driver =
# Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC,
# NVP, BigSwitch/Floodlight)
# interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Name of Open vSwitch bridge to use
# ovs_integration_bridge = br-int
# The DHCP server can assist with providing metadata support on isolated
# networks. Setting this value to True will cause the DHCP server to append
# specific host routes to the DHCP request. The metadata service will only
# be activated when the subnet does not contain any router port. The guest
# instance must be configured to request host routes via DHCP (Option 121).
# enable_isolated_metadata = False
# Number of threads to use during sync process. Should not exceed connection
# pool size configured on server.
# num_sync_threads = 4
# Location to store DHCP server config files
# dhcp_confs = $state_path/dhcp
# Domain to use for building the hostnames
# dhcp_domain = openstacklocal
# Override the default dnsmasq settings with this file
# dnsmasq_config_file =
# Comma-separated list of DNS servers which will be used by dnsmasq
# as forwarders.
# dnsmasq_dns_servers =
# Limit number of leases to prevent a denial-of-service.
# dnsmasq_lease_max = 16777216
# Location to DHCP lease relay UNIX domain socket
# dhcp_lease_relay_socket = $state_path/dhcp/lease_relay
# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy
l3_agent.ini
Use the l3_agent.ini file to configure the L3 agent.
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = False
# L3 requires that an interface driver be set. Choose the one that best
# matches your plugin.
# interface_driver =
# Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC)
# that supports L3 agent
# interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Indicates that this L3 agent should also handle routers that do not have
# an external network gateway configured. This option should be True only
# for a single agent in a Neutron deployment, and may be False for all agents
# if all routers must have an external network gateway
# handle_internal_only_routers = True
# Name of bridge used for external network traffic. This should be set to
# empty value for the linux bridge. When this parameter is set, each L3 agent
# can be associated with no more than one external network.
# external_network_bridge = br-ex
lbaas_agent.ini
Use the lbaas_agent.ini file to configure the LBaaS agent.
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output).
# debug = False
# The LBaaS agent will resync its state with Neutron to recover from any
# transient notification or rpc errors. The interval is number of
# seconds between attempts.
# periodic_interval = 10
# LBaaS requires an interface driver be set. Choose the one that best
# matches your plugin.
# interface_driver =
# Example of interface_driver option for OVS based plugins (OVS, Ryu, NEC,
# NVP, BigSwitch/Floodlight)
# interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
metadata_agent.ini
Use the metadata_agent.ini file to configure the metadata agent.
[DEFAULT]
# Show debugging output in log (sets DEBUG log level output)
# debug = True
# The Neutron user information for accessing the Neutron API.
auth_url = http://localhost:5000/v2.0
auth_region = RegionOne
# Turn off verification of the certificate for ssl
# auth_insecure = False
# Certificate Authority public key (CA cert) file for ssl
# auth_ca_cert =
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
# Network service endpoint type to pull from the keystone catalog
# endpoint_type = adminURL
# IP address used by Nova metadata server
# nova_metadata_ip = 127.0.0.1
# TCP Port used by Nova metadata server
# nova_metadata_port = 8775
# When proxying metadata requests, Neutron signs the Instance-ID header with a
# shared secret to prevent spoofing. You may select any string for a secret,
# but it must match here and in the configuration used by the Nova Metadata
# Server. NOTE: Nova uses a different key:
# neutron_metadata_proxy_shared_secret
# metadata_proxy_shared_secret =
# Location of Metadata Proxy UNIX domain socket
# metadata_proxy_socket = $state_path/metadata_proxy
# Number of separate worker processes for metadata server
# metadata_workers = 0
# Number of backlog requests to configure the metadata server socket with
# metadata_backlog = 128
New options

[DEFAULT] agent_down_time = 75
[DEFAULT] check_child_processes_interval = 60
[DEFAULT] dhcp_agents_per_network = 1
(StrOpt) The base mac address used for unique DVR instances by Neutron
[DEFAULT] gateway_external_network_id =
[DEFAULT] ha_vrrp_advert_int = 2
(FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
[DEFAULT] loadbalancer_pool_scheduler_driver = neutron.services.loadbalancer.agent_scheduler.ChanceScheduler
    (StrOpt) Driver to use for scheduling a pool to a default load balancer agent.
[DEFAULT] max_l3_agents_per_router = 3
[DEFAULT] max_routes = 30
[DEFAULT] min_l3_agents_per_router = 2
[DEFAULT] network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
    (StrOpt) Driver to use for scheduling a network to a DHCP agent.
[DEFAULT] nova_api_insecure = False
[DEFAULT] nova_client_cert =
[DEFAULT] nova_client_priv_key =
[DEFAULT] qpid_receiver_capacity = 1
[DEFAULT] router_id =
[DEFAULT] router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
[DEFAULT] send_arp_for_ha = 3
(BoolOpt) Set or un-set the don't fragment (DF) bit on outgoing IP packet carrying GRE/VXLAN tunnel.
[AGENT] physical_interface_mappings = []
[CISCO_N1K] http_pool_size = 4
[HYPERV] network_vlan_ranges = []
    (ListOpt) List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network>
[NOVA] node_override_vif_distributed = []
[NOVA] node_override_vif_dvs = []
[NOVA] node_override_vif_hw_veb = []
[NOVA] node_override_vif_vrouter = []
[NSX_DHCP] extra_domain_name_servers = []
(BoolOpt) Pull LSN information from NSX in case it is missing from the local data store. This is useful to rebuild the
local store in case of server recovery.
[NSX_METADATA] metadata_shared_secret =
[QUOTAS] quota_firewall = 1
    (IntOpt) Number of firewalls allowed per tenant. A negative value means unlimited.
[QUOTAS] quota_firewall_policy = 1
[QUOTAS] quota_floatingip = 50
[QUOTAS] quota_health_monitor = -1
[QUOTAS] quota_member = -1
[QUOTAS] quota_network_gateway = 5
[QUOTAS] quota_pool = 10
[QUOTAS] quota_router = 10
    (IntOpt) Number of routers allowed per tenant. A negative value means unlimited.
[QUOTAS] quota_security_group = 10
[QUOTAS] quota_vip = 10
[SRIOV_NIC] exclude_devices = []
[SRIOV_NIC] physical_device_mappings = []
[SWITCH] address =
[SWITCH] password =
[SWITCH] username =
(BoolOpt) Nuage plugin will sync resources between openstack and VSD
[SYNCMANAGER] sync_interval = 0
[cfg_agent] device_connection_timeout = 30
[cfg_agent] routing_svc_helper_class = neutron.plugins.cisco.cfg_agent.service_helpers.routing_svc_helper.RoutingServiceHelper
    (StrOpt) Path of the routing service helper class.
[cfg_agent] rpc_loop_interval = 10
    (IntOpt) Interval, in seconds, at which the process_services() loop executes. This is when the config agent lets each service helper process its neutron resources.
[general] backlog_processing_interval = 10
[general] cfg_agent_down_time = 60
(StrOpt) Name of management network for device configuration. Default value is osn_mgmt_nw
[haproxy] send_gratuitous_arp = 3
[hosting_devices] csr1kv_cfgagent_router_driver = neutron.plugins.cisco.cfg_agent.device_drivers.csr1kv.csr1kv_routing_driver.CSR1kvRoutingDriver
    (StrOpt) Config agent driver for CSR1kv.
[hosting_devices] csr1kv_configdrive_template = csr1kv_cfg_template
[hosting_devices] csr1kv_device_driver = neutron.plugins.cisco.l3.hosting_device_drivers.csr1kv_hd_driver.CSR1kvHostingDeviceDriver
    (StrOpt) Hosting device driver for CSR1kv.
[hosting_devices] csr1kv_flavor = 621
[hosting_devices] csr1kv_plugging_driver = neutron.plugins.cisco.l3.plugging_drivers.n1kv_trunking_driver.N1kvTrunkingPlugDriver
    (StrOpt) Plugging driver for CSR1kv.
[hosting_devices] csr1kv_username = stack
[keystone_authtoken] check_revocations_for_cached = False
(IntOpt) Sync interval in seconds between L3 Service plugin and EOS. This interval defines how often the synchronization is performed. This is an optional field. If not set, a value of 180 seconds is assumed.
[l3_arista] primary_l3_host =
[l3_arista] primary_l3_host_password =
[l3_arista] primary_l3_host_username =
[l3_arista] secondary_l3_host =
[ml2] extension_drivers = []
[ml2_brocade] rbridge_id = 1
[ml2_cisco_apic] apic_agent_poll_interval = 2
[ml2_cisco_apic] apic_agent_report_interval = 30
[ml2_cisco_apic] apic_app_profile_name = ${apic_system_id}_app
[ml2_cisco_apic] apic_entity_profile =
${apic_system_id}_entity_profile
[ml2_cisco_apic] apic_function_profile =
${apic_system_id}_function_profile
[ml2_cisco_apic] apic_host_uplink_ports = []
[ml2_cisco_apic] apic_hosts = []
[ml2_cisco_apic] apic_lacp_profile = ${apic_system_id}_lacp_profile
[ml2_cisco_apic] apic_node_profile = ${apic_system_id}_node_profile
[ml2_cisco_apic] apic_sync_interval = 0
[ml2_cisco_apic] apic_vlan_ns_name = ${apic_system_id}_vlan_ns
[ml2_cisco_apic] apic_vpc_pairs = []
[ml2_fslsdn] crd_url_timeout = 30
[plumgriddirector] driver = neutron.plugins.plumgrid.drivers.plumlib.Plumlib
[plumgriddirector] servertimeout = 5
(ListOpt) List of actions that are not pushed to the completion queue.
[radware] l2_l3_ctor_params = {'ha_network_name': 'HANetwork', 'service': '_REPLACE_', 'ha_ip_pool_name': 'default', 'twoleg_enabled': '_REPLACE_', 'allocate_ha_ips': True, 'allocate_ha_vrrp': True}
[radware] service_adc_type = VA
[radware] service_adc_version =
[radware] service_cache = 20
[radware] service_isl_vlan = -1
[radware] service_resource_pool_ids = []
[vpnagent] vpn_device_driver = ['neutron.services.vpn.device_drivers.ipsec.OpenSwanDriver']
(MultiStrOpt) The vpn device drivers Neutron will use.
[DEFAULT] control_exchange  neutron -> openstack
[DEFAULT] default_log_levels  amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, iso8601=WARN -> amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN
[DEFAULT] endpoint_type  adminURL -> publicURL
True
[DEFAULT] http_timeout  10 -> 75
[DEFAULT] metadata_backlog  128 -> 4096
[DEFAULT] metadata_workers
[DEFAULT] rpc_zmq_matchmaker  neutron.openstack.common.rpc.matchmaker.MatchMakerLocalhost -> oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
[CISCO_N1K] poll_duration  10 -> 60
[NOVA] vif_types
[SDNVE] default_tenant_type  OF -> OVERLAY
[database] connection  sqlite:// -> None
[database] max_overflow  20 -> None
[database] max_pool_size  10 -> None
[database] pool_timeout  10 -> None
[database] slave_connection  -> None
[keystone_authtoken] revocation_cache_time  300 -> 10
Table 8.76. Deprecated options
Deprecated option -> New option
[rpc_notifier2] topics -> [DEFAULT] notification_topics
9. Object Storage
Table of Contents
Introduction to Object Storage .................................................................................... 475
Object Storage general service configuration ............................................................... 475
Object server configuration ......................................................................................... 477
Object expirer configuration ........................................................................................ 486
Container server configuration .................................................................................... 489
Container sync realms configuration ............................................................................ 496
Container reconciler configuration ............................................................................... 497
Account server configuration ....................................................................................... 500
Proxy server configuration ........................................................................................... 506
Proxy server memcache configuration .......................................................................... 522
Rsyncd configuration ................................................................................................... 522
Configure Object Storage features ............................................................................... 523
New, updated and deprecated options in Juno for OpenStack Object Storage ............... 539
OpenStack Object Storage uses multiple configuration files for multiple services and background daemons, and paste.deploy to manage server configurations. Default configuration options appear in the [DEFAULT] section. You can override the default values by setting values in the other sections.
Object Storage uses paste.deploy to manage server configurations. Read more at http://pythonpaste.org/deploy/.
Default configuration options are set in the [DEFAULT] section, and any option specified there can be overridden in any of the other sections by using the syntax set option_name = value.
Configuration for servers and daemons can be expressed together in the same file for each
type of server, or separately. If a required section for the service trying to start is missing,
there will be an error. Sections not used by the service are ignored.
Consider the example of an Object Storage node. By convention, configuration for the object-server, object-updater, object-replicator, and object-auditor exists in a single file /etc/swift/object-server.conf:
[DEFAULT]
[pipeline:main]
pipeline = object-server
[app:object-server]
use = egg:swift#object
[object-replicator]
reclaim_age = 259200
[object-updater]
[object-auditor]
If you omit the object-auditor section, this file cannot be used as the configuration path
when starting the swift-object-auditor daemon:
$ swift-object-auditor /etc/swift/object-server.conf
Unable to find object-auditor config section in /etc/swift/object-server.conf
If the configuration path is a directory instead of a file, all of the files in the directory with
the file extension ".conf" will be combined to generate the configuration object which is delivered to the Object Storage service. This is referred to generally as "directory-based configuration".
Directory-based configuration leverages ConfigParser's native multi-file support. Files ending in ".conf" in the given directory are parsed in lexicographical order. File names starting
with '.' are ignored. A mixture of file and directory configuration paths is not supported - if
the configuration path is a file, only that file will be parsed.
The Object Storage service management tool swift-init has adopted the convention of looking for /etc/swift/{type}-server.conf.d/ if the file /etc/swift/{type}-server.conf does not exist.
When using directory-based configuration, if the same option under the same section appears more than once in different files, the last value parsed is said to override previous occurrences. You can ensure proper override precedence by prefixing the files in the configuration directory with numerical values, as in the following example file layout:
/etc/swift/
default.base
object-server.conf.d/
000_default.conf -> ../default.base
001_default-override.conf
010_server.conf
020_replicator.conf
030_updater.conf
040_auditor.conf
You can inspect the resulting combined configuration object using the swift-config command-line tool.
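The override behavior described above comes from ConfigParser's multi-file support and can be illustrated with the standard library alone. This is a standalone sketch, not Swift's actual loading code; the file names and option values are hypothetical:

```python
import configparser
import os
import tempfile

# Two fragments of a hypothetical object-server.conf.d/ directory; the
# lexicographically later file wins for options set in both.
confd = tempfile.mkdtemp()
with open(os.path.join(confd, "000_default.conf"), "w") as f:
    f.write("[app:object-server]\nworkers = 2\nmount_check = true\n")
with open(os.path.join(confd, "010_server.conf"), "w") as f:
    f.write("[app:object-server]\nworkers = 8\n")

parser = configparser.ConfigParser()
# Mirror the documented rules: only ".conf" files, skip dotfiles, sort names.
paths = sorted(
    os.path.join(confd, name)
    for name in os.listdir(confd)
    if name.endswith(".conf") and not name.startswith(".")
)
parser.read(paths)

print(parser.get("app:object-server", "workers"))      # -> 8 (last file parsed wins)
print(parser.get("app:object-server", "mount_check"))  # -> true (uncontested value kept)
```

The numeric prefixes on the file names are what make the lexicographical parse order predictable.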
All the services of an Object Storage deployment share a common configuration
in the [swift-hash] section of the /etc/swift/swift.conf file. The
swift_hash_path_suffix and swift_hash_path_prefix values must be identical
on all the nodes.
Description
swift_hash_path_prefix = changeme
swift_hash_path_suffix = changeme
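These values must match cluster-wide because object placement is derived from a salted MD5 hash that mixes in the prefix and suffix; nodes with different values would compute different placements for the same object. A simplified sketch of the idea (not Swift's exact implementation; the account, container, and object names are hypothetical):

```python
from hashlib import md5

HASH_PATH_PREFIX = b"changeme"  # must match swift_hash_path_prefix on every node
HASH_PATH_SUFFIX = b"changeme"  # must match swift_hash_path_suffix on every node

def hash_path(account, container=None, obj=None):
    # Simplified sketch: ring placement is derived from a salted MD5 of the
    # storage path, so the prefix/suffix act as a cluster-wide secret salt.
    parts = [p for p in (account, container, obj) if p is not None]
    path = "/" + "/".join(parts)
    return md5(HASH_PATH_PREFIX + path.encode("utf-8") + HASH_PATH_SUFFIX).hexdigest()

h1 = hash_path("AUTH_test", "photos", "cat.jpg")
# A node configured with a different prefix would compute a different hash:
h2 = md5(b"other" + b"/AUTH_test/photos/cat.jpg" + HASH_PATH_SUFFIX).hexdigest()
print(h1 != h2)
```

The hash is deterministic, which is exactly why agreement on the salt values across all nodes is mandatory.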
Description
backlog = 4096
bind_ip = 0.0.0.0
bind_port = 6000
bind_timeout = 30
client_timeout = 60
conn_timeout = 0.5
devices = /srv/node
disable_fallocate = false
Disable "fast fail" fallocate checks if the underlying filesystem does not support it.
disk_chunk_size = 65536
eventlet_debug = false
expiring_objects_account_name = expiring_objects
expiring_objects_container_divisor = 86400
fallocate_reserve = 0
log_address = /dev/log
log_custom_handlers =
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_max_line_length = 0
log_name = swift
log_statsd_default_sample_rate = 1.0
log_statsd_host = localhost
log_statsd_metric_prefix =
log_statsd_port = 8125
log_statsd_sample_rate_factor = 1.0
log_udp_host =
log_udp_port = 514
max_clients = 1024
Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request has on other requests served by the same worker. If the maximum number of clients is set to one, a given worker will not accept another request while processing, allowing other workers a chance to process it.
mount_check = true
Whether or not to check if the devices are mounted, to prevent accidentally writing to the root device
Description
network_chunk_size = 65536
node_timeout = 3
swift_dir = /etc/swift
user = swift
User to run as
workers = auto
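The workers and max_clients guidance above can be combined when tuning: more workers, each handling fewer concurrent clients, reduces the impact of a single slow or CPU-bound request. A sketch with illustrative values only (not recommendations):

```ini
[DEFAULT]
# Illustrative values only: raise workers and lower max_clients so one
# blocking request affects fewer concurrently handled clients.
workers = 8
max_clients = 256
```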
Description
auto_create_account_prefix = .
keep_cache_private = false
keep_cache_size = 5424880
max_upload_time = 86400
mb_per_sync = 512
replication_concurrency = 4
Set to restrict the number of concurrent incoming REPLICATION requests; set to 0 for unlimited
replication_failure_ratio = 1.0
If the value of failures / successes of REPLICATION subrequests exceeds this ratio, the overall REPLICATION request
will be aborted
replication_failure_threshold = 100
replication_lock_timeout = 15
Number of seconds to wait for an existing replication device lock before giving up.
replication_one_per_device = True
Restricts incoming REPLICATION requests to one per device, replication_concurrency permitting. This can help control I/O to each device, but you may wish to set this to False to allow multiple REPLICATION requests (up to the replication_concurrency setting above) per device.
replication_server = false
Log level
slow = 0
splice = no
threads_per_disk = 0
use = egg:swift#object
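The replication_failure_threshold and replication_failure_ratio options above work together. The documented rule can be restated as a small sketch (a hypothetical restatement, not the actual swift implementation):

```python
def should_abort_replication(failures, successes,
                             failure_threshold=100, failure_ratio=1.0):
    # Below the threshold, never abort: small samples are too noisy.
    if failures < failure_threshold:
        return False
    # Past the threshold, abort once failures/successes exceeds the ratio.
    return failures / max(successes, 1) > failure_ratio

print(should_abort_replication(150, 100))  # -> True  (ratio 1.5 > 1.0)
print(should_abort_replication(50, 10))    # -> False (threshold not reached)
```

With the defaults, at least 100 subrequests must fail before the ratio check can abort the overall REPLICATION request.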
Description
Description
concurrency = 1
daemonize = on
handoff_delete = auto
handoffs_first = False
http_timeout = 60
lockup_timeout = 1800
log_address = /dev/log
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_name = object-replicator
reclaim_age = 604800
recon_cache_path = /var/cache/swift
ring_check_interval = 15
rsync_bwlimit = 0
rsync_error_log_line_length = 0
rsync_io_timeout = 30
rsync_timeout = 900
run_pause = 30
stats_interval = 300
sync_method = rsync
vm_test_mode = no
Description
concurrency = 1
interval = 300
log_address = /dev/log
log_facility = LOG_LOCAL0
Description
log_level = INFO
Logging level
log_name = object-updater
recon_cache_path = /var/cache/swift
slowdown = 0.01
Description
bytes_per_second = 10000000
Maximum bytes audited per second. Should be tuned according to individual system specs. 0 is unlimited.
concurrency = 1
disk_chunk_size = 65536
files_per_second = 20
Maximum files audited per second. Should be tuned according to individual system specs. 0 is unlimited.
log_address = /dev/log
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_name = object-auditor
log_time = 3600
object_size_stats =
recon_cache_path = /var/cache/swift
zero_byte_files_per_second = 50
Description
disable_path =
use = egg:swift#healthcheck
Description
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
Description
use = egg:swift#recon
Description
dump_interval = 5.0
dump_timestamp = false
flush_at_shutdown = false
log_filename_prefix = /tmp/log/swift/profile/default.profile
path = /__profile__
profile_module = eventlet.green.profile
unwind = false
use = egg:swift#xprofile
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes you'd like fallocate to
# reserve, whether there is space for the given file size or not.
# fallocate_reserve = 0
#
# Time to wait while attempting to connect to another backend node.
# conn_timeout = 0.5
#
# Time to wait while sending each chunk of data to another backend node.
# node_timeout = 3
#
# Time to wait while receiving each chunk of data from a client or another
# backend node.
# client_timeout = 60
#
# network_chunk_size = 65536
# disk_chunk_size = 65536
[pipeline:main]
pipeline = healthcheck recon object-server
[app:object-server]
use = egg:swift#object
# You can override the default log routing for this app here:
# set log_name = object-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# max_upload_time = 86400
# slow = 0
#
# Objects smaller than this are not evicted from the buffercache once read
# keep_cache_size = 5424880
#
# If true, objects for authenticated GET requests may be kept in buffer cache
# if small enough
# keep_cache_private = false
#
# on PUTs, sync data every n MB
# mb_per_sync = 512
#
# Comma separated list of headers that can be set in metadata on an object.
# This list is in addition to X-Object-Meta-* headers and cannot include
# Content-Type, etag, Content-Length, or deleted
# allowed_headers = Content-Disposition, Content-Encoding, X-Delete-At, X-Object-Manifest, X-Static-Large-Object
#
# auto_create_account_prefix = .
#
# A value of 0 means "don't use thread pools". A reasonable starting point is
# 4.
# threads_per_disk = 0
#
# Configure parameter for creating specific server
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =
[filter:recon]
use = egg:swift#recon
#recon_cache_path = /var/cache/swift
#recon_lock_path = /var/lock
[object-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# vm_test_mode = no
# daemonize = on
# run_pause = 30
# concurrency = 1
# stats_interval = 300
#
# The sync method to use; default is rsync but you can use ssync to try the
# EXPERIMENTAL all-swift-code-no-rsync-callouts method. Once ssync is verified
# as having performance comparable to, or better than, rsync, we plan to
# deprecate rsync so we can move on with more features for replication.
# sync_method = rsync
# slowdown = 0.01
#
# recon_cache_path = /var/cache/swift
[object-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = object-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# files_per_second = 20
# bytes_per_second = 10000000
# log_time = 3600
# zero_byte_files_per_second = 50
# recon_cache_path = /var/cache/swift
#
# Takes a comma separated list of ints. If set, the object auditor will
# increment a counter for every object whose size is <= to the given break
# points and report the result after a full scan.
# object_size_stats =
Description
log_address = /dev/log
log_custom_handlers =
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_max_line_length = 0
log_name = swift
log_statsd_default_sample_rate = 1.0
log_statsd_host = localhost
log_statsd_metric_prefix =
log_statsd_port = 8125
log_statsd_sample_rate_factor = 1.0
log_udp_host =
log_udp_port = 514
swift_dir = /etc/swift
user = swift
User to run as
Description
use = egg:swift#proxy
Description
use = egg:swift#memcache
Description
use = egg:swift#catch_errors
Description
access_log_address = /dev/log
access_log_facility = LOG_LOCAL0
access_log_headers = false
access_log_headers_only =
access_log_level = INFO
access_log_name = swift
access_log_statsd_default_sample_rate = 1.0
access_log_statsd_host = localhost
access_log_statsd_metric_prefix =
access_log_statsd_port = 8125
access_log_statsd_sample_rate_factor = 1.0
access_log_udp_host =
access_log_udp_port = 514
log_statsd_valid_http_methods =
GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
reveal_sensitive_prefix = 16
The X-Auth-Token is sensitive data. If revealed to an unauthorized person, that person can make requests against an account until the token expires. Set reveal_sensitive_prefix to the number of characters of the token to log. For example, with reveal_sensitive_prefix = 12, only the first 12 characters of the token are logged. Set it to 0 to remove the token from the logs entirely.
use = egg:swift#proxy_logging
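The effect of reveal_sensitive_prefix on a logged token can be sketched like this (a hypothetical helper illustrating the documented behavior, not the middleware's actual code; the token value is made up):

```python
def obscure_token(token, reveal_sensitive_prefix=16):
    # Keep at most the first N characters of the token; drop the rest.
    if reveal_sensitive_prefix <= 0:
        return "..."
    return token[:reveal_sensitive_prefix] + "..."

token = "AUTH_tk1234567890abcdef1234567890abcdef"
print(obscure_token(token))      # -> AUTH_tk123456789...
print(obscure_token(token, 0))   # -> ...
```

Even the truncated prefix can identify a token's account in some deployments, so choose the smallest value that still makes log correlation possible.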
Description
auto_create_account_prefix = .
concurrency = 1
expiring_objects_account_name = expiring_objects
interval = 300
process = 0
processes = 0
reclaim_age = 604800
recon_cache_path = /var/cache/swift
report_interval = 300
Description
Description
allowed_sync_hosts = 127.0.0.1
backlog = 4096
bind_ip = 0.0.0.0
bind_port = 6001
bind_timeout = 30
db_preallocation = off
Description
devices = /srv/node
disable_fallocate = false
Disable "fast fail" fallocate checks if the underlying filesystem does not support it.
eventlet_debug = false
fallocate_reserve = 0
log_address = /dev/log
log_custom_handlers =
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_max_line_length = 0
log_name = swift
log_statsd_default_sample_rate = 1.0
log_statsd_host = localhost
log_statsd_metric_prefix =
log_statsd_port = 8125
log_statsd_sample_rate_factor = 1.0
log_udp_host =
log_udp_port = 514
max_clients = 1024
Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request has on other requests served by the same worker. If the maximum number of clients is set to one, a given worker will not accept another request while processing, allowing other workers a chance to process it.
mount_check = true
Whether or not to check if the devices are mounted, to prevent accidentally writing to the root device
swift_dir = /etc/swift
user = swift
User to run as
workers = auto
Description
allow_versions = false
auto_create_account_prefix = .
conn_timeout = 0.5
Description
node_timeout = 3
replication_server = false
Log level
use = egg:swift#container
Description
Description
concurrency = 8
conn_timeout = 0.5
interval = 30
log_address = /dev/log
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_name = container-replicator
max_diffs = 100
node_timeout = 10
per_diff = 1000
reclaim_age = 604800
recon_cache_path = /var/cache/swift
run_pause = 30
vm_test_mode = no
Description
account_suppression_time = 60
Seconds to suppress updating an account that has generated an error (timeout, not yet found, etc.)
concurrency = 4
conn_timeout = 0.5
interval = 300
Description
log_address = /dev/log
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_name = container-updater
node_timeout = 3
recon_cache_path = /var/cache/swift
slowdown = 0.01
Description
containers_per_second = 200
interval = 1800
log_address = /dev/log
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_name = container-auditor
recon_cache_path = /var/cache/swift
Description
container_time = 60
interval = 300
log_address = /dev/log
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_name = container-sync
sync_proxy = http://10.1.1.1:8888,http://10.1.1.2:8888
Description
disable_path =
use = egg:swift#healthcheck
Description
recon_cache_path = /var/cache/swift
use = egg:swift#recon
Description
dump_interval = 5.0
dump_timestamp = false
flush_at_shutdown = false
log_filename_prefix = /tmp/log/swift/profile/default.profile
path = /__profile__
profile_module = eventlet.green.profile
unwind = false
use = egg:swift#xprofile
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host = localhost
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# If you don't mind the extra disk space usage in overhead, you can turn this
# on to preallocate disk space with SQLite databases to decrease
# fragmentation.
# db_preallocation = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes you'd like fallocate to
# reserve, whether there is space for the given file size or not.
# fallocate_reserve = 0
[pipeline:main]
pipeline = healthcheck recon container-server
[app:container-server]
use = egg:swift#container
# You can override the default log routing for this app here:
# set log_name = container-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# node_timeout = 3
# conn_timeout = 0.5
# allow_versions = false
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server".
# replication_server = false
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =
[filter:recon]
use = egg:swift#recon
#recon_cache_path = /var/cache/swift
[container-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# vm_test_mode = no
# per_diff = 1000
# max_diffs = 100
# concurrency = 8
# interval = 30
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Time in seconds to wait between replication passes
# run_pause = 30
#
# recon_cache_path = /var/cache/swift
[container-updater]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-updater
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# interval = 300
# concurrency = 4
# node_timeout = 3
# conn_timeout = 0.5
#
# slowdown will sleep that amount between containers
# slowdown = 0.01
#
# Seconds to suppress updating an account that has generated an error
# account_suppression_time = 60
#
# recon_cache_path = /var/cache/swift
[container-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each container at most once per interval
# interval = 1800
#
# containers_per_second = 200
# recon_cache_path = /var/cache/swift
[container-sync]
# You can override the default log routing for this app here (don't use set!):
# log_name = container-sync
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# If you need to use an HTTP Proxy, set it here; defaults to no proxy.
# You can also set this to a comma separated list of HTTP Proxies and they will
# be randomly used (simple load balancing).
# sync_proxy = http://10.1.1.1:8888,http://10.1.1.2:8888
#
# Will sync each container at most once per interval
# interval = 300
#
# Maximum amount of time to spend syncing each container per pass
# container_time = 60
Description
mtime_check_interval = 300
Description
cluster_name1 = https://host1/v1/
cluster_name2 = https://host2/v1/
key = realm1key
key2 = realm1key2
Description
cluster_name3 = https://host3/v1/
cluster_name4 = https://host4/v1/
key = realm2key
key2 = realm2key2
[DEFAULT]
# The number of seconds between checking the modified time of this config file
# for changes and therefore reloading it.
# mtime_check_interval = 300
# [realm1]
# key = realm1key
# key2 = realm1key2
# cluster_name1 = https://host1/v1/
# cluster_name2 = https://host2/v1/
#
# Each section name is the name of a sync realm. A sync realm is a set of
# clusters that have agreed to allow container syncing with each other. Realm
# names will be considered case insensitive.
#
# [realm2]
# key = realm2key
# key2 = realm2key2
# cluster_name3 = https://host3/v1/
# cluster_name4 = https://host4/v1/
#
# The key is the overall cluster-to-cluster key used in combination with the
# external users' key that they set on their containers' X-Container-Sync-Key
# metadata header values. These keys will be used to sign each request the
# container sync daemon makes and used to validate each incoming container sync
# request.
#
# The key2 is optional and is an additional key incoming requests will be
# checked against. This is so you can rotate keys if you wish; you move the
# existing key to key2 and make a new key value.
#
# Any values in the realm section whose names begin with cluster_ will indicate
# the name and endpoint of a cluster and will be used by external users in
# their containers' X-Container-Sync-To metadata header values with the format
# "realm_name/cluster_name/container_name". Realm and cluster names are
# considered case insensitive.
#
# The endpoint is what the container sync daemon will use when sending out
# requests to that cluster. Keep in mind this endpoint must be reachable by all
# container servers, since that is where the container sync daemon runs. Note
# that the endpoint ends with /v1/ and that the container sync daemon will then
# add the account/container/obj name after that.
#
# Distribute this container-sync-realms.conf file to all your proxy servers
# and container servers.
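Putting the pieces above together, an external user on one cluster would point a container at a peer cluster using metadata headers in the documented "realm_name/cluster_name" form. The realm, cluster, account, and key names below are hypothetical:

```
X-Container-Sync-To: //realm1/cluster_name1/AUTH_otheraccount/peercontainer
X-Container-Sync-Key: usersecretkey
```

The realm and cluster portions must match names defined in container-sync-realms.conf, and the per-container key is combined with the realm key when signing sync requests.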
Description
log_address = /dev/log
Description
log_custom_handlers =
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_name = swift
log_statsd_default_sample_rate = 1.0
log_statsd_host = localhost
log_statsd_metric_prefix =
log_statsd_port = 8125
log_statsd_sample_rate_factor = 1.0
log_udp_host =
log_udp_port = 514
swift_dir = /etc/swift
user = swift
User to run as
Description
use = egg:swift#proxy
Description
interval = 30
reclaim_age = 604800
request_tries = 3
Description
use = egg:swift#memcache
Description
use = egg:swift#catch_errors
Description
use = egg:swift#proxy_logging
Description
backlog = 4096
bind_ip = 0.0.0.0
bind_port = 6002
bind_timeout = 30
db_preallocation = off
devices = /srv/node
disable_fallocate = false
Disable "fast fail" fallocate checks if the underlying filesystem does not support it.
eventlet_debug = false
fallocate_reserve = 0
log_address = /dev/log
log_custom_handlers =
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_max_line_length = 0
log_name = swift
log_statsd_default_sample_rate = 1.0
log_statsd_host = localhost
log_statsd_metric_prefix =
log_statsd_port = 8125
log_statsd_sample_rate_factor = 1.0
Description
log_udp_host =
log_udp_port = 514
max_clients = 1024
Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request has on other requests served by the same worker. If the maximum number of clients is set to one, a given worker will not accept another request while processing, allowing other workers a chance to process it.
mount_check = true
Whether or not to check if the devices are mounted, to prevent accidentally writing to the root device
swift_dir = /etc/swift
user = swift
User to run as
workers = auto
Description
auto_create_account_prefix = .
replication_server = false
Log level
use = egg:swift#account
Description
Description
concurrency = 8
conn_timeout = 0.5
error_suppression_interval = 60
Time in seconds that must elapse since the last error for a
node to be considered no longer error limited
error_suppression_limit = 10
interval = 30
log_address = /dev/log
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_name = account-replicator
max_diffs = 100
node_timeout = 10
per_diff = 1000
reclaim_age = 604800
recon_cache_path = /var/cache/swift
run_pause = 30
vm_test_mode = no
Description
accounts_per_second = 200
interval = 1800
log_address = /dev/log
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_name = account-auditor
recon_cache_path = /var/cache/swift
Description
concurrency = 25
conn_timeout = 0.5
delay_reaping = 0
interval = 3600
log_address = /dev/log
log_facility = LOG_LOCAL0
log_level = INFO
Logging level
log_name = account-reaper
node_timeout = 10
reap_warn_after = 2592000
Description
disable_path =
use = egg:swift#healthcheck
Description
recon_cache_path = /var/cache/swift
use = egg:swift#recon
Description
dump_interval = 5.0
dump_timestamp = false
flush_at_shutdown = false
log_filename_prefix = /tmp/log/swift/profile/default.profile
path = /__profile__
profile_module = eventlet.green.profile
unwind = false
use = egg:swift#xprofile
# If you don't mind the extra disk space usage in overhead, you can turn this
# on to preallocate disk space with SQLite databases to decrease
# fragmentation.
# db_preallocation = off
#
# eventlet_debug = false
#
# You can set fallocate_reserve to the number of bytes you'd like fallocate to
# reserve, whether there is space for the given file size or not.
# fallocate_reserve = 0
[pipeline:main]
pipeline = healthcheck recon account-server
[app:account-server]
use = egg:swift#account
# You can override the default log routing for this app here:
# set log_name = account-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_requests = true
# set log_address = /dev/log
#
# auto_create_account_prefix = .
#
# Configure parameter for creating specific server
# To handle all verbs, including replication verbs, do not specify
# "replication_server" (this is the default). To only handle replication,
# set to a True value (e.g. "True" or "1"). To handle only non-replication
# verbs, set to "False". Unless you have a separate replication network, you
# should not specify any value for "replication_server".
# replication_server = false
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE"
# disable_path =
[filter:recon]
use = egg:swift#recon
# recon_cache_path = /var/cache/swift
[account-replicator]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-replicator
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# vm_test_mode = no
# per_diff = 1000
# max_diffs = 100
# concurrency = 8
# interval = 30
#
# How long without an error before a node's error count is reset. This will
# also be how long before a node is reenabled after suppression is triggered.
# error_suppression_interval = 60
#
# How many errors can accumulate before a node is temporarily ignored.
# error_suppression_limit = 10
#
# node_timeout = 10
# conn_timeout = 0.5
#
# The replicator also performs reclamation
# reclaim_age = 604800
#
# Time in seconds to wait between replication passes
# run_pause = 30
#
# recon_cache_path = /var/cache/swift
[account-auditor]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-auditor
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# Will audit each account at most once per interval
# interval = 1800
#
# log_facility = LOG_LOCAL0
# log_level = INFO
# accounts_per_second = 200
# recon_cache_path = /var/cache/swift
[account-reaper]
# You can override the default log routing for this app here (don't use set!):
# log_name = account-reaper
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_address = /dev/log
#
# concurrency = 25
# interval = 3600
# node_timeout = 10
# conn_timeout = 0.5
#
# Normally, the reaper begins deleting account information for deleted accounts
# immediately; you can set this to delay its work however. The value is in
# seconds; 2592000 = 30 days for example.
# delay_reaping = 0
#
# If the account fails to be reaped due to a persistent error, the
# account reaper will log a message such as:
#     Account <name> has not been reaped since <date>
# You can search logs for this message if space is not being reclaimed
# after you delete account(s).
#
# Default is 2592000 seconds (30 days). This is in addition to any time
# requested by delay_reaping.
# reap_warn_after = 2592000
Description
admin_key = secret_admin_key
backlog = 4096
bind_ip = 0.0.0.0
bind_port = 8080
bind_timeout = 30
cert_file = /etc/swift/proxy.crt
client_timeout = 60
cors_allow_origin =
eventlet_debug = false
expiring_objects_account_name = expiring_objects
expiring_objects_container_divisor = 86400
expose_info = true
key_file = /etc/swift/proxy.key
log_address = /dev/log
log_custom_handlers =
log_facility = LOG_LOCAL0
log_headers = false
log_level = INFO
Logging level
log_max_line_length = 0
log_name = swift
log_statsd_default_sample_rate = 1.0
log_statsd_host = localhost
log_statsd_metric_prefix =
log_statsd_port = 8125
log_statsd_sample_rate_factor = 1.0
log_udp_host =
log_udp_port = 514
max_clients = 1024
Maximum number of clients one worker can process simultaneously. Lowering the number of clients handled per worker, and raising the number of workers, can lessen the impact that a CPU-intensive or blocking request has on other requests served by the same worker. If the maximum number of clients is set to one, a given worker will not accept another request while processing, allowing other workers a chance to process it.
strict_cors_mode = True
swift_dir = /etc/swift
trans_id_suffix =
user = swift
User to run as
workers = auto
Description
account_autocreate = false
allow_account_management = false
auto_create_account_prefix = .
client_chunk_size = 65536
conn_timeout = 0.5
deny_host_headers =
error_suppression_interval = 60
Time in seconds that must elapse since the last error for a
node to be considered no longer error limited
error_suppression_limit = 10
log_handoffs = true
max_containers_per_account = 0
max_containers_whitelist =
max_large_object_get_time = 86400
node_timeout = 10
object_chunk_size = 65536
object_post_as_copy = true
post_quorum_timeout = 0.5
put_queue_depth = 10
recheck_account_existence = 60
recheck_container_existence = 60
recoverable_node_timeout = node_timeout
request_node_count = 2 * replicas
Set to the number of nodes to contact for a normal request. You can use '* replicas' at the end to have it use the number given times the number of replicas for the ring being used for the request.
sorting_method = shuffle
swift_owner_headers =
These are the headers whose values in the conf file will only be shown to the list of swift_owners. The exact definition of a swift_owner is up to the auth system in use, but usually indicates administrative responsibilities.
timing_expiry = 300
use = egg:swift#proxy
write_affinity = r1, r2
write_affinity_node_count = 2 * replicas
Description
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit tempauth container-quotas account-quotas slo dlo proxy-logging proxy-server
Description
use = egg:swift#account_quotas
Description
admin_password = password
admin_tenant_name = service
admin_user = swift
auth_host = keystonehost
auth_port = 35357
auth_protocol = http
auth_uri = http://keystonehost:5000/
cache = swift.cache
delay_auth_decision = 1
include_service_catalog = False
Description
memcache_max_connections = 2
memcache_serialization_support = 2
memcache_servers = 127.0.0.1:11211
Log level
use = egg:swift#memcache
Description
Log level
use = egg:swift#catch_errors
Description
allow_full_urls = true
current = //REALM/CLUSTER
use = egg:swift#container_sync
Description
max_get_time = 86400
rate_limit_after_segment = 10
rate_limit_segments_per_sec = 1
use = egg:swift#dlo
Description
Log level
use = egg:swift#gatekeeper
Description
disable_path =
use = egg:swift#healthcheck
Description
allow_names_in_acls = true
default_domain_id = default
reseller_admin_role = ResellerAdmin
use = egg:swift#keystoneauth
Description
list_endpoints_path = /endpoints/
use = egg:swift#list_endpoints
Description
access_log_address = /dev/log
access_log_facility = LOG_LOCAL0
access_log_headers = false
access_log_headers_only =
access_log_level = INFO
access_log_name = swift
access_log_statsd_default_sample_rate = 1.0
access_log_statsd_host = localhost
access_log_statsd_metric_prefix =
access_log_statsd_port = 8125
access_log_statsd_sample_rate_factor = 1.0
access_log_udp_host =
access_log_udp_port = 514
log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
reveal_sensitive_prefix = 16
The X-Auth-Token is sensitive data. If revealed to an unauthorized person, they can make requests against an account until the token expires. Set reveal_sensitive_prefix to the number of characters of the token that are logged. For example, with reveal_sensitive_prefix = 12, only the first 12 characters of the token are logged. Set to 0 to completely remove the token from the log.
use = egg:swift#proxy_logging
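As a hypothetical illustration of the reveal_sensitive_prefix truncation rule described above (a sketch, not proxy-logging's actual code):

```python
# Illustrative sketch: how a logged X-Auth-Token is reduced when
# reveal_sensitive_prefix is set (0 removes the token entirely).
def obscure_token(token, reveal_sensitive_prefix=16):
    if reveal_sensitive_prefix <= 0:
        return "..."  # token suppressed completely
    if len(token) <= reveal_sensitive_prefix:
        return token
    # keep only the leading characters, enough to trace token usage
    return token[:reveal_sensitive_prefix] + "..."

print(obscure_token("AUTH_tk0123456789abcdef0123456789abcdef", 12))
# AUTH_tk01234...
```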
Description
allow_overrides = true
auth_prefix = /auth/
The HTTP request path prefix for the auth service. Swift itself reserves anything beginning with the letter `v`.
reseller_prefix = AUTH
Log level
storage_url_scheme = default
token_life = 86400
use = egg:swift#tempauth
user_test_tester3 = testing3
Description
dump_interval = 5.0
dump_timestamp = false
flush_at_shutdown = false
log_filename_prefix = /tmp/log/swift/profile/default.profile
path = /__profile__
profile_module = eventlet.green.profile
unwind = false
use = egg:swift#xprofile
# workers = auto
#
# Maximum concurrent requests per worker
# max_clients = 1024
#
# Set the following two lines to enable SSL. This is for testing only.
# cert_file = /etc/swift/proxy.crt
# key_file = /etc/swift/proxy.key
#
# expiring_objects_container_divisor = 86400
# expiring_objects_account_name = expiring_objects
#
# You can specify default log routing here if you want:
# log_name = swift
# log_facility = LOG_LOCAL0
# log_level = INFO
# log_headers = false
# log_address = /dev/log
#
# This optional suffix (default is empty) that would be appended to the swift
# transaction id allows one to easily figure out from which cluster that
# X-Trans-Id belongs to.
# This is very useful when one is managing more than one swift cluster.
# trans_id_suffix =
#
# comma separated list of functions to call to setup custom log handlers.
# functions get passed: conf, name, log_to_console, log_route, fmt, logger,
# adapted_logger
# log_custom_handlers =
#
# If set, log_udp_host will override log_address
# log_udp_host =
# log_udp_port = 514
#
# You can enable StatsD logging here:
# log_statsd_host = localhost
# log_statsd_port = 8125
# log_statsd_default_sample_rate = 1.0
# log_statsd_sample_rate_factor = 1.0
# log_statsd_metric_prefix =
#
# Use a comma separated list of full url (http://foo.bar:1234,https://foo.bar)
# cors_allow_origin =
#
# client_timeout = 60
# eventlet_debug = false
[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache
container_sync bulk tempurl slo dlo ratelimit tempauth container-quotas
account-quotas proxy-logging proxy-server
[app:proxy-server]
use = egg:swift#proxy
# You can override the default log routing for this app here:
# set log_name = proxy-server
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_address = /dev/log
# log_handoffs = true
# recheck_account_existence = 60
# recheck_container_existence = 60
# object_chunk_size = 8192
# client_chunk_size = 8192
#
# How long the proxy server will wait on responses from the a/c/o servers.
# node_timeout = 10
#
# How long the proxy server will wait for an initial response and to read a
# chunk of data from the object servers while serving GET / HEAD requests.
# Timeouts from these requests can be recovered from so setting this to
# something lower than node_timeout would provide quicker error recovery
# while allowing for a longer timeout for non-recoverable requests (PUTs).
# Defaults to node_timeout, should be overridden if node_timeout is set to a
# high number to prevent client timeouts from firing before the proxy server
# has a chance to retry.
# recoverable_node_timeout = node_timeout
#
# conn_timeout = 0.5
#
# How long to wait for requests to finish after a quorum has been established.
# post_quorum_timeout = 0.5
#
# How long without an error before a node's error count is reset. This will
# also be how long before a node is reenabled after suppression is triggered.
# error_suppression_interval = 60
#
# How many errors can accumulate before a node is temporarily ignored.
# error_suppression_limit = 10
#
# If set to 'true' any authorized user may create and delete accounts; if
# 'false' no one, even authorized, can.
# allow_account_management = false
[filter:tempauth]
use = egg:swift#tempauth
# You can override the default log routing for this filter here:
# set log_name = tempauth
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# The reseller prefix will verify a token begins with this prefix before even
# attempting to validate it. Also, with authorization, only Swift storage
# accounts with this prefix will be authorized by this middleware. Useful if
# multiple auth systems are in use for one Swift cluster.
# reseller_prefix = AUTH
#
# The auth prefix will cause requests beginning with this prefix to be routed
# to the auth subsystem, for granting tokens, etc.
# auth_prefix = /auth/
# token_life = 86400
#
# This allows middleware higher in the WSGI pipeline to override auth
# processing, useful for middleware such as tempurl and formpost. If you know
# you're not going to use such middleware and you want a bit of extra
# security, you can set this to false.
# allow_overrides = true
#
# This specifies what scheme to return with storage urls:
# http, https, or default (chooses based on what the server is running as)
# This can be useful with an SSL load balancer in front of a non-SSL server.
# storage_url_scheme = default
#
# Lastly, you need to list all the accounts/users you want here. The format is:
#   user_<account>_<user> = <key> [group] [group] [...] [storage_url]
# or if you want underscores in <account> or <user>, you can base64 encode them
# (with no equal signs) and use this format:
#   user64_<account_b64>_<user_b64> = <key> [group] [group] [...] [storage_url]
# There are special groups of:
#   .reseller_admin = can do anything to any account for this auth
#   .admin = can do anything within the account
# If neither of these groups are specified, the user can only access containers
# that have been explicitly allowed for them by a .admin or .reseller_admin.
# The trailing optional storage_url allows you to specify an alternate url to
# hand back to the user upon authentication. If not specified, this defaults to
# $HOST/v1/<reseller_prefix>_<account> where $HOST will do its best to resolve
# to what the requester would need to use to reach this host.
# Here are example entries, required for running the tests:
user_admin_admin = admin .admin .reseller_admin
user_test_tester = testing .admin
user_test2_tester2 = testing2 .admin
user_test_tester3 = testing3
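For the user64_ form described above, the account and user names are base64 encoded with the padding equal signs removed. A minimal sketch of producing such an entry name (the account, user, and key below are hypothetical examples, not part of the sample file):

```python
import base64

# Illustrative: build a user64_ entry name for a hypothetical account
# "my_account" and user "my_user", whose underscores would break the
# plain user_<account>_<user> format.
def b64_no_pad(name):
    # standard base64, with the trailing '=' padding stripped as the
    # sample file requires
    return base64.b64encode(name.encode()).decode().rstrip("=")

account = b64_no_pad("my_account")  # bXlfYWNjb3VudA
user = b64_no_pad("my_user")        # bXlfdXNlcg
print("user64_%s_%s = secretkey .admin" % (account, user))
```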
[filter:healthcheck]
use = egg:swift#healthcheck
# An optional filesystem path, which if present, will cause the healthcheck
# URL to return "503 Service Unavailable" with a body of "DISABLED BY FILE".
# This facility may be used to temporarily remove a Swift node from a load
# balancer pool during maintenance or upgrade (remove the file to allow the
# node back into the load balancer pool).
# disable_path =
[filter:cache]
use = egg:swift#memcache
# You can override the default log routing for this filter here:
# set log_name = cache
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# If not set here, the value for memcache_servers will be read from
# memcache.conf (see memcache.conf-sample) or lacking that file, it will
# default to the value below. You can specify multiple servers separated with
# commas, as in: 10.1.2.3:11211,10.1.2.4:11211
# memcache_servers = 127.0.0.1:11211
#
# Sets how memcache values are serialized and deserialized:
# 0 = older, insecure pickle serialization
# 1 = json serialization but pickles can still be read (still insecure)
# 2 = json serialization only (secure and the default)
[filter:ratelimit]
use = egg:swift#ratelimit
# You can override the default log routing for this filter here:
# set log_name = ratelimit
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# clock_accuracy should represent how accurate the proxy servers' system
# clocks are with each other. 1000 means that all the proxies' clock are
# accurate to each other within 1 millisecond. No ratelimit should be higher
# than the clock accuracy.
# clock_accuracy = 1000
#
# max_sleep_time_seconds = 60
#
# log_sleep_time_seconds of 0 means disabled
# log_sleep_time_seconds = 0
#
# allows for slow rates (e.g. running up to 5 sec's behind) to catch up.
# rate_buffer_seconds = 5
#
# account_ratelimit of 0 means disabled
# account_ratelimit = 0
# these are comma separated lists of account names
# account_whitelist = a,b
# account_blacklist = c,d
# with container_limit_x = r
# for containers of size x limit write requests per second to r. The container
# rate will be linearly interpolated from the values given. With the values
# below, a container of size 5 will get a rate of 75.
# container_ratelimit_0 = 100
# container_ratelimit_10 = 50
# container_ratelimit_50 = 20
# Similarly to the above container-level write limits, the following will
# limit container GET (listing) requests.
# container_listing_ratelimit_0 = 100
# container_listing_ratelimit_10 = 50
# container_listing_ratelimit_50 = 20
[filter:domain_remap]
use = egg:swift#domain_remap
# You can override the default log routing for this filter here:
# set log_name = domain_remap
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# storage_domain = example.com
# path_root = v1
# reseller_prefixes = AUTH
[filter:catch_errors]
use = egg:swift#catch_errors
# You can override the default log routing for this filter here:
# set log_name = catch_errors
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
[filter:cname_lookup]
# Note: this middleware requires python-dnspython
use = egg:swift#cname_lookup
# You can override the default log routing for this filter here:
# set log_name = cname_lookup
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
#
# Specify the storage_domain that match your cloud, multiple domains
# can be specified separated by a comma
# storage_domain = example.com
#
# lookup_depth = 1
# Note: Put staticweb just after your auth filter(s) in the pipeline
[filter:staticweb]
use = egg:swift#staticweb
# Note: Put tempurl before dlo, slo and your auth filter(s) in the pipeline
[filter:tempurl]
use = egg:swift#tempurl
# The methods allowed with Temp URLs.
# methods = GET HEAD PUT
#
# The headers to remove from incoming requests. Simply a whitespace delimited
# list of header names and names can optionally end with '*' to indicate a
# prefix match. incoming_allow_headers is a list of exceptions to these
# removals.
# incoming_remove_headers = x-timestamp
#
# The headers allowed as exceptions to incoming_remove_headers. Simply a
# whitespace delimited list of header names and names can optionally end with
# '*' to indicate a prefix match.
# incoming_allow_headers =
#
# The headers to remove from outgoing responses. Simply a whitespace delimited
# list of header names and names can optionally end with '*' to indicate a
# prefix match. outgoing_allow_headers is a list of exceptions to these
# removals.
# outgoing_remove_headers = x-object-meta-*
#
# The headers allowed as exceptions to outgoing_remove_headers. Simply a
# whitespace delimited list of header names and names can optionally end with
# '*' to indicate a prefix match.
# outgoing_allow_headers = x-object-meta-public-*
# Note: Put formpost just before your auth filter(s) in the pipeline
[filter:formpost]
use = egg:swift#formpost
# Note: Just needs to be placed before the proxy-server in the pipeline.
[filter:name_check]
use = egg:swift#name_check
# forbidden_chars = '"`<>
# maximum_length = 255
# forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$
[filter:list-endpoints]
use = egg:swift#list_endpoints
# list_endpoints_path = /endpoints/
[filter:proxy-logging]
use = egg:swift#proxy_logging
# If not set, logging directives from [DEFAULT] without "access_" will be used
# access_log_name = swift
# access_log_facility = LOG_LOCAL0
# access_log_level = INFO
# access_log_address = /dev/log
#
# If set, access_log_udp_host will override access_log_address
# access_log_udp_host =
# access_log_udp_port = 514
#
# You can use log_statsd_* from [DEFAULT] or override them here:
# access_log_statsd_host = localhost
# access_log_statsd_port = 8125
# access_log_statsd_default_sample_rate = 1.0
# access_log_statsd_sample_rate_factor = 1.0
# access_log_statsd_metric_prefix =
# access_log_headers = false
#
# If access_log_headers is True and access_log_headers_only is set only
# these headers are logged. Multiple headers can be defined as comma separated
# list like this: access_log_headers_only = Host, X-Object-Meta-Mtime
# access_log_headers_only =
#
# By default, the X-Auth-Token is logged. To obscure the value,
# set reveal_sensitive_prefix to the number of characters to log.
# For example, if set to 12, only the first 12 characters of the
# token appear in the log. An unauthorized access of the log file
# won't allow unauthorized usage of the token. However, the first
# 12 or so characters is unique enough that you can trace/debug
# token usage. Set to 0 to suppress the token completely (replaced
# by '...' in the log).
# Note: reveal_sensitive_prefix will not affect the value
# logged with access_log_headers=True.
# reveal_sensitive_prefix = 8192
#
# What HTTP methods are allowed for StatsD logging (comma-sep); request
# methods not in this list will have "BAD_METHOD" for the <verb> portion
# of the metric.
# log_statsd_valid_http_methods = GET,HEAD,POST,PUT,DELETE,COPY,OPTIONS
#
# Note: The double proxy-logging in the pipeline is not a mistake. The
# left-most proxy-logging is there to log requests that were handled in
# middleware and never made it through to the right-most middleware (and
# proxy server). Double logging is prevented for normal requests. See
# proxy-logging docs.
# Note: Put before both ratelimit and auth in the pipeline.
[filter:bulk]
use = egg:swift#bulk
# max_containers_per_extraction = 10000
# max_failed_extractions = 1000
# max_deletes_per_request = 10000
# max_failed_deletes = 1000
#
#
#
#
# Note: The following parameter is used during a bulk delete of objects and
# their container. This would frequently fail because it is very likely
# that all replicated objects have not been deleted by the time the middleware
# got a successful response. The number of retries can be configured, and the
# number of seconds to wait between each retry will be 1.5**retry.
# delete_container_retry_count = 0
# Note: Put after auth in the pipeline.
[filter:container-quotas]
use = egg:swift#container_quotas
# Note: Put before both ratelimit and auth in the pipeline.
[filter:slo]
use = egg:swift#slo
# max_manifest_segments = 1000
# max_manifest_size = 2097152
# min_segment_size = 1048576
# Start rate-limiting SLO segment serving after the Nth segment of a
# segmented object.
# rate_limit_after_segment = 10
#
# Once segment rate-limiting kicks in for an object, limit segments served
# to N per second. 0 means no rate-limiting.
# rate_limit_segments_per_sec = 0
#
# Time limit on GET requests (seconds)
# max_get_time = 86400
# Note: Put before both ratelimit and auth in the pipeline, but after
# gatekeeper, catch_errors, and proxy_logging (the first instance).
# If you don't put it in the pipeline, it will be inserted for you.
[filter:dlo]
use = egg:swift#dlo
[filter:account-quotas]
use = egg:swift#account_quotas
[filter:gatekeeper]
use = egg:swift#gatekeeper
# You can override the default log routing for this filter here:
# set log_name = gatekeeper
# set log_facility = LOG_LOCAL0
# set log_level = INFO
# set log_headers = false
# set log_address = /dev/log
[filter:container_sync]
use = egg:swift#container_sync
# Set this to false if you want to disallow any full url values to be set for
# any new X-Container-Sync-To headers. This will keep any new full urls from
# coming in, but won't change any existing values already in the cluster.
# Updating those will have to be done manually, as knowing what the true realm
# endpoint should be cannot always be guessed.
# allow_full_urls = true
Description
memcache_max_connections = 2
memcache_serialization_support = 2
memcache_servers = 127.0.0.1:11211
Rsyncd configuration
Find an example rsyncd configuration at etc/rsyncd.conf-sample in the source code
repository.
The available configuration options are:
[account]
max connections = 2
path = /srv/node
[container]
max connections = 4
path = /srv/node
[object]
max connections = 8
path = /srv/node
A cabinet or rack of servers may be designated as a single zone to maintain replica availability if
the cabinet becomes unavailable (for example, due to failure of the top of rack switches or
a dedicated circuit). In very large deployments, such as service provider level deployments,
each zone might have an entirely autonomous switching and power infrastructure, so that
even the loss of an electrical circuit or switching aggregator would result in the loss of a single replica at most.
Rate limiting
Rate limiting in Object Storage is implemented as a pluggable middleware that you configure on the proxy server. Rate limiting is performed on requests that result in database writes to the account and container SQLite databases. It uses memcached and is
dependent on the proxy servers having highly synchronized time. The rate limits are limited
by the accuracy of the proxy server clocks.
Description
account_blacklist = c,d
account_ratelimit = 0
account_whitelist = a,b
clock_accuracy = 1000
container_listing_ratelimit_0 = 100
container_listing_ratelimit_10 = 50
container_listing_ratelimit_50 = 20
container_ratelimit_0 = 100
container_ratelimit_10 = 50
container_ratelimit_50 = 20
log_sleep_time_seconds = 0
To allow visibility into rate limiting set this value > 0 and all
sleeps greater than the number will be logged.
max_sleep_time_seconds = 60
rate_buffer_seconds = 5
Number of seconds the rate counter can drop and be allowed to catch up (at a faster than listed rate). A larger
number will result in larger spikes in rate but better average accuracy.
Log level
use = egg:swift#ratelimit
With container_limit_x = r, the container rate limits are linearly interpolated from the values given. A sample container rate limiting configuration could be:
container_ratelimit_100 = 100
container_ratelimit_200 = 50
container_ratelimit_500 = 20
This would result in:
Container size    Rate limit
0-99              No limiting
100               100
150               75
500               20
1000              20
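The interpolation rule behind this table can be sketched as follows (an illustration of the linear interpolation described above, not Swift's actual implementation):

```python
# Rough sketch of how container_ratelimit_x values are linearly
# interpolated into a per-container write rate.
def interpolated_rate(size, limits):
    """limits: sorted list of (container_size, max_rate) pairs."""
    if size < limits[0][0]:
        return None  # below the smallest threshold: no limiting
    if size >= limits[-1][0]:
        return limits[-1][1]  # at or beyond the largest threshold
    for (lo_size, lo_rate), (hi_size, hi_rate) in zip(limits, limits[1:]):
        if lo_size <= size < hi_size:
            # linear interpolation between the two bracketing thresholds
            frac = (size - lo_size) / (hi_size - lo_size)
            return lo_rate + frac * (hi_rate - lo_rate)

limits = [(100, 100), (200, 50), (500, 20)]
print(interpolated_rate(150, limits))   # 75.0
print(interpolated_rate(1000, limits))  # 20
```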
Health check
Provides an easy way to monitor whether the Object Storage proxy server is alive. If you access the proxy with the path /healthcheck, it responds with OK in the response body,
which monitoring tools can use.
Description
disable_path =
use = egg:swift#healthcheck
Domain remap
Middleware that translates container and account parts of a domain to path parameters
that the proxy server understands.
Description
path_root = v1
Root path
reseller_prefixes = AUTH
Reseller prefix
Log level
storage_domain = example.com
use = egg:swift#domain_remap
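The translation can be sketched like this: a host of the form <container>.<account>.<storage_domain> is mapped onto a path under path_root. This is an illustrative sketch of the mapping rule, not the middleware's actual code:

```python
# Hypothetical sketch of the domain_remap rule: a request for
# container.AUTH_account.example.com/obj becomes /v1/AUTH_account/container/obj.
def remap(host, path, storage_domain="example.com", path_root="v1"):
    prefix = host[: -(len(storage_domain) + 1)]  # strip ".example.com"
    parts = prefix.split(".")
    if len(parts) == 2:  # <container>.<account>.<storage_domain>
        container, account = parts
        return "/%s/%s/%s%s" % (path_root, account, container, path)
    elif len(parts) == 1:  # <account>.<storage_domain>
        return "/%s/%s%s" % (path_root, parts[0], path)
    return path  # not a remappable host

print(remap("container.AUTH_account.example.com", "/obj"))
# /v1/AUTH_account/container/obj
```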
CNAME lookup
Middleware that translates an unknown domain in the host header to something that ends
with the configured storage_domain by looking up the given domain's CNAME record in
DNS.
Description
lookup_depth = 1
set log_level = INFO
    Log level
storage_domain = example.com
use = egg:swift#cname_lookup
Temporary URL
Allows the creation of URLs to provide temporary access to objects. For example, a website
may wish to provide a link to download a large object in OpenStack Object Storage, but
the Object Storage account has no public access. The website can generate a URL that provides GET access for a limited time to the resource. When the web browser user clicks on
the link, the browser downloads the object directly from Object Storage, eliminating the
need for the website to act as a proxy for the request. If the user shares the link with all his
friends, or accidentally posts it on a forum, the direct access is limited to the expiration time
set when the website created the link.
A temporary URL is the typical URL associated with an object, with two additional query parameters:
temp_url_sig
    A cryptographic signature
temp_url_expires
    An expiration date, in Unix time
Any alteration of the resource path or query arguments results in a 401 Unauthorized error. Similarly, a PUT where GET was the allowed method returns a 401. HEAD is allowed if
GET or PUT is allowed. Using this in combination with browser form post translation middleware could also allow direct-from-browser uploads to specific locations in Object Storage.
Note
Changing the X-Account-Meta-Temp-URL-Key invalidates any previously generated temporary URLs within 60 seconds (the memcache time for the key). Object Storage supports up to two keys, specified by X-Account-Meta-Temp-URL-Key and X-Account-Meta-Temp-URL-Key-2. Signatures are checked against both keys, if present, to allow for key rotation without invalidating all existing temporary URLs.
Object Storage includes a script called swift-temp-url that generates the query parameters
automatically:
$ bin/swift-temp-url GET 3600 /v1/AUTH_account/container/object mykey
/v1/AUTH_account/container/object?temp_url_sig=5c4cc8886f36a9d0919d708ade98bf0cc71c9e91&temp_url_expires=1374497657
Because this command only returns the path, you must prefix the Object Storage host
name (for example, https://swift-cluster.example.com).
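If swift-temp-url is not available, the signature can be computed directly. This sketch mirrors what the script does: the HMAC-SHA1 body is the method, expiration, and path joined by newlines ('mykey' stands in for your X-Account-Meta-Temp-URL-Key):

```python
# Sketch: computing a temporary URL by hand, equivalent to swift-temp-url.
import hmac
from hashlib import sha1
from time import time

def make_temp_url(method, path, key, seconds):
    """Return `path` with temp_url_sig and temp_url_expires appended."""
    expires = int(time() + seconds)
    hmac_body = '%s\n%s\n%s' % (method, expires, path)
    sig = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
    return '%s?temp_url_sig=%s&temp_url_expires=%s' % (path, sig, expires)

url = make_temp_url('GET', '/v1/AUTH_account/container/object', 'mykey', 3600)
```

As with swift-temp-url, the result is only the path; prefix it with the Object Storage host name before use.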
Description
incoming_allow_headers =
incoming_remove_headers = x-timestamp
outgoing_allow_headers = x-object-meta-public-*
outgoing_remove_headers = x-object-meta-*
use = egg:swift#tempurl
Name check

Description
forbidden_chars = '"`<>
forbidden_regexp = /\./|/\.\./|/\.$|/\.\.$
maximum_length = 255
use = egg:swift#name_check
Constraints
To change the OpenStack Object Storage internal limits, update the values in the swift-constraints section in the swift.conf file. Use caution when you update these values because they affect the performance of the entire cluster.
Description
account_listing_limit = 10000
container_listing_limit = 10000
max_account_name_length = 256
max_container_name_length = 256
max_file_size = 5368709122
max_header_size = 8192
max_meta_count = 90
max_meta_name_length = 128
max_meta_overall_size = 4096
max_meta_value_length = 256
max_object_name_length = 1024
Cluster health
Use the swift-dispersion-report tool to measure overall cluster health. This tool checks if
a set of deliberately distributed containers and objects are currently in their proper places
within the cluster. For instance, a common deployment has three replicas of each object.
The health of that object can be measured by checking if each replica is in its proper place.
If only two of the three replicas are in place, the object's health can be said to be at 66.66%, where 100% would be perfect. A single object's health, especially that of an older object, usually reflects the health of the entire partition the object is in. If you create enough objects on a distinct percentage of the partitions in the cluster, you get a good estimate of the overall cluster health. In practice, about 1% partition coverage balances well between accuracy and the time it takes to gather results. The first step in providing this health value is to create a new account solely for this usage. Next, you need
to place the containers and objects throughout the system so that they are on distinct
partitions. The swift-dispersion-populate tool does this by making up random container
and object names until they fall on distinct partitions. Last, and repeatedly for the life of
the cluster, you must run the swift-dispersion-report tool to check the health of each of
these containers and objects. These tools need direct access to the entire cluster and to the
ring files (installing them on a proxy server suffices). The swift-dispersion-populate and
swift-dispersion-report tools both use the same configuration file, /etc/swift/dispersion.conf.
There are also configuration options for specifying the dispersion coverage, which defaults
to 1%, retries, concurrency, and so on. However, the defaults are usually fine. Once the configuration is in place, run swift-dispersion-populate to populate the containers and objects
throughout the cluster. Now that those containers and objects are in place, you can run
swift-dispersion-report to get a dispersion report, or the overall health of the cluster. Here
is an example of a cluster in perfect health:
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 19s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
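Both dispersion tools read the same configuration file, /etc/swift/dispersion.conf. A minimal file, using the defaults listed later in this section, might look like:

```ini
[dispersion]
auth_url = http://localhost:8080/auth/v1.0
auth_user = test:tester
auth_key = testing
```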
Now, deliberately double the weight of a device in the object ring (with replication turned
off) and re-run the dispersion report to show what impact that has:
$ swift-ring-builder object.builder set_weight d0 200
$ swift-ring-builder object.builder rebalance
...
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 8s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
There were 1763 partitions missing one copy.
77.56% of object copies found (6094 of 7857)
Sample represents 1.00% of the object partition space
You can see that the health of the objects in the cluster has gone down significantly. Of course, this test environment has just four devices; in a production environment with many more devices, the impact of one device change is much less. Next, run the replicators to get everything put back into place, and then rerun the dispersion report:
... start object replicators and monitor logs until they're caught up ...
$ swift-dispersion-report
Queried 2621 containers for dispersion reporting, 17s, 0 retries
100.00% of container copies found (7863 of 7863)
Sample represents 1.00% of the container partition space
Queried 2619 objects for dispersion reporting, 7s, 0 retries
100.00% of object copies found (7857 of 7857)
Sample represents 1.00% of the object partition space
Alternatively, the dispersion report can also be output in JSON format. This allows it to be
more easily consumed by third-party utilities:
$ swift-dispersion-report -j
{"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863,
"missing_one": 0,
"copies_expected": 7863, "pct_found": 100.0, "overlapping": 0, "missing_all":
0}, "container":
{"retries:": 0, "missing_two": 0, "copies_found": 12534, "missing_one": 0,
"copies_expected":
12534, "pct_found": 100.0, "overlapping": 15, "missing_all": 0}}
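A small sketch of consuming that JSON from a third-party utility; the abridged report below reuses the numbers from the degraded-cluster run above:

```python
# Sketch: computing per-ring health from `swift-dispersion-report -j` output.
import json

raw = ('{"object": {"copies_found": 6094, "copies_expected": 7857},'
       ' "container": {"copies_found": 7863, "copies_expected": 7863}}')
report = json.loads(raw)

def health(ring):
    """Percentage of expected copies found for the given ring."""
    stats = report[ring]
    return 100.0 * stats['copies_found'] / stats['copies_expected']
```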
Description
auth_key = testing
auth_url = http://localhost:8080/auth/v1.0
auth_user = test:tester
auth_version = 1.0
concurrency = 25
container_populate = yes
container_report = yes
dispersion_coverage = 1.0
dump_json = no
endpoint_type = publicURL
keystone_api_insecure = no
object_populate = yes
object_report = yes
retries = 5
swift_dir = /etc/swift
Static large objects

Description
max_get_time = 86400
max_manifest_segments = 1000
max_manifest_size = 2097152
min_segment_size = 1048576
rate_limit_after_segment = 10
rate_limit_segments_per_sec = 0
use = egg:swift#slo
Container quotas
The container_quotas middleware implements simple quotas that can be imposed on
Object Storage containers by a user with the ability to set container metadata, most likely
the account administrator. This can be useful for limiting the scope of containers that are
delegated to non-admin users, exposed to formpost uploads, or just as a self-imposed sanity check.
Any object PUT operations that exceed these quotas return a 403 response (forbidden).
Quotas are subject to several limitations: eventual consistency, the timeliness of the cached container_info (60 second TTL by default), and the fact that the middleware cannot reject chunked transfer uploads that exceed the quota (though once the quota is exceeded, new chunked transfers are refused).
Set quotas by adding meta values to the container. These values are validated when you set
them:
X-Container-Meta-Quota-Bytes: Maximum size of the container, in bytes.
X-Container-Meta-Quota-Count: Maximum object count of the container.
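The check the middleware performs can be sketched as follows. This is an illustrative simplification working from cached container statistics, not the actual middleware code:

```python
# Illustrative sketch of the container_quotas check (not swift's code).
def put_allowed(container_info, new_object_bytes):
    """Return True if an object PUT fits within the container's quotas;
    a False result corresponds to the middleware's 403 response."""
    meta = container_info.get('meta', {})
    quota_bytes = meta.get('quota-bytes')
    quota_count = meta.get('quota-count')
    if quota_bytes is not None and \
            container_info['bytes'] + new_object_bytes > int(quota_bytes):
        return False  # X-Container-Meta-Quota-Bytes exceeded
    if quota_count is not None and \
            container_info['object_count'] + 1 > int(quota_count):
        return False  # X-Container-Meta-Quota-Count exceeded
    return True
```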
Description
use = egg:swift#container_quotas
Account quotas
The account_quotas middleware blocks write requests (PUT, POST) if a given account quota (in bytes) is exceeded, while DELETE requests are still allowed.
The x-account-meta-quota-bytes metadata entry must be set to store and enable the quota. Write requests to this metadata entry are only permitted for resellers. There is no account quota limitation on a reseller account even if x-account-meta-quota-bytes is set.
Any object PUT operations that exceed the quota return a 413 response (request entity too
large) with a descriptive body.
The following command uses an admin account that owns the Reseller role to set a quota on the test account:
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U admin:admin -K admin \
--os-storage-url http://127.0.0.1:8080/v1/AUTH_test post -m quota-bytes:10000
Here is the stat listing of an account where quota has been set:
$ swift -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing stat
Account: AUTH_test
Containers: 0
Objects: 0
Bytes: 0
Meta Quota-Bytes: 10000
X-Timestamp: 1374075958.37454
X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a
Bulk delete
Use bulk-delete to delete multiple files from an account with a single request. The middleware responds to DELETE requests with the header 'X-Bulk-Delete: true_value'. The body of the DELETE request is a newline-separated list of files to delete. The files listed must be URL encoded and in the form:
/container_name/obj_name
If all files are successfully deleted (or did not exist), the operation returns HTTPOk. If any
files failed to delete, the operation returns HTTPBadGateway. In both cases, the response
body is a JSON dictionary that shows the number of files that were successfully deleted or
not found. The files that failed are listed.
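Building the request body is straightforward; in this sketch the container and object names are made up for illustration:

```python
# Sketch: building the newline-separated, URL-encoded bulk delete body.
from urllib.parse import quote

def bulk_delete_body(paths):
    """Each path has the form '/container_name/obj_name'."""
    return '\n'.join(quote(p) for p in paths)

body = bulk_delete_body(['/photos/summer 2014.jpg', '/photos/old'])
```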
Description
delete_container_retry_count = 0
max_containers_per_extraction = 10000
max_deletes_per_request = 10000
max_failed_deletes = 1000
max_failed_extractions = 1000
use = egg:swift#bulk
yield_frequency = 10
Alternatively, if you have configured the Ubuntu Cloud Archive, you may use:
# apt-get install swift-plugin-s3
To add this middleware to your configuration, add the swift3 middleware in front of the
swauth middleware, and before any other middleware that looks at Object Storage requests (like rate limiting).
Ensure that your proxy-server.conf file contains swift3 in the pipeline and the
[filter:swift3] section, as shown below:
[pipeline:main]
pipeline = healthcheck cache swift3 swauth proxy-server
[filter:swift3]
use = egg:swift3#swift3
Next, configure the tool that you use to connect to the S3 API. For S3curl, for example, you must add your host IP to the @endpoints array (line 33 in s3curl.pl):
my @endpoints = ( '1.2.3.4');
To set up your client, ensure you are using the EC2 credentials, which can be downloaded from the API Endpoints tab of the dashboard. The host should also point to the Object Storage node's hostname. The client must also use the old-style calling format, and not the
hostname-based container format. Here is an example client setup using the Python boto library on a locally installed all-in-one Object Storage installation:
import boto
import boto.s3.connection

connection = boto.s3.connection.S3Connection(
    aws_access_key_id='a7811544507ebaf6c9a7a8804f47ea1c',
    aws_secret_access_key='a7d8e981-e296-d2ba-cb3b-db7dd23159bd',
    port=8080,
    host='127.0.0.1',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat())
Drive audit
The swift-drive-audit configuration items reference a script that can be run by using cron to watch for bad drives. If errors are detected, it unmounts the bad drive, so that
OpenStack Object Storage can work around it. It takes the following options:
Description
device_dir = /srv/node
error_limit = 1
log_address = /dev/log
log_facility = LOG_LOCAL0
log_file_pattern = /var/log/kern.*[!.][!g][!z]
log_level = INFO
Logging level
log_max_line_length = 0
minutes = 60
regex_pattern_1 = \berror\b.*\b(dm-[0-9]{1,2}\d?)\b
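As a quick check of what the default regex_pattern_1 matches, the kernel log line below is invented for illustration:

```python
# Sketch: exercising drive-audit's default regex_pattern_1 against a
# made-up kernel log line.
import re

pattern = r'\berror\b.*\b(dm-[0-9]{1,2}\d?)\b'
line = 'kernel: [ 1234.5678] Buffer I/O error on device dm-2, logical block 0'
match = re.search(pattern, line)  # the captured group names the bad device
```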
Form post
Middleware that provides the ability to upload objects to a cluster using an HTML form
POST. The format of the form is:
<form action="<swift-url>" method="POST"
      enctype="multipart/form-data">
  <input type="hidden" name="redirect" value="<redirect-url>" />
  <input type="hidden" name="max_file_size" value="<bytes>" />
  <input type="hidden" name="max_file_count" value="<count>" />
  <input type="hidden" name="expires" value="<unix-timestamp>" />
  <input type="hidden" name="signature" value="<hmac>" />
  <input type="file" name="file1" /><br />
  <input type="submit" />
</form>
The swift-url is the URL to the Object Storage destination, such as https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix. The name
of each file uploaded is appended to the specified swift-url. So, you can upload directly to the root of a container with a URL like https://swift-cluster.example.com/v1/AUTH_account/container/. Optionally, you can include an object prefix to better separate different users' uploads, such as https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix.
Note
The form method must be POST and the enctype must be set to multipart/form-data.
The redirect attribute is the URL to redirect the browser to after the upload completes. The
URL has status and message query parameters added to it, indicating the HTTP status code
for the upload (2xx is success) and a possible message for further information if there was
an error (such as max_file_size exceeded).
The max_file_size attribute must be included and indicates the largest single file upload that can be done, in bytes.
The max_file_count attribute must be included and indicates the maximum number of files that can be uploaded with the form. Include additional <input type="file" name="filexx"/> attributes if desired.
The expires attribute is the Unix timestamp before which the form must be submitted; after that time, the form is invalidated.
The signature attribute is the HMAC-SHA1 signature of the form. This sample Python code
shows how to compute the signature:
import hmac
from hashlib import sha1
from time import time

path = '/v1/account/container/object_prefix'
redirect = 'https://myserver.com/some-page'
max_file_size = 104857600
max_file_count = 10
expires = int(time() + 600)
key = 'mykey'
hmac_body = '%s\n%s\n%s\n%s\n%s' % (path, redirect,
                                    max_file_size, max_file_count, expires)
# encode to bytes so that this also runs on Python 3
signature = hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()
Description
use = egg:swift#formpost
Static web

Description
use = egg:swift#staticweb
container-reconciler.conf: [DEFAULT] log_statsd_default_sample_rate = 1.0
container-reconciler.conf: [DEFAULT] log_statsd_host = localhost
    (StrOpt) If not set, the StatsD feature is disabled.
container-reconciler.conf: [DEFAULT] log_statsd_metric_prefix =
container-reconciler.conf: [DEFAULT] log_statsd_sample_rate_factor = 1.0
container-reconciler.conf: [container-reconciler] reclaim_age = 604800
container-reconciler.conf: [container-reconciler] request_tries = 3
container-server.conf: [DEFAULT] log_max_line_length = 0
    (StrOpt) Caps the length of log lines to the value given; no limit if set to 0, the default.
container-server.conf: [filter-xprofile] dump_interval = 5.0
container-server.conf: [filter-xprofile] flush_at_shutdown = false
    (StrOpt) No help text available for this option.
container-server.conf: [filter-xprofile] log_filename_prefix = /tmp/log/swift/profile/default.profile
object-expirer.conf: [filter-proxy-logging] access_log_address = /dev/log
object-expirer.conf: [filter-proxy-logging] access_log_facility = LOG_LOCAL0
object-expirer.conf: [filter-proxy-logging] access_log_headers = false
object-expirer.conf: [filter-proxy-logging] access_log_headers_only =
object-expirer.conf: [filter-proxy-logging] access_log_name = swift
    (StrOpt) No help text available for this option.
object-expirer.conf: [filter-proxy-logging] access_log_statsd_default_sample_rate = 1.0
object-expirer.conf: [filter-proxy-logging] access_log_statsd_host = localhost
object-expirer.conf: [filter-proxy-logging] access_log_statsd_metric_prefix =
object-expirer.conf: [filter-proxy-logging] access_log_statsd_port = 8125
object-expirer.conf: [filter-proxy-logging] access_log_statsd_sample_rate_factor = 1.0
object-expirer.conf: [filter-proxy-logging] access_log_udp_host =
object-expirer.conf: [filter-proxy-logging] access_log_udp_port = 514
object-expirer.conf: [filter-proxy-logging] reveal_sensitive_prefix = 16
object-server.conf: [filter-xprofile] dump_timestamp = false
    (StrOpt) No help text available for this option.
object-server.conf: [filter-xprofile] flush_at_shutdown = false
object-server.conf: [filter-xprofile] use = egg:swift#xprofile
    (StrOpt) Entry point of paste.deploy in the server
object-server.conf: [object-auditor] concurrency = 1
proxy-server.conf: [filter-keystoneauth] allow_names_in_acls = true
proxy-server.conf: [filter-keystoneauth] default_domain_id = default
    (StrOpt) No help text available for this option.
proxy-server.conf: [filter-xprofile] dump_timestamp = false
    (StrOpt) No help text available for this option.
proxy-server.conf: [filter-xprofile] flush_at_shutdown = false
dispersion.conf: [dispersion] auth_version
    previous default: 2.0; new default: 1.0
drive-audit.conf: [drive-audit] log_file_pattern
    previous default: /var/log/kern*; new default: /var/log/kern.*[!.][!g][!z]
object-expirer.conf: [pipeline-main] pipeline
proxy-server.conf: [DEFAULT] bind_port
    previous default: 80; new default: 8080
proxy-server.conf: [DEFAULT] disallowed_sections
    previous default: container_quotas, tempurl; new default: container_quotas, tempurl, bulk_delete.max_failed_deletes
proxy-server.conf: [app-proxy-server] client_chunk_size
    previous default: 8192; new default: 65536
proxy-server.conf: [app-proxy-server] object_chunk_size
    previous default: 8192; new default: 65536
proxy-server.conf: [filter-tempurl]
methods
proxy-server.conf: [pipeline-main]
pipeline
10. Orchestration
Table of Contents
Configure APIs ............................................................................................ 548
Configure Clients ......................................................................................... 551
Configure the RPC messaging system ........................................................... 554
New, updated and deprecated options in Juno for Orchestration ................. 557
The Orchestration service is designed to manage the lifecycle of infrastructure and applications within OpenStack clouds. Its various agents and services are configured in the /etc/heat/heat.conf file.
To install Orchestration, see the OpenStack Installation Guide for your distribution
(docs.openstack.org).
The following tables provide a comprehensive list of the Orchestration configuration options.
Description
[keystone_authtoken]
admin_password = None
admin_tenant_name = admin
admin_token = None
admin_user = None
auth_admin_prefix =
auth_host = 127.0.0.1
auth_port = 35357
(IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https
auth_uri = None
auth_version = None
cache = None
cafile = None
certfile = None
check_revocations_for_cached = False
delay_auth_decision = False
enforce_token_bind = permissive
(StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding.
"permissive" (default) to validate binding information if
the bind type is of a form known to the server and ignore
it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of
token binding is needed to be allowed. Finally the name of
a binding method that must be present in tokens.
hash_algorithms = md5
http_connect_timeout = None
http_request_max_retries = 3
identity_uri = None
include_service_catalog = True
(BoolOpt) (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for the service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False
keyfile = None
memcache_secret_key = None
memcache_security_strategy = None
(StrOpt) (optional) if defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the
cache. If the value is not one of these options or empty,
auth_token will raise an exception on initialization.
revocation_cache_time = 10
signing_dir = None
token_cache_time = 300
(IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens
for a configurable duration (in seconds). Set to -1 to disable caching completely.
Description
[DEFAULT]
deferred_auth_method = password
environment_dir = /etc/heat/environment.d
event_purge_batch_size = 10
(IntOpt) Controls how many events will be pruned whenever a stack's events exceed max_events_per_stack. Set
this lower to keep more events at the expense of more frequent purges.
host = localhost
instance_driver = heat.engine.nova
instance_user = ec2-user
keystone_backend =
heat.common.heat_keystoneclient.KeystoneClientV3
lock_path = None
memcached_servers = None
periodic_interval = 60
[keystone_authtoken]
memcached_servers = None
[revision]
heat_revision = unknown
(StrOpt) Heat build revision. If you would prefer to manage your build revision separately, you can move this section to a different file and add it as another config option.
Description
[DEFAULT]
auth_encryption_key = notgood but just long enough i think
Description
[database]
backend = sqlalchemy
connection = None
connection_debug = 0
connection_trace = False
db_inc_retry_interval = True
db_max_retries = 20
(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10
(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
db_retry_interval = 1
idle_timeout = 3600
max_overflow = None
max_pool_size = None
max_retries = 10
min_pool_size = 1
mysql_sql_mode = TRADITIONAL
pool_timeout = None
retry_interval = 10
slave_connection = None
(StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite
sqlite_synchronous = True
use_db_reconnect = False
Description
[DEFAULT]
backdoor_port = None
disable_process_locking = False
Description
[DEFAULT]
loadbalancer_template = None
Description
[DEFAULT]
debug = False
(BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
fatal_deprecations = False
log_config_append = None
log_dir = None
(StrOpt) (Optional) The base directory used for relative --log-file paths.
log_file = None
(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format = None
(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available
logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and
logging_default_format_string instead.
logging_context_format_string = %(asctime)s.
%(msecs)03d %(process)d %(levelname)s %(name)s
[%(request_id)s %(user_identity)s] %(instance)s
%(message)s
logging_debug_format_suffix = %(funcName)s
%(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
(StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d
%(process)d TRACE %(name)s %(instance)s
publish_errors = False
syslog_log_facility = LOG_USER
use_stderr = True
use_syslog = False
use_syslog_rfc_format = False
verbose = False
Description
[DEFAULT]
max_events_per_stack = 1000
max_nested_stack_depth = 3
max_resources_per_stack = 1000
max_stacks_per_tenant = 100
max_template_size = 524288
Description
[matchmaker_redis]
host = 127.0.0.1
password = None
port = 6379
[matchmaker_ring]
ringfile = /etc/oslo/matchmaker_ring.json
Description
[DEFAULT]
fake_rabbit = False
Configure APIs
The following options allow configuration of the APIs that Orchestration supports. Currently this includes compatibility APIs for CloudFormation and CloudWatch and a native API.
Description
[DEFAULT]
action_retry_limit = 5
heat_metadata_server_url =
heat_stack_user_role = heat_stack_user
heat_waitcondition_server_url =
heat_watch_server_url =
max_json_body_size = 1048576
num_engine_workers = 1
policy_default_rule = default
policy_file = policy.json
secure_proxy_ssl_header = X-Forwarded-Proto
stack_action_timeout = 3600
stack_domain_admin = None
stack_domain_admin_password = None
stack_user_domain_id = None
stack_user_domain_name = None
(StrOpt) Keystone domain name which contains heat template-defined users. If `stack_user_domain_id` option is
set, this option is ignored.
trusts_delegated_roles = heat_stack_owner
[auth_password]
allowed_auth_uris =
multi_cloud = False
[ec2authtoken]
allowed_auth_uris =
auth_uri = None
multi_cloud = False
[heat_api]
backlog = 4096
bind_host = 0.0.0.0
bind_port = 8004
cert_file = None
key_file = None
max_header_line = 16384
(IntOpt) Maximum line size of message headers to be accepted. max_header_line may need to be increased when
using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
workers = 0
[paste_deploy]
api_paste_config = api-paste.ini
flavor = None
[ssl]
ca_file = None
cert_file = None
key_file = None
(StrOpt) Private key file to use when starting the server securely.
Description
[DEFAULT]
instance_connection_https_validate_certificates = 1
instance_connection_is_secure = 0
[heat_api_cfn]
backlog = 4096
bind_host = 0.0.0.0
bind_port = 8000
cert_file = None
key_file = None
max_header_line = 16384
(IntOpt) Maximum line size of message headers to be accepted. max_header_line may need to be increased when
using large tokens (typically those generated by the Keystone v3 API with big service catalogs).
workers = 0
[ssl]
ca_file = None
cert_file = None
key_file = None
(StrOpt) Private key file to use when starting the server securely.
Description
[DEFAULT]
enable_cloud_watch_lite = True
heat_watch_server_url =
[heat_api_cloudwatch]
backlog = 4096
bind_host = 0.0.0.0
bind_port = 8003
cert_file = None
key_file = None
max_header_line = 16384
(IntOpt) Maximum line size of message headers to be accepted. max_header_line may need to be increased when
using large tokens (typically those generated by the Keystone v3 API with big service catalogs.)
workers = 0
[ssl]
ca_file = None
cert_file = None
key_file = None
(StrOpt) Private key file to use when starting the server securely.
Description
[DEFAULT]
heat_metadata_server_url =
Description
[DEFAULT]
heat_waitcondition_server_url =
Configure Clients
The following options allow configuration of the clients that Orchestration uses to talk to
other services.
Description
[DEFAULT]
region_name_for_services = None
[clients]
ca_file = None
cert_file = None
endpoint_type = publicURL
insecure = False
key_file = None
Description
[DEFAULT]
cloud_backend = heat.engine.clients.OpenStackClients
Description
[clients_ceilometer]
ca_file = None
cert_file = None
endpoint_type = publicURL
insecure = False
key_file = None
Description
[clients_cinder]
ca_file = None
cert_file = None
endpoint_type = publicURL
http_log_debug = False
insecure = False
key_file = None
Description
[clients_glance]
ca_file = None
cert_file = None
endpoint_type = publicURL
insecure = False
key_file = None
Description
[clients_heat]
ca_file = None
cert_file = None
endpoint_type = publicURL
insecure = False
key_file = None
url = None
Description
[clients_keystone]
ca_file = None
cert_file = None
endpoint_type = publicURL
insecure = False
key_file = None
Description
[clients_neutron]
ca_file = None
cert_file = None
endpoint_type = publicURL
insecure = False
key_file = None
Description
[clients_nova]
ca_file = None
cert_file = None
endpoint_type = publicURL
http_log_debug = False
insecure = False
key_file = None
Description
[clients_swift]
ca_file = None
cert_file = None
endpoint_type = publicURL
insecure = False
key_file = None
Description
[clients_trove]
ca_file = None
cert_file = None
endpoint_type = publicURL
insecure = False
key_file = None
Configure RabbitMQ
OpenStack Oslo RPC uses RabbitMQ by default. Use these options to configure the RabbitMQ messaging system. The rpc_backend option is optional as long as RabbitMQ is the default messaging system. However, if it is included in the configuration, you must set it to
heat.openstack.common.rpc.impl_kombu.
rpc_backend = heat.openstack.common.rpc.impl_kombu
Use these options to configure the RabbitMQ messaging system. You can configure messaging communication for different installation scenarios, tune retries
for RabbitMQ, and define the size of the RPC thread pool. To monitor notifications through RabbitMQ, you must set the notification_driver option to
heat.openstack.common.notifier.rpc_notifier in the heat.conf file:
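Putting the settings above together, a minimal heat.conf fragment for a RabbitMQ deployment might look like the following sketch; the host name and credentials are placeholders:

```ini
[DEFAULT]
# Optional when RabbitMQ is the default; must be this value if set.
rpc_backend = heat.openstack.common.rpc.impl_kombu
# Required to monitor notifications through RabbitMQ.
notification_driver = heat.openstack.common.notifier.rpc_notifier
rabbit_host = rabbit.example.com
rabbit_port = 5672
rabbit_userid = heat
rabbit_password = RABBIT_PASS
rabbit_virtual_host = /
```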
Description
[DEFAULT]
kombu_reconnect_delay = 1.0
(FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_ssl_ca_certs =
kombu_ssl_certfile =
kombu_ssl_keyfile =
kombu_ssl_version =
rabbit_ha_queues = False
rabbit_host = localhost
rabbit_hosts = $rabbit_host:$rabbit_port
rabbit_login_method = AMQPLAIN
rabbit_max_retries = 0
(IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest
rabbit_port = 5672
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_virtual_host = /
Configure Qpid
Use these options to configure the Qpid messaging system for OpenStack Oslo RPC. Qpid is
not the default messaging system, so you must enable it by setting the rpc_backend option in the heat.conf file:
rpc_backend=heat.openstack.common.rpc.impl_qpid
This critical option points the Orchestration services to the Qpid broker (server). Set the
qpid_hostname option to the host name where the broker runs in the heat.conf file.
Note
The qpid_hostname option accepts a host name or IP address value.
qpid_hostname = hostname.example.com
If the Qpid broker listens on a port other than the AMQP default of 5672, you must set the
qpid_port option to that value:
qpid_port = 12345
If you configure the Qpid broker to require authentication, you must add a user name and
password to the configuration:
qpid_username = username
qpid_password = password
By default, TCP is used as the transport. To enable SSL, set the qpid_protocol option:
qpid_protocol = ssl
Use these additional options to configure the Qpid messaging driver for OpenStack Oslo
RPC. These options are used infrequently.
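Combined, the Qpid settings described above form a heat.conf fragment such as the following sketch; the host name and credentials are placeholders:

```ini
[DEFAULT]
# Qpid is not the default backend, so rpc_backend must be set explicitly.
rpc_backend = heat.openstack.common.rpc.impl_qpid
qpid_hostname = hostname.example.com
qpid_port = 5672
qpid_username = username
qpid_password = password
# Set to ssl to encrypt the transport; tcp is the default.
qpid_protocol = tcp
```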
Description
[DEFAULT]
qpid_heartbeat = 60
qpid_hostname = localhost
qpid_hosts = $qpid_hostname:$qpid_port
qpid_password =
qpid_port = 5672
qpid_protocol = tcp
qpid_receiver_capacity = 1
qpid_sasl_mechanisms =
qpid_tcp_nodelay = True
qpid_topology_version = 1
qpid_username =
Configure ZeroMQ
Use these options to configure the ZeroMQ messaging system for OpenStack Oslo RPC. ZeroMQ is not the default messaging system, so you must enable it by setting the rpc_backend option in the heat.conf file:
rpc_backend = heat.openstack.common.rpc.impl_zmq
Description
[DEFAULT]
rpc_zmq_bind_address = *
rpc_zmq_contexts = 1
rpc_zmq_host = localhost
rpc_zmq_ipc_dir = /var/run/openstack
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
(StrOpt) MatchMaker driver.
rpc_zmq_port = 9501
rpc_zmq_topic_backlog = None
Configure messaging
Use these common options to configure the RabbitMQ, Qpid, and ZeroMQ messaging
drivers:
Description
[DEFAULT]
amqp_auto_delete = False
amqp_durable_queues = False
control_exchange = openstack
default_notification_level = INFO
default_publisher_id = None
list_notifier_drivers = None
notification_driver = []
notification_topics = notifications
transport_url = None
Description
[DEFAULT]
engine_life_check_timeout = 2
matchmaker_heartbeat_freq = 300
matchmaker_heartbeat_ttl = 600
rpc_backend = heat.openstack.common.rpc.impl_kombu
rpc_cast_timeout = 30
rpc_conn_pool_size = 30
rpc_response_timeout = 60
rpc_thread_pool_size = 64
Description
[DEFAULT]
onready = None
(StrOpt) Deprecated.
New options
[DEFAULT] action_retry_limit = 5
[DEFAULT] cloud_backend = heat.engine.clients.OpenStackClients
[DEFAULT] kombu_reconnect_delay = 1.0
(FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
[DEFAULT] num_engine_workers = 1
[DEFAULT] qpid_receiver_capacity = 1
[DEFAULT] stack_user_domain_name
(StrOpt) Keystone domain name which contains heat template-defined users. If the stack_user_domain_id option is set, this option is ignored.
[database] db_max_retries = 20
(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.
[database] db_max_retry_interval = 10
(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
[database] db_retry_interval = 1
[keystone_authtoken] check_revocations_for_cached = False
New default values
[DEFAULT] control_exchange: heat -> openstack
[DEFAULT] default_log_levels: amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, iso8601=WARN -> amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN
[DEFAULT] list_notifier_drivers: heat.openstack.common.notifier.no_op_notifier -> None
[DEFAULT] rpc_zmq_matchmaker: heat.openstack.common.rpc.matchmaker.MatchMakerLocalhost -> oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
[database] connection: sqlite:////home/gpocentek/Workspace/OpenStack/openstack-doc-tools/autogenerate_config_docs/sources/heat/heat/openstack/common/db/$sqlite_db -> None
[database] slave_connection: (empty) -> None
[keystone_authtoken] revocation_cache_time: 300 -> 10
Table 10.35. Deprecated options
Deprecated option -> New option
[DEFAULT] db_backend -> [database] backend
[DEFAULT] stack_user_domain -> [DEFAULT] stack_user_domain_id
[rpc_notifier2] topics -> [DEFAULT] notification_topics
11. Telemetry
Table of Contents
Telemetry sample configuration files
New, updated and deprecated options in Juno for Telemetry
The Telemetry service collects measurements within OpenStack. Its various agents and services are configured in the /etc/ceilometer/ceilometer.conf file.
To install Telemetry, see the OpenStack Installation Guide for your distribution
(docs.openstack.org).
The following tables provide a comprehensive list of the Telemetry configuration options.
Description
[alarm]
evaluation_interval = 60
(IntOpt) Period of evaluation cycle; should be greater than or equal to the configured pipeline interval for collection of underlying metrics.
notifier_rpc_topic = alarm_notifier
partition_rpc_topic = alarm_partition_coordination
project_alarm_quota = None
record_history = True
rest_notifier_certificate_file =
rest_notifier_certificate_key =
rest_notifier_max_retries = 0
rest_notifier_ssl_verify = True
user_alarm_quota = None
Description
[DEFAULT]
amqp_auto_delete = False
amqp_durable_queues = False
control_exchange = openstack
notification_driver = []
notification_topics = notifications
transport_url = None
Description
[DEFAULT]
api_paste_config = api_paste.ini
pipeline_cfg_file = pipeline.yaml
policy_default_rule = default
policy_file = policy.json
reserved_metadata_length = 256
reserved_metadata_namespace = metering.
[api]
enable_reverse_dns_lookup = False
host = 0.0.0.0
pecan_debug = False
port = 8777
Description
[service_credentials]
insecure = False
os_auth_url = http://localhost:5000/v2.0
os_cacert = None
os_endpoint_type = publicURL
os_password = admin
os_region_name = None
os_tenant_id =
os_tenant_name = admin
os_username = ceilometer
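As an illustration of the options above, the credentials that the Telemetry polling agents use to query other services might be set as follows in ceilometer.conf; the URL and password are placeholders:

```ini
[service_credentials]
# Identity endpoint and account the agents authenticate with.
os_auth_url = http://controller:5000/v2.0
os_username = ceilometer
os_tenant_name = service
os_password = CEILOMETER_PASS
os_endpoint_type = publicURL
```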
Description
[keystone_authtoken]
admin_password = None
admin_tenant_name = admin
admin_token = None
(StrOpt) This option is deprecated and may be removed in a future release. Single shared secret with the Keystone configuration used for bootstrapping a Keystone installation, or otherwise bypassing the normal authentication process. This option should not be used, use `admin_user` and `admin_password` instead.
admin_user = None
auth_admin_prefix =
auth_host = 127.0.0.1
auth_port = 35357
(IntOpt) Port of the admin Identity API endpoint. Deprecated, use identity_uri.
auth_protocol = https
auth_uri = None
auth_version = None
cache = None
cafile = None
certfile = None
check_revocations_for_cached = False
delay_auth_decision = False
enforce_token_bind = permissive
(StrOpt) Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding.
"permissive" (default) to validate binding information if
the bind type is of a form known to the server and ignore
it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of
token binding is needed to be allowed. Finally the name of
a binding method that must be present in tokens.
hash_algorithms = md5
http_connect_timeout = None
http_request_max_retries = 3
identity_uri = None
include_service_catalog = True
(BoolOpt) (optional) indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header.
insecure = False
keyfile = None
memcache_secret_key = None
memcache_security_strategy = None
(StrOpt) (optional) if defined, indicate whether token data should be authenticated or authenticated and encrypted. Acceptable values are MAC or ENCRYPT. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the
cache. If the value is not one of these options or empty,
auth_token will raise an exception on initialization.
revocation_cache_time = 10
signing_dir = None
token_cache_time = 300
(IntOpt) In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens
for a configurable duration (in seconds). Set to -1 to disable caching completely.
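A typical [keystone_authtoken] block in ceilometer.conf, using the identity_uri style rather than the deprecated auth_host/auth_port/auth_protocol options, might look like this sketch; the controller host name and password are placeholders:

```ini
[keystone_authtoken]
# Public Identity API endpoint, used in the WWW-Authenticate header.
auth_uri = http://controller:5000/v2.0
# Admin Identity API endpoint; replaces auth_host/auth_port/auth_protocol.
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = ceilometer
admin_password = CEILOMETER_PASS
```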
Description
[DEFAULT]
collector_workers = 1
[collector]
requeue_sample_on_dispatcher_error = False
udp_address = 0.0.0.0
udp_port = 4952
[dispatcher_file]
backup_count = 0
file_path = None
max_bytes = 0
Description
[DEFAULT]
host = localhost
lock_path = None
memcached_servers = None
notification_workers = 1
(IntOpt) Number of workers for notification service. A single notification agent is enabled by default.
rootwrap_config = /etc/ceilometer/rootwrap.conf
[central]
partitioning_group_prefix = None
[compute]
workload_partitioning = False
(BoolOpt) Enable work-load partitioning, allowing multiple compute agents to be run simultaneously.
[coordination]
backend_url = None
(StrOpt) The backend URL to use for distributed coordination. If left empty, per-deployment central agent and per-host compute agent won't do workload partitioning and will only function correctly if a single instance of that service is running.
heartbeat = 1.0
[keystone_authtoken]
memcached_servers = None
Description
[DEFAULT]
database_connection = None
[database]
alarm_connection = None
backend = sqlalchemy
connection = None
connection_debug = 0
connection_trace = False
db_inc_retry_interval = True
db_max_retries = 20
(IntOpt) Maximum database connection retries before error is raised. Set to -1 to specify an infinite retry count.
db_max_retry_interval = 10
(IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
db_retry_interval = 1
idle_timeout = 3600
max_overflow = None
max_pool_size = None
max_retries = 10
metering_connection = None
min_pool_size = 1
mysql_sql_mode = TRADITIONAL
pool_timeout = None
retry_interval = 10
slave_connection = None
(StrOpt) The SQLAlchemy connection string to use to connect to the slave database.
sqlite_db = oslo.sqlite
sqlite_synchronous = True
time_to_live = -1
use_db_reconnect = False
use_tpool = False
Description
[DEFAULT]
backdoor_port = None
disable_process_locking = False
nova_http_log_debug = False
Description
[event]
definitions_cfg_file = event_definitions.yaml
drop_unmatched_notifications = False
(BoolOpt) Drop notifications if no event definition matches. (Otherwise, we convert them with just the default
traits)
[notification]
ack_on_event_error = True
store_events = False
Description
[DEFAULT]
cinder_control_exchange = cinder
glance_control_exchange = glance
heat_control_exchange = heat
ironic_exchange = ironic
keystone_control_exchange = keystone
neutron_control_exchange = neutron
nova_control_exchange = nova
sahara_control_exchange = sahara
sample_source = openstack
trove_control_exchange = trove
Description
[DEFAULT]
glance_page_size = 0
Description
[DEFAULT]
hypervisor_inspector = libvirt
libvirt_type = kvm
libvirt_uri =
Description
[ipmi]
node_manager_init_retry = 3
Description
[DEFAULT]
debug = False
(BoolOpt) Print debugging output (set logging level to DEBUG instead of default WARNING level).
default_log_levels = amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN
fatal_deprecations = False
fatal_exception_format_errors = False
instance_name_template = instance-%08x
instance_usage_audit_period = month
log_config_append = None
log_dir = None
(StrOpt) (Optional) The base directory used for relative --log-file paths.
log_file = None
(StrOpt) (Optional) Name of log file to output to. If no default is set, logging will go to stdout.
log_format = None
(StrOpt) DEPRECATED. A logging.Formatter log message format string which may use any of the available
logging.LogRecord attributes. This option is deprecated. Please use logging_context_format_string and
logging_default_format_string instead.
logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
(StrOpt) Format string to use for log messages without context.
logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
publish_errors = False
syslog_log_facility = LOG_USER
use_stderr = True
use_syslog = False
use_syslog_rfc_format = False
verbose = False
Description
[DEFAULT]
enable_new_services = True
monkey_patch = False
monkey_patch_modules =
nova.api.ec2.cloud:nova.notifications.notify_decorator,
nova.compute.api:nova.notifications.notify_decorator
network_api_class = nova.network.api.API
password_length = 12
snapshot_name_template = snapshot-%s
Description
[cells]
bandwidth_update_interval = 600
call_timeout = 60
capabilities = hypervisor=xenserver;kvm, os=linux;windows
(ListOpt) Key/Multi-value list with the capabilities of the cell.
cell_type = compute
enable = False
manager = nova.cells.manager.CellsManager
mute_child_interval = 300
(IntOpt) Number of seconds after which a lack of capability and capacity updates signals the child cell is to be treated as a mute.
name = nova
reserve_percent = 10.0
topic = cells
[upgrade_levels]
cells = None
Description
[DEFAULT]
qpid_heartbeat = 60
qpid_hostname = localhost
qpid_hosts = $qpid_hostname:$qpid_port
qpid_password =
qpid_port = 5672
qpid_protocol = tcp
qpid_receiver_capacity = 1
qpid_sasl_mechanisms =
qpid_tcp_nodelay = True
qpid_topology_version = 1
qpid_username =
Description
[DEFAULT]
kombu_reconnect_delay = 1.0
(FloatOpt) How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_ssl_ca_certs =
kombu_ssl_certfile =
kombu_ssl_keyfile =
kombu_ssl_version =
rabbit_ha_queues = False
rabbit_host = localhost
rabbit_hosts = $rabbit_host:$rabbit_port
rabbit_login_method = AMQPLAIN
rabbit_max_retries = 0
(IntOpt) Maximum number of RabbitMQ connection retries. Default is 0 (infinite retry count).
rabbit_password = guest
rabbit_port = 5672
rabbit_retry_backoff = 2
rabbit_retry_interval = 1
rabbit_use_ssl = False
rabbit_userid = guest
rabbit_virtual_host = /
Description
[matchmaker_redis]
host = 127.0.0.1
password = None
port = 6379
Description
[matchmaker_ring]
ringfile = /etc/oslo/matchmaker_ring.json
Description
[DEFAULT]
filters_path = /etc/ceilometer/rootwrap.d,/usr/share/ceilometer/rootwrap
List of directories to load filter definitions from (separated by ','). These directories MUST all be only writable by root!
exec_dirs = /sbin,/usr/sbin,/bin,/usr/bin
use_syslog = False
syslog_log_facility = syslog
Which syslog facility to use. Valid values include auth, authpriv, syslog, user0, user1... Default value is 'syslog'
syslog_log_level = ERROR
Description
[DEFAULT]
dispatcher = ['database']
matchmaker_heartbeat_freq = 300
matchmaker_heartbeat_ttl = 600
rpc_backend = ceilometer.openstack.common.rpc.impl_kombu
rpc_cast_timeout = 30
rpc_conn_pool_size = 30
rpc_response_timeout = 60
rpc_thread_pool_size = 64
[notification]
messaging_urls = []
[publisher]
metering_secret = change this or be hacked
[publisher_notifier]
metering_driver = messagingv2
metering_topic = metering
[publisher_rpc]
metering_topic = metering
Description
[service_types]
glance = image
kwapi = energy
neutron = network
nova = compute
swift = object-store
Description
[DEFAULT]
reseller_prefix = AUTH_
Description
[DEFAULT]
fake_rabbit = False
Description
[hardware]
readonly_user_name = ro_snmp_user
readonly_user_password = password
url_scheme = snmp://
Description
[vmware]
api_retry_count = 10
host_ip =
host_password =
host_username =
task_poll_interval = 0.5
wsdl_location = None
Description
[xenapi]
connection_password = None
connection_url = None
connection_username = root
login_timeout = 10
Description
[DEFAULT]
rpc_zmq_bind_address = *
rpc_zmq_contexts = 1
rpc_zmq_host = localhost
rpc_zmq_ipc_dir = /var/run/openstack
rpc_zmq_matchmaker = oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
(StrOpt) MatchMaker driver.
rpc_zmq_port = 9501
rpc_zmq_topic_backlog = None
ceilometer.conf
The configuration for the Telemetry services and agents is found in the
ceilometer.conf file.
This file must be modified after installation.
[DEFAULT]
#
# Options defined in ceilometer.middleware
#
# Exchanges name to listen for notifications. (multi valued)
#http_control_exchanges=nova
#http_control_exchanges=glance
#http_control_exchanges=neutron
#http_control_exchanges=cinder
#
# Options defined in ceilometer.pipeline
#
# Configuration file for pipeline definition. (string value)
#pipeline_cfg_file=pipeline.yaml
#
# Options defined in ceilometer.sample
#
# Source for samples emitted on this instance. (string value)
# Deprecated group/name - [DEFAULT]/counter_source
#sample_source=openstack
#
# Options defined in ceilometer.service
#
# Name of this node, which must be valid in an AMQP key. Can
# be an opaque identifier. For ZeroMQ only, must be a valid
# host name, FQDN, or IP address. (string value)
#host=ceilometer
# Dispatcher to process data. (multi valued)
#dispatcher=database
# Number of workers for collector service. A single
# collector is enabled by default. (integer value)
#collector_workers=1
# Number of workers for notification service. A single
# notification agent is enabled by default. (integer value)
#notification_workers=1
#
# Options defined in ceilometer.api.app
#
# The strategy to use for auth: noauth or keystone. (string
# value)
#auth_strategy=keystone
# Deploy the deprecated v1 API. (boolean value)
#enable_v1_api=true
#
# Options defined in ceilometer.compute.notifications
#
# Exchange name for Nova notifications. (string value)
#nova_control_exchange=nova
#
# Options defined in ceilometer.compute.util
#
#
# Options defined in ceilometer.compute.virt.inspector
#
# Inspector to use for inspecting the hypervisor layer.
# (string value)
#hypervisor_inspector=libvirt
#
# Options defined in ceilometer.compute.virt.libvirt.inspector
#
# Libvirt domain type (valid options are: kvm, lxc, qemu, uml,
# xen). (string value)
#libvirt_type=kvm
# Override the default libvirt URI (which is dependent on
# libvirt_type). (string value)
#libvirt_uri=
#
# Options defined in ceilometer.image.notifications
#
# Exchange name for Glance notifications. (string value)
#glance_control_exchange=glance
#
# Options defined in ceilometer.network.notifications
#
# Exchange name for Neutron notifications. (string value)
# Deprecated group/name - [DEFAULT]/quantum_control_exchange
#neutron_control_exchange=neutron
#
# Options defined in ceilometer.objectstore.swift
#
# Swift reseller prefix. Must be on par with reseller_prefix
# in proxy-server.conf. (string value)
#reseller_prefix=AUTH_
#
# Options defined in ceilometer.openstack.common.db.sqlalchemy.session
#
#
# Options defined in ceilometer.openstack.common.eventlet_backdoor
#
# Enable eventlet backdoor. Acceptable values are 0, <port>,
# and <start>:<end>, where 0 results in listening on a random
# tcp port number; <port> results in listening on the
# specified port number (and not enabling backdoor if that
# port is in use); and <start>:<end> results in listening on
# the smallest unused port number within the specified range
# of port numbers. The chosen port is displayed in the
# service's log file. (string value)
#backdoor_port=<None>
#
# Options defined in ceilometer.openstack.common.lockutils
#
# Whether to disable inter-process locks. (boolean value)
#disable_process_locking=false
# Directory to use for lock files. (string value)
#lock_path=<None>
#
# Options defined in ceilometer.openstack.common.log
#
# Print debugging output (set logging level to DEBUG instead
# of default WARNING level). (boolean value)
#debug=false
# Print more verbose output (set logging level to INFO instead
# of default WARNING level). (boolean value)
#verbose=false
# Log output to standard error (boolean value)
#use_stderr=true
# Format string to use for log messages with context (string
# value)
#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages without context
# (string value)
#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Data to append to log format when level is DEBUG (string
# value)
#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format
# (string value)
#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
# List of logger=LEVEL pairs (list value)
#default_log_levels=amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN
# Publish error events (boolean value)
#publish_errors=false
# Make deprecations fatal (boolean value)
#fatal_deprecations=false
# If an instance is passed with the log message, format it
# like this (string value)
#instance_format="[instance: %(uuid)s] "
# If an instance UUID is passed with the log message, format
# it like this (string value)
#instance_uuid_format="[instance: %(uuid)s] "
# The name of logging configuration file. It does not disable
# existing loggers, but just appends specified logging
# configuration to any other existing logging options. Please
# see the Python logging module documentation for details on
# logging configuration files. (string value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append=<None>
# DEPRECATED. A logging.Formatter log message format string
# which may use any of the available logging.LogRecord
# attributes. This option is deprecated. Please use
# logging_context_format_string and
# logging_default_format_string instead. (string value)
#log_format=<None>
# Format string for %%(asctime)s in log records. Default:
# %(default)s (string value)
#log_date_format=%Y-%m-%d %H:%M:%S
# (Optional) Name of log file to output to. If no default is
# set, logging will go to stdout. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file=<None>
# (Optional) The base directory used for relative --log-file
# paths (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir=<None>
# Use syslog for logging. Existing syslog format is DEPRECATED
# during I, and then will be changed in J to honor RFC5424
# (boolean value)
#use_syslog=false
#
# Options defined in ceilometer.openstack.common.middleware.sizelimit
#
# The maximum body size per request, in bytes (integer value)
# Deprecated group/name - [DEFAULT]/osapi_max_request_body_size
#max_request_body_size=114688
#
# Options defined in ceilometer.openstack.common.notifier.api
#
# Driver or drivers to handle sending notifications (multi
# valued)
#notification_driver=
# Default notification level for outgoing notifications
# (string value)
#default_notification_level=INFO
# Default publisher_id for outgoing notifications (string
# value)
#default_publisher_id=<None>
#
# Options defined in ceilometer.openstack.common.notifier.rpc_notifier
#
# AMQP topic used for OpenStack notifications (list value)
#notification_topics=notifications
#
# Options defined in ceilometer.openstack.common.policy
#
# JSON file containing policy (string value)
#policy_file=policy.json
# Rule enforced when requested rule is not found (string
# value)
#policy_default_rule=default
#
# Options defined in ceilometer.openstack.common.rpc
#
#
# Options defined in ceilometer.openstack.common.rpc.amqp
#
# Use durable queues in amqp. (boolean value)
# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
#amqp_durable_queues=false
# Auto-delete queues in amqp. (boolean value)
#amqp_auto_delete=false
#
# Options defined in ceilometer.openstack.common.rpc.impl_kombu
#
# If SSL is enabled, the SSL version to use. Valid values are
# TLSv1, SSLv23 and SSLv3. SSLv2 might be available on some
# distributions. (string value)
#kombu_ssl_version=
# SSL key file (valid only if SSL enabled) (string value)
#kombu_ssl_keyfile=
# SSL cert file (valid only if SSL enabled) (string value)
#kombu_ssl_certfile=
# SSL certification authority file (valid only if SSL enabled)
# (string value)
#kombu_ssl_ca_certs=
#
# Options defined in ceilometer.openstack.common.rpc.impl_qpid
#
# Qpid broker hostname (string value)
#qpid_hostname=localhost
# Qpid broker port (integer value)
#qpid_port=5672
# Qpid HA cluster host:port pairs (list value)
#qpid_hosts=$qpid_hostname:$qpid_port
# Username for qpid connection (string value)
#qpid_username=
# Password for qpid connection (string value)
#qpid_password=
#
# Options defined in ceilometer.openstack.common.rpc.impl_zmq
#
# ZeroMQ bind address. Should be a wildcard (*), an ethernet
# interface, or IP. The "host" option should point or resolve
# to this address. (string value)
#rpc_zmq_bind_address=*
# MatchMaker driver (string value)
#rpc_zmq_matchmaker=ceilometer.openstack.common.rpc.matchmaker.MatchMakerLocalhost
# ZeroMQ receiver listening port (integer value)
#rpc_zmq_port=9501
# Number of ZeroMQ contexts, defaults to 1 (integer value)
#rpc_zmq_contexts=1
# Maximum number of ingress messages to locally buffer per
# topic. Default is unlimited. (integer value)
#rpc_zmq_topic_backlog=<None>
# Directory for holding IPC sockets (string value)
#rpc_zmq_ipc_dir=/var/run/openstack
# Name of this node. Must be a valid hostname, FQDN, or IP
# address. Must match "host" option, if running Nova. (string
# value)
#rpc_zmq_host=ceilometer
#
# Options defined in ceilometer.openstack.common.rpc.matchmaker
#
#
# Options defined in ceilometer.orchestration.notifications
#
# Exchange name for Heat notifications (string value)
#heat_control_exchange=heat
#
# Options defined in ceilometer.storage
#
# DEPRECATED - Database connection string. (string value)
#database_connection=<None>
#
# Options defined in ceilometer.storage.sqlalchemy.models
#
# MySQL engine to use. (string value)
#mysql_engine=InnoDB
#
# Options defined in ceilometer.volume.notifications
#
# Exchange name for Cinder notifications. (string value)
#cinder_control_exchange=cinder
[alarm]
#
# Options defined in ceilometer.cli
#
# Class to launch as alarm evaluation service. (string value)
#evaluation_service=ceilometer.alarm.service.SingletonAlarmService
#
# Options defined in ceilometer.alarm.notifier.rest
#
# SSL Client certificate for REST notifier. (string value)
#rest_notifier_certificate_file=
# SSL Client private key for REST notifier. (string value)
#rest_notifier_certificate_key=
# Whether to verify the SSL Server certificate when calling
#
# Options defined in ceilometer.alarm.rpc
#
# The topic that ceilometer uses for alarm notifier messages.
# (string value)
#notifier_rpc_topic=alarm_notifier
# The topic that ceilometer uses for alarm partition
# coordination messages. (string value)
#partition_rpc_topic=alarm_partition_coordination
#
# Options defined in ceilometer.alarm.service
#
# Period of evaluation cycle, should be >= than configured
# pipeline interval for collection of underlying metrics.
# (integer value)
# Deprecated group/name - [alarm]/threshold_evaluation_interval
#evaluation_interval=60
#
# Options defined in ceilometer.api.controllers.v2
#
# Record alarm change events. (boolean value)
#record_history=true
[api]
#
# Options defined in ceilometer.api
#
# The port for the ceilometer API server. (integer value)
# Deprecated group/name - [DEFAULT]/metering_api_port
#port=8777
# The listen IP for the ceilometer API server. (string value)
#host=0.0.0.0
[collector]
#
# Options defined in ceilometer.collector
#
# Address to which the UDP socket is bound. Set to an empty
# string to disable. (string value)
#udp_address=0.0.0.0
[database]
#
# Options defined in ceilometer.openstack.common.db.api
#
# The backend to use for db (string value)
# Deprecated group/name - [DEFAULT]/db_backend
#backend=sqlalchemy
#
# Options defined in ceilometer.openstack.common.db.sqlalchemy.session
#
# The SQLAlchemy connection string used to connect to the
# database (string value)
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
#connection=sqlite:////ceilometer/openstack/common/db/$sqlite_db
# The SQLAlchemy connection string used to connect to the
# slave database (string value)
#slave_connection=
# Timeout before idle sql connections are reaped (integer
# value)
# Deprecated group/name - [DEFAULT]/sql_idle_timeout
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout=3600
#
# Options defined in ceilometer.storage
#
# Number of seconds that samples are kept in the database for
# (<= 0 means forever). (integer value)
#time_to_live=-1
[dispatcher_file]
#
# Options defined in ceilometer.dispatcher.file
#
# Name and the location of the file to record meters. (string
# value)
#file_path=<None>
# The max size of the file. (integer value)
#max_bytes=0
# The max number of the files to keep. (integer value)
#backup_count=0
[event]
#
# Options defined in ceilometer.event.converter
#
# Configuration file for event definitions. (string value)
#definitions_cfg_file=event_definitions.yaml
# Drop notifications if no event definition matches.
# (Otherwise, we convert them with just the default traits)
# (boolean value)
#drop_unmatched_notifications=false
[keystone_authtoken]
#
# Options defined in keystoneclient.middleware.auth_token
#
# Prefix to prepend at the beginning of the path (string
# value)
#auth_admin_prefix=
# Host providing the admin Identity API endpoint (string
# value)
#auth_host=127.0.0.1
# Port of the admin Identity API endpoint (integer value)
#auth_port=35357
# Protocol of the admin Identity API endpoint(http or https)
# (string value)
#auth_protocol=https
# Complete public Identity API endpoint (string value)
#auth_uri=<None>
# API version of the admin Identity API endpoint (string
# value)
#auth_version=<None>
# Do not handle authorization requests within the middleware,
# but delegate the authorization decision to downstream WSGI
# components (boolean value)
#delay_auth_decision=false
# Request timeout value for communicating with Identity API
# server. (boolean value)
#http_connect_timeout=<None>
# How many times are we trying to reconnect when communicating
# with Identity API Server. (integer value)
#http_request_max_retries=3
# Allows to pass in the name of a fake http_handler callback
# function used instead of httplib.HTTPConnection or
# httplib.HTTPSConnection. Useful for unit testing where
# network is not available. (string value)
#http_handler=<None>
# Single shared secret with the Keystone configuration used
# for bootstrapping a Keystone installation, or otherwise
# bypassing the normal authentication process. (string value)
#admin_token=<None>
# Keystone account username (string value)
#admin_user=<None>
# Keystone account password (string value)
#admin_password=<None>
# Keystone service account tenant name to validate user tokens
# (string value)
#admin_tenant_name=admin
# Env key for the swift cache (string value)
#cache=<None>
# Required if Keystone server requires client certificate
# (string value)
#certfile=<None>
# Required if Keystone server requires client certificate
# (string value)
#keyfile=<None>
# A PEM encoded Certificate Authority to use when verifying
# HTTPS connections. Defaults to system CAs. (string value)
#cafile=<None>
# Verify HTTPS connections. (boolean value)
#insecure=false
# Directory used to cache files related to PKI tokens (string
# value)
#signing_dir=<None>
# If defined, the memcache server(s) to use for caching (list
# value)
# Deprecated group/name - [DEFAULT]/memcache_servers
#memcached_servers=<None>
# In order to prevent excessive requests and validations, the
# middleware uses an in-memory cache for the tokens the
# Keystone API returns. This is only valid if memcache_servers
# is defined. Set to -1 to disable caching completely.
# (integer value)
#token_cache_time=300
# Value only used for unit testing (integer value)
#revocation_cache_time=1
# (optional) if defined, indicate whether token data should be
# authenticated or authenticated and encrypted. Acceptable
# values are MAC or ENCRYPT. If MAC, token data is
# authenticated (with HMAC) in the cache. If ENCRYPT, token
# data is encrypted and authenticated in the cache. If the
# value is not one of these options or empty, auth_token will
# raise an exception on initialization. (string value)
#memcache_security_strategy=<None>
# (optional, mandatory if memcache_security_strategy is
# defined) this string is used for key derivation. (string
# value)
#memcache_secret_key=<None>
# (optional) indicate whether to set the X-Service-Catalog
# header. If False, middleware will not ask for service
# catalog on token validation and will not set the X-Service-
[matchmaker_redis]
#
# Options defined in ceilometer.openstack.common.rpc.matchmaker_redis
#
# Host to locate redis (string value)
#host=127.0.0.1
# Use this port to connect to redis host. (integer value)
#port=6379
# Password for Redis server. (optional) (string value)
#password=<None>
[matchmaker_ring]
#
# Options defined in ceilometer.openstack.common.rpc.matchmaker_ring
#
# Matchmaker ring file (JSON) (string value)
# Deprecated group/name - [DEFAULT]/matchmaker_ringfile
#ringfile=/etc/oslo/matchmaker_ring.json
[notification]
#
# Options defined in ceilometer.notification
#
# Acknowledge message when event persistence fails. (boolean
# value)
#ack_on_event_error=true
# Save event details. (boolean value)
#store_events=false
[publisher]
#
# Options defined in ceilometer.publisher.utils
#
# Secret value for signing metering messages. (string value)
# Deprecated group/name - [DEFAULT]/metering_secret
# Deprecated group/name - [publisher_rpc]/metering_secret
#metering_secret=change this or be hacked
[publisher_rpc]
#
# Options defined in ceilometer.publisher.rpc
#
# The topic that ceilometer uses for metering messages.
# (string value)
#metering_topic=metering
[rpc_notifier2]
#
# Options defined in ceilometer.openstack.common.notifier.rpc_notifier2
#
# AMQP topic(s) used for OpenStack notifications (list value)
#topics=notifications
[service_credentials]
#
# Options defined in ceilometer.service
#
# User name to use for OpenStack service access. (string
# value)
#os_username=ceilometer
# Password to use for OpenStack service access. (string value)
#os_password=admin
# Tenant ID to use for OpenStack service access. (string
# value)
#os_tenant_id=
# Tenant name to use for OpenStack service access. (string
# value)
#os_tenant_name=admin
# Certificate chain for SSL validation. (string value)
#os_cacert=<None>
# Auth URL to use for OpenStack service access. (string value)
#os_auth_url=http://localhost:5000/v2.0
# Region name to use for OpenStack service endpoints. (string
# value)
#os_region_name=<None>
[ssl]
#
# Options defined in ceilometer.openstack.common.sslutils
#
# CA certificate file to use to verify connecting clients
# (string value)
#ca_file=<None>
# Certificate file to use when starting the server securely
# (string value)
#cert_file=<None>
# Private key file to use when starting the server securely
# (string value)
#key_file=<None>
[vmware]
#
# Options defined in ceilometer.compute.virt.vmware.inspector
#
# IP address of the VMware Vsphere host (string value)
#host_ip=
# Username of VMware Vsphere (string value)
#host_username=
# Password of VMware Vsphere (string value)
#host_password=
# Number of times a VMware Vsphere API must be retried
# (integer value)
#api_retry_count=10
# Sleep time in seconds for polling an ongoing async task
# (floating point value)
#task_poll_interval=0.5
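The [publisher] metering_secret option above is the shared secret used to sign metering messages so the collector can verify who sent them. A minimal sketch of that style of signing follows; the function names, the exact field ordering, and the digest layout here are illustrative assumptions, not ceilometer's exact wire format:

```python
import hashlib
import hmac

def compute_signature(message, secret):
    """Sign a metering message with a shared secret.

    HMAC-SHA256 over the sorted key/value pairs, skipping any existing
    signature field so that verification can recompute it.
    """
    digest = hmac.new(secret.encode("utf-8"), digestmod=hashlib.sha256)
    for name, value in sorted(message.items()):
        if name == "message_signature":
            continue
        digest.update(("%s:%s" % (name, value)).encode("utf-8"))
    return digest.hexdigest()

def verify_signature(message, secret):
    """Recompute the signature and compare it in constant time."""
    expected = compute_signature(message, secret)
    return hmac.compare_digest(message.get("message_signature", ""), expected)
```

A publisher attaches the signature before sending and the receiver recomputes it, which is why the shipped default secret, "change this or be hacked", must be replaced in any real deployment.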
event_definitions.yaml
The event_definitions.yaml file defines how notifications received from other OpenStack
components should be translated into Telemetry events.
      fields: payload.audit_period_beginning
    audit_period_ending:
      type: datetime
      fields: payload.audit_period_ending
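The excerpt above preserves only the tail of one definition. For orientation, a minimal definition with the same shape looks like the following; the event type and the instance_id trait are illustrative, not taken from the shipped file:

```yaml
---
- event_type: compute.instance.exists
  traits:
    instance_id:
      fields: payload.instance_id
    audit_period_beginning:
      type: datetime
      fields: payload.audit_period_beginning
    audit_period_ending:
      type: datetime
      fields: payload.audit_period_ending
```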
pipeline.yaml
Pipelines describe a coupling between sources of samples and the corresponding sinks for
transformation and publication of the data. They are defined in the pipeline.yaml file.
You should not need to modify this file.
---
sources:
    - name: meter_source
      interval: 600
      meters:
          - "*"
      sinks:
          - meter_sink
    - name: cpu_source
      interval: 600
      meters:
          - "cpu"
      sinks:
          - cpu_sink
    - name: disk_source
      interval: 600
      meters:
          - "disk.read.bytes"
          - "disk.read.requests"
          - "disk.write.bytes"
          - "disk.write.requests"
      sinks:
          - disk_sink
    - name: network_source
      interval: 600
      meters:
          - "network.incoming.bytes"
          - "network.incoming.packets"
          - "network.outgoing.bytes"
          - "network.outgoing.packets"
      sinks:
          - network_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - rpc://
    - name: cpu_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                target:
                    name: "cpu_util"
                    unit: "%"
                    type: "gauge"
                    scale: "100.0 / (10**9 * (resource_metadata.cpu_number or 1))"
      publishers:
          - rpc://
    - name: disk_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                source:
                    map_from:
                        name: "disk\\.(read|write)\\.(bytes|requests)"
                        unit: "(B|request)"
                target:
                    map_to:
                        name: "disk.\\1.\\2.rate"
                        unit: "\\1/s"
                    type: "gauge"
      publishers:
          - rpc://
    - name: network_sink
      transformers:
          - name: "rate_of_change"
            parameters:
                source:
                    map_from:
                        name: "network\\.(incoming|outgoing)\\.(bytes|packets)"
                        unit: "(B|packet)"
                target:
                    map_to:
                        name: "network.\\1.\\2.rate"
                        unit: "\\1/s"
                    type: "gauge"
      publishers:
          - rpc://
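The scale expression in cpu_sink turns two successive cumulative cpu samples (nanoseconds of CPU time consumed) into the cpu_util percentage gauge. The arithmetic the rate_of_change transformer applies can be sketched as follows; the function and variable names are illustrative:

```python
def cpu_util(prev_ns, cur_ns, elapsed_s, cpu_number=1):
    """Percentage CPU utilization over one sampling window.

    prev_ns/cur_ns: cumulative CPU time in nanoseconds at the two
    sample points; elapsed_s: wall-clock seconds between them.
    """
    rate_ns_per_s = (cur_ns - prev_ns) / elapsed_s
    # scale = 100.0 / (10**9 * cpu_number): nanoseconds of CPU time
    # per second -> percent of the instance's total vCPU capacity.
    return rate_ns_per_s * (100.0 / (10**9 * (cpu_number or 1)))

# A 2-vCPU instance that consumed 300e9 ns of CPU time over a 600 s
# window ran at 25% utilization.
print(cpu_util(0, 300 * 10**9, 600, cpu_number=2))  # 25.0
```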
policy.json
The policy.json file defines additional access controls that apply to the Telemetry service.
{
    "context_is_admin": [["role:admin"]]
}
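In this older list-of-lists policy syntax, the outer list ORs alternative clauses and each inner list ANDs its checks, so the rule above grants the admin context to any request whose token carries the admin role. A sketch of that evaluation, not the oslo policy engine itself:

```python
def check_rule(rule, roles):
    """Evaluate a [[...], [...]] policy rule against a set of roles.

    Outer list: any clause may match (OR); inner list: every check in
    the clause must match (AND). Only "role:<name>" checks are handled
    in this sketch.
    """
    def check(term):
        kind, _, value = term.partition(":")
        return kind == "role" and value in roles
    return any(all(check(term) for term in clause) for clause in rule)
```

With the rule above, check_rule([["role:admin"]], {"admin", "member"}) succeeds, while a member-only token fails the check.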
New options

[DEFAULT] glance_page_size = 0
[DEFAULT] monkey_patch_modules = ['nova.api.ec2.cloud:nova.notifications.notify_decorator', 'nova.compute.api:nova.notifications.notify_decorator']
[DEFAULT] password_length = 12
[DEFAULT] qpid_receiver_capacity = 1
[alarm] rest_notifier_max_retries = 0
    (BoolOpt) Enable workload partitioning, allowing multiple compute agents to be run simultaneously.
    (StrOpt) The backend URL to use for distributed coordination. If left empty, the per-deployment central agent and the per-host compute agent won't do workload partitioning and will only function correctly if a single instance of that service is running.
[database] db_max_retries = 20
    (IntOpt) Maximum database connection retries before an error is raised. Set to -1 to specify an infinite retry count.
[database] db_max_retry_interval = 10
    (IntOpt) If db_inc_retry_interval is set, the maximum seconds between database connection retries.
[database] db_retry_interval = 1
[ipmi] node_manager_init_retry = 3
[keystone_authtoken] check_revocations_for_cached = False
[notification] messaging_urls = []
[xenapi] login_timeout = 10
New default values

Option: [DEFAULT] default_log_levels
Previous default: amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN
New default: amqp=WARN, amqplib=WARN, boto=WARN, qpid=WARN, sqlalchemy=WARN, suds=INFO, oslo.messaging=INFO, iso8601=WARN, requests.packages.urllib3.connectionpool=WARN, urllib3.connectionpool=WARN, websocket=WARN, keystonemiddleware=WARN, routes.middleware=WARN, stevedore=WARN

Option: [DEFAULT] rpc_zmq_matchmaker
Previous default: ceilometer.openstack.common.rpc.matchmaker.MatchMakerLocalhost
New default: oslo.messaging._drivers.matchmaker.MatchMakerLocalhost

Option: [database] connection
Previous default: sqlite:////home/gpocentek/Workspace/OpenStack/openstack-doc-tools/autogenerate_config_docs/sources/ceilometer/ceilometer/openstack/common/db/$sqlite_db
New default: None

Option: [database] slave_connection
Previous default: (empty)
New default: None

Option: [keystone_authtoken] revocation_cache_time
Previous default: 300
New default: 10
Table 11.32. Deprecated options

Deprecated option          New option
[rpc_notifier2] topics     [DEFAULT] notification_topics
Default ports that OpenStack components use:

OpenStack service                                                             Default ports   Port type
Block Storage (cinder)                                                        8776            publicurl and adminurl
Compute (nova) endpoints                                                      8774            publicurl and adminurl
Compute API (nova-api)                                                        8773, 8775
Compute ports for access to virtual machine consoles                          5900-5999
Compute VNC proxy for browsers (openstack-nova-novncproxy)                    6080
Compute VNC proxy for traditional VNC clients (openstack-nova-xvpvncproxy)    6081
Proxy port for HTML5 console used by Compute service                          6082
Identity service (keystone) administrative endpoint                           35357           adminurl
Identity service public endpoint                                              5000            publicurl
Image Service (glance) API                                                    9292            publicurl and adminurl
Image Service registry                                                        9191
Networking (neutron)                                                          9696            publicurl and adminurl
Orchestration (heat) endpoint                                                 8004            publicurl and adminurl
Orchestration AWS CloudFormation-compatible API (openstack-heat-api-cfn)      8000
Orchestration AWS CloudWatch-compatible API (openstack-heat-api-cloudwatch)   8003
Telemetry (ceilometer)                                                        8777            publicurl and adminurl
To function properly, some OpenStack components depend on other, non-OpenStack services. For example, the OpenStack dashboard uses HTTP for non-secure communication. In this case, you must configure the firewall to allow traffic to and from HTTP.

This table lists the ports that other OpenStack components use:

Service          Default port   Used by
HTTP             80             OpenStack dashboard (horizon) when it is not configured to use secure access
HTTP alternate   8080           OpenStack Object Storage (swift) service
HTTPS            443            Any OpenStack service that is enabled for SSL, including the secure-access dashboard
rsync            873            OpenStack Object Storage
iSCSI target     3260           OpenStack Block Storage
MySQL            3306           Most OpenStack components
On some deployments, the default port used by a service may fall within the defined local
port range of a host. To check a host's local port range:
$ sysctl -a | grep ip_local_port_range
If a service's default port falls within this range, run the following command to check
whether the port is already assigned to another application:
$ lsof -i :PORT
Configure the service to use a different port if the default port is already in use by another application.
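The same two checks can be scripted. The sketch below reads the real ephemeral range from procfs, falling back to a commonly seen Linux default when the file is absent; the function names and the fallback bounds are assumptions for illustration:

```python
import socket

def in_local_port_range(port, path="/proc/sys/net/ipv4/ip_local_port_range"):
    """True if port falls inside the host's ephemeral (local) port range."""
    try:
        with open(path) as f:
            lo, hi = (int(v) for v in f.read().split())
    except OSError:
        lo, hi = 32768, 61000  # common default when the sysctl is unavailable
    return lo <= port <= hi

def port_in_use(port, host="127.0.0.1"):
    """True if something is already listening on host:port (like lsof -i)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        return sock.connect_ex((host, port)) == 0
```

If in_local_port_range() is true for a service's default port, or port_in_use() reports a collision, configure the service to listen on a different port.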
Appendix B. Community support

Table of Contents
Documentation .......................................................................................................... 598
ask.openstack.org ..................................................................................................... 599
OpenStack mailing lists .............................................................................................. 599
The OpenStack wiki ................................................................................................... 600
The Launchpad Bugs area ......................................................................................... 600
The OpenStack IRC channel ....................................................................................... 601
Documentation feedback ........................................................................................... 601
OpenStack distribution packages ................................................................................ 601
The following resources are available to help you run and use OpenStack. The OpenStack
community constantly improves and adds to the main features of OpenStack, but if you
have any questions, do not hesitate to ask. Use the following resources to get OpenStack
support, and troubleshoot your installations.
Documentation
For the available OpenStack documentation, see docs.openstack.org.
To provide feedback on documentation, join and use the
<[email protected]> mailing list at OpenStack Documentation
Mailing List, or report a bug.
The following books explain how to install an OpenStack cloud and its associated components:
Installation Guide for Debian 7.0
Installation Guide for openSUSE and SUSE Linux Enterprise Server
Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora
Installation Guide for Ubuntu 14.04 (LTS)
The following books explain how to configure and run an OpenStack cloud:
Architecture Design Guide
Cloud Administrator Guide
Configuration Reference
Operations Guide
High Availability Guide
Security Guide
ask.openstack.org
During setup or testing of OpenStack, you might have questions about how a specific task
is completed, or a feature might not work correctly. Use the ask.openstack.org site to ask
questions and get answers. When you visit the http://ask.openstack.org site, scan the
recently asked questions to see whether your question has already been answered. If not,
ask a new question. Be sure to give a clear, concise summary in the title and provide as
much detail as possible in the description. Paste in your command output or stack traces,
links to screenshots, and any other information that might be useful.
Documentation feedback
To provide feedback on documentation, join and use the
<[email protected]> mailing list at OpenStack Documentation
Mailing List, or report a bug.