OpenStack Storage
The Block Storage (cinder) service manages volumes on storage devices in an environment.
The Object Storage (swift) service implements a highly available, distributed, eventually consistent object/blob
store that is accessible via HTTP/HTTPS.
The Image service (glance) can be configured to store images on a variety of storage back ends supported by
the glance_store drivers.
v OpenStack recognizes two types of storage: ephemeral and persistent.
v Ephemeral storage is storage that is associated only to a specific Compute instance.
v Once that instance is terminated, so is its ephemeral storage.
v This type of storage is useful for basic runtime requirements, such as storing the
instance’s operating system.
v Non-persistent storage is also known as ephemeral storage. As its name suggests, a user who actively uses a virtual machine in the OpenStack environment will lose the associated disks once the VM is terminated.
v When a tenant boots a virtual machine on an OpenStack cluster, a copy of the glance image is downloaded to the compute node. This image is used as the first disk of the Nova instance, which provides the ephemeral storage. Anything stored on this disk is lost once the Nova instance is terminated.
v Ephemeral disks can be created and attached either locally on the storage of the hypervisor host or hosted on external storage by means of an NFS mount.
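To make this concrete, here is a minimal sketch of booting and then terminating an instance with the OpenStack CLI; the image, flavor, network, and instance names are hypothetical placeholders, not values from this guide:
# openstack server create --image cirros-0.5.2 --flavor m1.small --network private-net demo-vm
# openstack server delete demo-vm
The first command boots demo-vm with an ephemeral root disk built from the cached glance image on the compute node; the second command terminates the instance, and its ephemeral disk is discarded with it.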
Persistent storage
Ø The data are stored as binary large objects (blobs) with multiple replicas on the
object storage servers.
Ø The objects are stored in a flat namespace. Unlike a traditional storage system,
they do not preserve any specific structure or a particular hierarchy.
Ø Accessing the object storage is done using an API such as REST or SOAP. Object storage cannot be accessed directly via file protocols such as NFS, SMB, or CIFS.
Ø Object storage is not suitable for high-performance requirements or for structured data that is frequently changed, such as databases.
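As an illustration of this REST-style access, the following sketch uploads and then downloads an object over Swift's HTTP API; the endpoint URL, account, container, and object names are hypothetical placeholders, and $TOKEN is assumed to hold a valid Keystone token:
# curl -X PUT -H "X-Auth-Token: $TOKEN" https://swift.example.com/v1/AUTH_demo/backups/db-dump.tar.gz --data-binary @db-dump.tar.gz
# curl -H "X-Auth-Token: $TOKEN" https://swift.example.com/v1/AUTH_demo/backups/db-dump.tar.gz -o db-dump.tar.gz
The PUT stores the object db-dump.tar.gz in the backups container and the GET retrieves it. Because the namespace is flat, backups/db-dump.tar.gz is simply a container name plus an object name, not a directory path.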
Swift components
The proxy server: This accepts the incoming requests via either the OpenStack Object API or raw HTTP. It accepts requests for file uploads, modifications to metadata, container creation, and so on. It may optionally rely on caching, which is usually deployed with memcached, to improve performance.
The account server: This manages the accounts defined with the object storage service. Its main purpose is to maintain a list of the containers associated with an account. A Swift account is equivalent to a tenant in OpenStack.
The container server: A container refers to a user-defined storage area within a Swift account. The container server maintains a list of the objects stored in the container. A container is conceptually similar to a folder in a traditional filesystem.
The object server: It manages the actual objects within a container. The object server defines where the actual data and its metadata are stored.
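To see how these pieces relate in practice, the following sketch creates a container and uploads an object into it using the OpenStack CLI; the container and file names are hypothetical placeholders:
# openstack container create backups
# openstack object create backups db-dump.tar.gz
# openstack object list backups
The request flows through the proxy server; the account server records the new container for the tenant's account, the container server records the object in its listing, and the object server stores the data itself.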
The Swift ring
Ø Swift rings define the way Swift handles data in the cluster.
Ø The Swift ring maps the logical layout of data from account, container, and object to a physical location in the cluster. Swift maintains one ring per storage construct; that is, separate rings are maintained for accounts, containers, and objects.
Ø The Swift proxy consults the appropriate ring to determine the location of a storage construct; for example, to locate a container, the Swift proxy consults the container ring.
Ø The rings are built using an external tool called swift-ring-builder. The ring builder tool takes an inventory of the Swift storage devices and divides them into slots called partitions.
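The following is a sketch of building an object ring with swift-ring-builder; the partition power, replica count, device, IP address, and weight are hypothetical placeholders:
# swift-ring-builder object.builder create 10 3 1
# swift-ring-builder object.builder add r1z1-192.168.100.10:6200/sdb 100
# swift-ring-builder object.builder rebalance
The create command sets 2^10 partitions, three replicas, and a minimum of one hour between partition moves; add registers a storage device in region 1, zone 1; and rebalance assigns partitions to the registered devices. The same steps are repeated for account.builder and container.builder, since each storage construct has its own ring.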
The Swift network
A typical Swift deployment segments the network as follows:
Front-facing network: Proxy servers handle communication with external clients over this network. It also carries the traffic for external API access to the cluster.
Storage network: This allows communication between the storage nodes and the proxies, as well as inter-node communication across several racks in the same region.
Replication network: This is used in multi-region clusters, where a network segment is dedicated to replication-related communication between the storage nodes.
Deploying Swift service
The OpenStack Ansible project recommends at least three storage nodes, each with five disk drives.
A filesystem must be created on each of the five attached drives. Next, make an entry in /etc/fstab to mount the drives on boot:
LABEL=sdX /srv/node/sdX xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0
Make sure that you don't forget to create the directory for the mount point:
# mkdir -p /srv/node/sdX
Finally, mount the drives with the mount /srv/node/sdX command. The mount points are
referenced in the /etc/openstack_deploy/conf.d/swift.yml file.
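Putting the drive preparation together, a minimal sketch for a single drive, assuming the device is /dev/sdb and is safe to format, looks as follows:
# mkfs.xfs -L sdb /dev/sdb
# mkdir -p /srv/node/sdb
# mount /srv/node/sdb
The -L flag sets the filesystem label so that the LABEL=sdX entry in /etc/fstab resolves to the right device; the same steps are repeated for the remaining drives.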
The configuration variables in swift.yml are provided at various levels; cluster-wide values can be adjusted at the Swift level of the file.
Once the update is done, the Swift playbook can be run as follows:
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-swift-install.yml
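Once the playbook completes, a quick sanity check is to query the object store from a node with the OpenStack client installed, assuming admin credentials are already sourced:
# openstack container list
# swift stat
Both commands go through the Swift proxy; an empty container listing and a stat summary of the account indicate that the proxy and storage services are responding.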
Cinder provides persistent storage management for the virtual machine's hard drives.
Unlike ephemeral storage, virtual machines backed by Cinder volumes can be easily live migrated and evacuated.
In prior OpenStack releases, block storage was part of the compute service in OpenStack as nova-volume.
The Cinder service is composed of the following components: cinder-api, cinder-scheduler, cinder-volume, and cinder-backup.
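As a quick illustration of this persistence, a volume created through Cinder outlives the instance it is attached to; the volume size and the instance and volume names below are hypothetical placeholders:
# openstack volume create --size 10 data-vol
# openstack server add volume demo-vm data-vol
Deleting demo-vm later does not delete data-vol; the volume and its data remain and can be attached to another instance.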
OpenStack Ansible provides playbooks to deploy the Cinder services. As with the cloud controller and common services, we adjust the /etc/openstack_deploy/openstack_user_config.yml file to point to where the Cinder API service will be deployed. Based on the design model in Designing OpenStack Cloud Architectural Consideration, the Cinder API service can be deployed on the controller nodes by adding the storage-infra_hosts stanza as follows:
...
storage-infra_hosts:
  cc-01:
    ip: 172.47.0.10
  cc-02:
    ip: 172.47.0.11
  cc-03:
    ip: 172.47.0.12
Next, we describe the hosts that will be used as the Cinder storage nodes. The storage hosts can be configured with an availability zone, backend, and volume driver configuration. This can be achieved by adding the storage_hosts stanza with additional configuration for storage hosts using an LVM and iSCSI backend, as follows:
storage_hosts:
  lvm-storage1:
    ip: 172.47.44.5
    container_vars:
      cinder_backends:
        vol-conf-grp-1:
          volume_backend_name: lvm-standard-bkend
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_group: cinder-volumes
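The LVM backend above expects a volume group named cinder-volumes to exist on the storage host. A minimal sketch of creating it, assuming a spare disk at /dev/sdb, would be:
# pvcreate /dev/sdb
# vgcreate cinder-volumes /dev/sdb
With the volume group in place, the Cinder playbooks can configure the LVM driver to carve volumes out of cinder-volumes and export them over iSCSI.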