OpenStack Storage

This document discusses the different types of storage in OpenStack, including block storage (Cinder), object storage (Swift), and image storage (Glance). It describes ephemeral storage, which is tied to a specific compute instance and lost when the instance is terminated, versus persistent storage, which remains available even if an instance is powered off. Object storage, file share storage, and block storage provide the persistent storage options in OpenStack, code-named Swift, Manila, and Cinder respectively. The architecture and deployment process of the Swift object storage service and the Cinder block storage service are outlined.


OpenStack has multiple storage realms to consider:

• Block Storage (cinder)
• Object Storage (swift)
• Image storage (glance)

The Block Storage (cinder) service manages volumes on storage devices in an environment.
The Object Storage (swift) service implements a highly available, distributed, eventually consistent object/blob
store that is accessible via HTTP/HTTPS.
The Image service (glance) can be configured to store images on a variety of storage back ends supported by
the glance_store drivers.
• OpenStack recognizes two types of storage: ephemeral and persistent.
• Ephemeral storage is storage that is associated only with a specific Compute instance.
• Once that instance is terminated, so is its ephemeral storage.
• This type of storage is useful for basic runtime requirements, such as storing the instance's operating system.
• Non-persistent storage is just another name for ephemeral storage: as the name suggests, a user who actively uses a virtual machine in the OpenStack environment will lose the associated disks once the VM is terminated.
• When a tenant boots a virtual machine on an OpenStack cluster, a copy of the glance image is downloaded to the compute node. This image is used as the first disk for the Nova instance, which provides the ephemeral storage. Anything stored on this disk will be lost once the Nova instance is terminated.
• Ephemeral disks can be created and attached either locally on the storage of the hypervisor host or hosted on external storage by means of an NFS mount (see the flavor example after this list).
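The amount of ephemeral storage an instance receives is governed by its flavor. A minimal sketch, assuming a configured openstack client; the flavor, image, and server names here are made up:

$ openstack flavor create --ram 2048 --vcpus 2 --disk 20 --ephemeral 10 m1.scratch
$ openstack server create --flavor m1.scratch --image cirros scratch-vm

The flavor above gives each instance a 20 GB root disk plus a 10 GB extra ephemeral disk; both are destroyed when the instance is terminated.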
Persistent storage

• Persistent storage means that the storage resource is always available.
• Powering off the virtual machine does not affect the data on a persistent storage disk.
• Persistent storage in OpenStack can be divided into three options: object storage, file share storage, and block storage, with the code names Swift, Manila, and Cinder, respectively.
• This storage is used for any data that needs to be reused, either by different instances or beyond the life of a specific instance.

Object storage (Swift) allows a user to store data in the form of objects by using the RESTful HTTP APIs.
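As a minimal sketch of this API in use (the container and file names are illustrative), the openstack client wraps the RESTful calls for uploading and retrieving objects:

$ openstack container create backups
$ openstack object create backups db-dump.sql
$ openstack object save backups db-dump.sql

The first command creates a container, the second uploads a local file into it as an object, and the third fetches the object back.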

How object storage differs from traditional NAS/SAN-based storage:

• The data is stored as binary large objects (blobs) with multiple replicas on the object storage servers.
• The objects are stored in a flat namespace. Unlike a traditional storage system, they do not preserve any specific structure or a particular hierarchy.
• Object storage is accessed using an API such as REST or SOAP. It cannot be accessed directly via file protocols such as NFS, SMB, or CIFS.
• Object storage is not suitable for high-performance requirements or structured data that is frequently changed, such as databases.
Benefits of Swift

Swift is designed as a distributed architecture that provides:

• Performance and scalability: Swift offers provisioning of storage on demand with a centralized management endpoint.
• Dynamic ways to increase or decrease storage resources as needed.

The Swift architecture is very distributed, which prevents any single point of failure (SPOF). It is also designed to scale horizontally.

The main components of Swift consist of the following:

• Proxy server: accepts the incoming requests via either the OpenStack Object API or just raw HTTP. It accepts requests for file uploads, modifications to metadata, container creation, and so on. The proxy may optionally rely on caching, which is usually deployed with memcached, to improve performance.
• Account server: manages the account that is defined with the object storage service. Its main purpose is to maintain a list of containers associated with the account. A Swift account is equivalent to a tenant on OpenStack (see the URL sketch after this list).
• Container server: a container refers to the user-defined storage area within a Swift account. It maintains a list of objects stored in the container. A container can be conceptually similar to a folder in a traditional filesystem.
• Object server: manages an actual object within a container. The object server defines where the actual data and its metadata are stored.
The Swift ring

• Swift rings define the way Swift handles data in the cluster.
• The Swift ring maps the logical layout of data from account, container, and object to a physical location on the cluster. Swift maintains one ring per storage construct; that is, there are separate rings maintained for account, container, and object.
• The Swift proxy consults the appropriate ring to determine the location of a storage construct; for example, to determine the location of a container, the Swift proxy looks up the container ring.
• The rings are built by using an external tool called swift-ring-builder. The ring builder tool performs an inventory of the Swift storage cluster and divides it into slots called partitions (a sample run follows this list).
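A minimal sketch of building an object ring by hand; the partition power, replica count, and device path below are illustrative, and OpenStack Ansible normally runs this tool for you:

# swift-ring-builder object.builder create 10 3 1
# swift-ring-builder object.builder add r1z1-172.47.44.5:6000/sdc 100
# swift-ring-builder object.builder rebalance

The create arguments are the partition power (2^10 partitions), the replica count, and the minimum hours between partition moves; add registers a device by region, zone, IP, port, device name, and weight, and rebalance distributes the partitions across the registered devices.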
The Swift network
The Swift traffic can be segmented into the following networks:

• Front-end network: proxy servers handle communication with the external clients over this network. It also carries the traffic for external API access to the cluster.
• Storage network: allows communication between the storage nodes and proxies, as well as inter-node communication across several racks in the same region.
• Replication network: used in multi-region clusters, where we dedicate a network segment for replication-related communication between the storage nodes.
Deploying Swift service
The OpenStack Ansible project recommends at least three storage nodes, each with five disk drives.

The first step is to create a filesystem on the drives:

# apt-get install xfsprogs
# mkfs.xfs -f -i size=1024 -L sdX /dev/sdX

The filesystem must be created on all five attached drives. Next, add an entry in /etc/fstab for each drive:

LABEL=sdX /srv/node/sdX xfs noatime,nodiratime,nobarrier,logbufs=8,noauto 0 0

Make sure that you don't forget to create the directory for the mount point:

# mkdir -p /srv/node/sdX

Finally, mount each drive with the mount /srv/node/sdX command. The mount points are referenced in the /etc/openstack_deploy/conf.d/swift.yml file. Configuration variables can be provided at various levels; the cluster-level values can be overridden with Swift-level settings in this file.
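As a rough sketch of what swift.yml can contain in an openstack-ansible deployment (the partition power, network names, and drive list below are illustrative and must match your environment):

global_overrides:
  swift:
    part_power: 8                  # ring partition count of 2^8
    storage_network: 'br-storage'  # proxy/storage-node traffic
    replication_network: 'br-repl' # replication traffic
    mount_point: /srv/node
    drives:
      - name: sdc
      - name: sdd
      - name: sde
      - name: sdf
      - name: sdg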

Once the update is done, the Swift playbook can be run as follows:
# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-swift-install.yml
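Once the playbook completes, a quick sanity check can be run from any host with admin credentials sourced, assuming the python-swiftclient package is installed:

$ swift stat

This prints the account name along with its container and object counts, confirming that the proxy and storage services respond.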
Cinder provides persistent storage management for the virtual machine's hard drives.
Unlike ephemeral storage, virtual machines backed by Cinder volumes can easily be live migrated and evacuated.
In prior OpenStack releases, block storage was part of the Compute service in OpenStack (nova-volume).
The Cinder service is composed of the following components:

• Cinder API server
• Cinder scheduler
• Cinder volume server

The Cinder API server interacts with the outside world using the REST interface. It receives requests for managing volumes.
The Cinder volume servers are the nodes that host the volumes.
The scheduler is responsible for choosing the volume server that will host a new volume requested by the end users.
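As a brief illustration of how the three components cooperate (the volume and server names are made up), creating and attaching a volume with the openstack client exercises the API server, the scheduler, and a volume server:

$ openstack volume create --size 10 data-vol
$ openstack server add volume my-instance data-vol

The first request is received by the API server and placed on a volume server by the scheduler; the second attaches the resulting volume to a running instance as a persistent disk.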
Deploying Cinder service

OpenStack Ansible provides playbooks to deploy the Cinder services. As with the Cloud Controller and Common Services, we will adjust the /etc/openstack_deploy/openstack_user_config.yml file to point to where the Cinder API service will be deployed. Based on the design model in Designing OpenStack Cloud Architectural Considerations, the Cinder API service can be deployed on the controller nodes by adding the storage-infra_hosts stanza as follows:

...
storage-infra_hosts:
  cc-01:
    ip: 172.47.0.10
  cc-02:
    ip: 172.47.0.11
  cc-03:
    ip: 172.47.0.12
Next, we describe the hosts that will be used as the Cinder storage nodes. The storage hosts can be configured with the availability zone, backend, and volume driver configuration. This can be achieved by adding the storage_hosts stanza with additional configuration for storage hosts with an LVM and iSCSI backend as follows:

storage_hosts:
  lvm-storage1:
    ip: 172.47.44.5
    container_vars:
      cinder_backends:
        vol-conf-grp-1:
          volume_backend_name: lvm-standard-bkend
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_group: cinder-volumes

To run the Cinder playbook, use the openstack-ansible command as follows:

# cd /opt/openstack-ansible/playbooks
# openstack-ansible os-cinder-install.yml
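After the playbook finishes, the registered Cinder services can be checked from a host with admin credentials sourced:

$ openstack volume service list

The output should show the cinder-scheduler and cinder-volume entries with an up state.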
THANK YOU
