
Hitachi Storage Integrations with UCP and Red Hat OpenShift

Reference Architecture Guide

MK-SL-210-02
July 2022
© 2022 Hitachi Vantara LLC. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and recording,
or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or Hitachi Vantara LLC
(collectively “Hitachi”). Licensee may make copies of the Materials provided that any such copy is: (i) created as an essential step in utilization of the
Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not make any other copies of the Materials.
“Materials” mean text, data, photographs, graphics, audio, video and documents.
Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials contain
the most current information available at the time of publication.
Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for information about
feature and product availability, or contact Hitachi Vantara LLC at https://support.hitachivantara.com/en_us/contact-us.html.
Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of Hitachi
products is governed by the terms of your agreements with Hitachi Vantara LLC.
By using this software, you agree that you are responsible for:
1. Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other individuals; and
2. Verifying that your data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.
Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including the
U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader agrees to
comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or import the
Document and any Compliant Products.
Hitachi and Lumada are trademarks or registered trademarks of Hitachi, Ltd., in the United States and other countries.
AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, GDPS, HyperSwap, IBM, Lotus, MVS, OS/
390, PowerHA, PowerPC, RS/6000, S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z14, z/VM, and z/VSE are registered trademarks or
trademarks of International Business Machines Corporation.
Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, Microsoft Edge, the Microsoft corporate logo,
the Microsoft Edge logo, MS-DOS, Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio,
Windows, the Windows logo, Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered
trademarks or trademarks of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.
All other trademarks, service marks, and company names in this document or website are properties of their respective owners.
Copyright and license information for third-party and open source software used in Hitachi Vantara products can be found in the product
documentation, at https://www.hitachivantara.com/en-us/company/legal.html or https://knowledge.hitachivantara.com/Documents/Open_Source_Software.

Feedback
Hitachi Vantara welcomes your feedback. Please share your thoughts by sending an email message to [email protected]. To assist the
routing of this message, use the paper number in the subject and the title of this white paper in the text.

Revision history

Changes: Updated to support OCP 4.8.
Date: July 6, 2022



Reference Architecture Guide
This paper demonstrates best-practice operations for a reference configuration of a Red Hat
OpenShift Container Platform (OCP) environment deployed on Hitachi Unified Compute
Platform (UCP). It leverages the latest storage integrations and data services to create,
protect, and manage on-premises Kubernetes clusters and workloads, including application
services that require persistent storage. These practices apply to deployments of UCP in a
hybrid converged/hyperconverged configuration, or of Hitachi Virtual Storage Platform (VSP)
with OpenShift, whether the OpenShift configuration is bare metal, hypervisor-based, or
hybrid. The paper covers a private registry for air-gapped environments, data protection of
Kubernetes resources, and persistent storage to cloud targets. It also covers the persistent
storage options available in hybrid bare metal and hypervisor-based deployments using
well-known, supported storage integrations between Hitachi Vantara, Red Hat OpenShift,
and VMware.

A key element in the successful deployment of a container platform is having a robust and
flexible infrastructure that can meet a wide variety of requirements in a highly dynamic
environment. Hitachi infrastructure with Red Hat OpenShift provides highly available and
high-performance infrastructure for container applications. Some specific challenges of
providing an infrastructure for a container platform are:
■ Data persistence

Data is at the core of any application. Many applications, such as MariaDB, PostgreSQL,
MongoDB, and MySQL, require data persistence. Continuous integration and continuous
delivery (CI/CD) pipelines also require data persistence at every level.

Using well-known and proven CSI (Container Storage Interface) storage integrations, you
can provide persistent storage for stateful container applications. The Hitachi UCP
solution includes a Hitachi Storage CSI driver supported with Hitachi Storage Plug-in for
Containers (HSPC) and a VMware Container Native Storage (CNS) implementation
supported with Hitachi Storage Provider for VMware vCenter (VASA) software. For
example, using vSphere storage policies in combination with VASA, you can provide
dynamic ReadWriteOnce (RWO) VMFS and vVols-based VMDK persistent volumes to
container applications running within OpenShift on top of VMware clusters.

This combination of integrations can meet the majority of persistent storage configurations
and data services that are needed. Hitachi UCP with Red Hat OpenShift provides the
infrastructure and integrations needed for your organization to successfully provide
container services to your application teams.
■ Backup, data protection, and replication

Backup is a critical aspect of any data center infrastructure. Red Hat OpenShift
Application Data Protection (OADP) includes a built-in Velero operator to enable your
organization to protect any container-related entity, including Kubernetes persistent
volumes.

The integration of Hitachi Storage Plug-in for Containers (HSPC) with OpenShift brings
other benefits such as snapshot and cloning and restore operations for persistent
volumes, enabling rapid copy creation for immediate use in decision support, software
development, and data protection operations.

Hitachi Content Platform for cloud scale (HCP for cloud scale) provides standard AWS-
compliant S3 storage that can be used with Red Hat OADP/Velero implementations as
target repository storage.

Hitachi Replication Plug-in for Containers (HRPC) supports any Kubernetes cluster
configured with Hitachi Storage Plug-in for Containers and provides data protection,
disaster recovery, and migration of persistent volumes to remote Kubernetes clusters.
HRPC supports replication for both bare metal and virtual environments.
■ Computing platform

With a wide range of applications that are stateful or stateless, a wide range of flexible
computing platforms are necessary to match both memory and CPU requirements.

The type of computing technology is also a consideration for licensing costs. Hitachi
Vantara provides different computing options from the 1U/2U dual socket Hitachi
Advanced Server DS120/220 G1/G2 to the 2U quad socket Hitachi Advanced Server
DS240 G1.

■ Network connectivity

As with any infrastructure, a reliable network is needed to provide enough bandwidth and
security for container architectures. Hitachi Unified Compute Platform uses a spine and
leaf design using Cisco Nexus or Arista switches.
■ Infrastructure management

Having a robust and flexible infrastructure without efficient lifecycle management decreases
efficiency exponentially as the infrastructure scales.

Orchestration and automation are the key to operational efficiencies. Hitachi Unified
Compute Platform (UCP) Advisor provides a single pane of glass management and
lifecycle manager for converged infrastructure, with automation for compute, network, and
storage infrastructure. Hitachi Ops Center is also available with Hitachi Virtual Storage
Platform (VSP) for storage management.

From a monitoring perspective, Hitachi Storage Plug-in for Prometheus (HSPP) enables
Kubernetes administrators to monitor metrics for both Kubernetes resources and Hitachi
storage resources with a single tool.
This reference architecture also provides the reference design for a build-your-own Red Hat
OpenShift Container Platform environment using Hitachi Virtual Storage Platform. Although a
specific converged system is used as an example, this reference design still applies to
building your own container platform.
The intended audience of this document is IT administrators, system architects, consultants,
and sales engineers to assist in planning, designing, and implementing Unified Compute
Platform CI with OpenShift Container Platform solutions.


Solution overview
Red Hat OpenShift is a successful container orchestration platform and is one of the
container orchestration solutions available with Unified Compute Platform. The following
figure shows a high-level diagram of OpenShift managing containers and persistent volumes
on the Unified Compute Platform stack with Hitachi Virtual Storage Platform series systems.

You can deploy OpenShift on bare metal hosts, on virtual hosts, or on both. In some cases
the master nodes are virtualized while the worker nodes are a mix of bare metal and virtual
nodes. Different deployment models can be used depending on the purpose of the deployment.
For this reference architecture, the OpenShift clusters use a hybrid deployment, combining
bare metal and virtual worker nodes to show the benefits of both types of deployments.


These are the storage options and capabilities for bare metal worker nodes:
■ Any storage system from the Hitachi Virtual Storage Platform family can be used. Virtual
Storage Platform provides a REST API for Hitachi Storage Plug-in for Containers to
provision persistent volumes. Deploy Storage Plug-in for Containers within the respective
OpenShift Container Platform cluster. Containers access the persistent volumes through a
local mount point inside the worker node. The persistent volumes are backed by Virtual
Storage Platform-hosted LUNs presented to the worker nodes over a block protocol.
■ Hitachi Storage Plug-in for Containers dynamically provisions persistent volumes for
stateful containers from Hitachi storage.
These are the storage options and capabilities for virtual worker nodes:
■ Any storage system from the Hitachi Virtual Storage Platform family can be used, as well
as Hitachi Unified Compute Platform. Hitachi Storage Provider for VMware vCenter
provides Virtual Storage Platform capabilities awareness to VMware vCenter, where it can
be used with VMware Storage Policy-Based Management.
■ VMware Cloud Native Storage (CNS) provides persistent storage provisioning capabilities
using the VMware storage stack. Containers can access the persistent volumes through a
local mount point inside the worker node virtual machines. The persistent volumes are
provided by VMDKs provisioned from VMFS or vVols datastores from Virtual Storage
Platform. You can also provide persistent volumes through Unified Compute Platform HC
based on VMware Virtual SAN Ready Nodes (vSAN Ready Nodes).
The following persistent volume options are available for this configuration:
■ Use VMware vVols to provision persistent volumes directly from Hitachi storage.
■ Create persistent volumes from regular VMFS datastores.
■ Create persistent volumes from VMware vSAN datastores hosted by Hitachi Unified
Compute Platform HC nodes.
The solution validation of this reference architecture consists of different use cases for data
protection of stateful applications, replication data services for container volumes across data
centers, and monitoring and private registry, all running within OpenShift clusters on top of
Hitachi UCP.
Follow the steps in Solution design and Solution Implementation and Validation to learn about
the storage capabilities, data protection, and monitoring features when using Hitachi UCP
with Red Hat OpenShift Container Platform.

Solution components
The following tables list the versions of hardware and software tested in this reference
architecture.

Hardware components
The tested solution used specific features based on the following hardware. You can use
either Hitachi Advanced Server DS120/DS220/DS225/DS240 Gen1 or Gen2 or any qualified
server platform for UCP.


See the UCP CI Interoperability Matrix and UCP Product Compatibility Guide for more
information.
Table 1 Hardware Components

Hitachi Advanced Server DS120 (for VMware compute cluster), quantity 4, BMC 4.68.06, BIOS 3B19.H00:
■ 2 × Intel Xeon 6240 18-core 2.60 GHz processors
■ 16 × 16 GB DIMMs, 256 GB memory
■ 32 GB SATADOM (boot)
■ Emulex LPe3200 32 Gbps dual port PCIe HBA
■ 2 × Mellanox CX4 dual port 10/25G NICs
■ vSAN Cache Tier: 2 × Intel Optane SSD DC P4800X (375 GB, U.2) NVMe
■ vSAN Capacity Tier: 10 × Intel SSD DC P4510 (4 TB, U.2) NVMe

Hitachi Advanced Server DS120 (bare metal compute nodes), quantity 2, BMC 4.68.06, BIOS 3B19.H00:
■ 2 × Intel Xeon 4210 10-core 2.20 GHz processors
■ 16 × 16 GB DIMMs, 256 GB memory
■ 1 × NVMe drive (boot)
■ Emulex LPe3200 32 Gbps dual port PCIe HBA
■ 2 × Mellanox CX4 dual port 10/25G NICs

Hitachi Virtual Storage Platform 5600 (used for replication on the primary site), quantity 1, firmware 90-08-22-00/01:
■ 2 TB cache
■ 16 × 3.8 TB NVMe drives
■ 4 × 32 Gbps Fibre Channel ports

Hitachi Virtual Storage Platform 5500 (used for the replication use case on the secondary site), quantity 1, firmware 90-08-22-00/01:
■ 2 TB cache
■ 8 × 1.9 TB NVMe drives
■ 4 × 32 Gbps Fibre Channel ports

Hitachi Virtual Storage Platform G600, quantity 1, firmware 83-05-44-40/00:
■ 86 GB cache
■ 8 × 1.2 TB SAS 10K drives
■ 4 × 8 Gbps Fibre Channel ports

Cisco Nexus 9332C switch (spine), quantity 2, NXOS 9.3.5:
■ 32-port 40/100 GbE
■ 2-port 1/10 GbE

Cisco Nexus 93180YC-FX switch (leaf), quantity 2, NXOS 9.3.5:
■ 48-port 10/25 GbE
■ 6-port 40/100 GbE

Cisco Nexus 92348 switch, quantity 1, NXOS 9.3.5:
■ 48-port 1 GbE
■ 4-port 1/10/25 GbE
■ 2-port 40/100 GbE

Brocade G620 switch, quantity 2, firmware 9.0.0b:
■ 48-port 16/32 Gbps Fibre Channel


Software components
The following table lists the key software components.
Table 2 Software Components

■ Hitachi Storage Virtualization Operating System RF: 90-05-02-00/01, 83-05-33-40/00
■ Hitachi UCP Advisor: 3.10
■ Hitachi Storage Provider for VMware vCenter (VASA): 3.6.2
■ Red Hat OpenShift Container Platform (OCP): 4.9
■ OpenShift API for Data Protection (OADP) operator: 1.0.2
■ Red Hat Quay: 3.6.4
■ Hitachi Storage Plug-in for Containers: 3.9
■ Hitachi Replication Plug-in for Containers: 1.1
■ Hitachi Storage Plug-in for Prometheus: 1.1
■ Hitachi HCP for Cloud Scale: 2.4.1.2
■ VMware vSphere: 7.0 Update 2 or newer

Solution design
This section outlines the detailed solution example for the Hitachi Unified Compute Platform
and Red Hat OpenShift.


UCP infrastructure components


The following figure shows a high availability configuration of Hitachi Unified Compute
Platform used to validate the Red Hat OpenShift solution. It includes the following
components:
■ Two Cisco 9332C or Arista 7050CX3 spine Ethernet switches.
■ Two Cisco 93180YC-FX or Arista 7050SX3 leaf Ethernet switches.
■ One Cisco 92348 or Arista 7010T management switch.
■ Four Hitachi DS120 servers for vSAN cluster.
● For vSAN compute nodes, leverage supported internal drives. These compute nodes
are vSAN Ready Node Certified as UCP HC V120-series/V120F/V121F/V123F/V124N,
UCP HC V220-series/V220F, UCP HC V225G, or UCP HC DS240, respectively. See
Hitachi Unified Compute Platform HC Series.
● For vVols or VMFS compute nodes, leverage the HBA PCIe card, which is optionally
configured together with the UCP HC vSAN ready nodes, or when configuring UCP
Fibre Channel-only nodes in UCP RS.
■ Two or more Hitachi DS120, DS220, DS225, or DS240 G1/G2 configured as bare metal
worker nodes for separate OCP clusters.
■ Two Hitachi VSP storage systems.


The following diagram represents a standard architecture for Hitachi Unified Compute
Platform (UCP) CI/UCP HC/UCP RS.

The configuration with Hitachi Virtual Storage Platform is described in Unified Compute
Platform CI/HC/RS in the Hitachi Unified Compute Platform CI for VMware vSphere Reference
Architecture Guide.

VMware vVols and storage policy-based management (SPBM)


Storage Provider for VMware vCenter (VASA) enables organizations to deploy Hitachi
Storage infrastructure with VMware vSphere Virtual Volumes (vVols) to bring customers on a
reliable enterprise journey to a software-defined, policy-controlled datacenter.


Hitachi storage policy-based management allows automated provisioning of virtual machines
(VMs) and quicker adjustment to business changes. Virtual infrastructure (VI) administrators
can make changes to policies to reflect changes in their business environment, dynamically
matching storage-policy requirements for VMs to available storage pools and services. The
vVols solution reduces the operational burden between VI administrators and storage
administrators with an efficient collaboration framework, leading to faster and better VM and
application services provisioning.
To use VMware vVols with Hitachi storage, install VASA. See VMware vSphere Virtual
Volumes (vVols) with Hitachi Virtual Storage Platform Quick Start and Reference Guide for
details.
See Storage Provider for VMware vCenter (VASA) to deploy this environment.

Hitachi UCP Advisor


Hitachi UCP Advisor is used for SAN storage provisioning and SAN storage management,
when vVols are not used.
Deploy Hitachi UCP Advisor into the management cluster within your UCP infrastructure. See
Simplify Operations With Hitachi Unified Compute Platform Advisor for more information.

Hitachi Storage Plug-in for Containers (HSPC)


Hitachi Storage Plug-in for Containers (HSPC) is a software component that contains
libraries, settings, and commands that you can use to create a container to run stateful
applications. It enables stateful applications to persist and maintain data after the lifecycle of
the container has ended. Storage Plug-in for Containers provides persistent volumes from
Hitachi Dynamic Provisioning (HDP) or Hitachi Thin Image (HTI) pools to bare metal or hybrid
deployments using Fibre Channel or iSCSI protocols. iSCSI protocol is supported for both
bare metal and virtual environments.
Storage Plug-in for Containers integrates Kubernetes or OpenShift with Hitachi storage
systems using Container Storage Interface (CSI).
The following diagram illustrates a container environment where Storage Plug-in for
Containers is deployed.


Understanding Red Hat OpenShift container platform and data protection
Red Hat OpenShift Container Platform (OCP) provides a single platform to build, deploy, and
manage applications consistently across on-premises and hybrid cloud deployments. OCP
provides the control plane and data plane within the same interface. OCP provides
administrator views to deploy operators, monitor container resources, manage container
health, manage users, work with operators, manage pods and deployment configurations, as
well as define storage resources.
OCP also provides a developer view that allows users to deploy application resources from
various pre-defined resources such as YAML files, Dockerfiles, catalogs, or Git within
user-defined namespaces. With OCP, kubectl, the native Kubernetes binary, is complemented
by the oc command, which provides further support for OCP resources such as deployment
and build configurations, routes, image streams, and tags. OCP provides both a GUI and a
CLI.

Red Hat OCP components


Within an OCP cluster there are multiple node types and roles that provide functionality to the
container management platform. This section provides an overview of each node type and
specific node roles within the cluster. Not all components are covered, and you are
encouraged to see the Red Hat documentation for more information.

Master Nodes
Master nodes maintain the OCP cluster configuration as well as manage nodes within the
cluster and schedule pods to run on worker nodes. Master nodes consist of an API server,
controller manager server, certificate, and scheduler. If there is a master node outage,
container applications will not be impacted and end users can continue using resources, but
administrators of the cluster will not be able to make any changes to the cluster.

Worker Nodes
Worker nodes provide a runtime environment for containers and pods and are managed by
the master nodes within the cluster. Worker nodes can either be virtual or physical based on
the deployment type.

API Server
The API server, or kube-apiserver, provides the front end for the Kubernetes control plane by
managing the interactions of cluster components through RESTful API calls. Administrators
can run several instances of kube-apiserver to balance traffic across the cluster.

Scheduler
The scheduler, or kube-scheduler, ensures that container applications are scheduled to run
on worker nodes within the OCP cluster. The scheduler reads data from the pod and finds a
node that is a good fit based on configured policies.


Controller manager
The controller manager, or kube-controller-manager, runs the control loops that move the
cluster toward its desired state and keep it healthy. The controller manager provides this
functionality through the kube-apiserver.

Namespace (or Project)


Namespaces are intended for use in environments with many users spread across multiple
teams, or projects. Namespaces provide a scope for names. Resource names need to be
unique within a namespace, but not across namespaces. Namespaces cannot be nested
inside one another and each Kubernetes resource can only be in one namespace.

Deployment methods and types


There are multiple deployment methods for Red Hat OCP based on the hardware
configuration that supports the environment. These deployment methods include manual
deployments using the User Provisioned Infrastructure (UPI) for customized deployments or
fully automated deployments using the Installer Provisioned Infrastructure (IPI).
Within this guide, a hybrid environment is deployed using the UPI method. For more
information about deployment methods and types, see the Red Hat OCP documentation.

vSphere Cloud Native Storage (CNS) concepts


Cloud Native Storage (CNS) integrates vSphere and Kubernetes and offers capabilities to
create and manage container volumes deployed in a vSphere environment. CNS consists of
two components: a CNS component in vCenter Server, and a volume driver in Kubernetes
called the vSphere Container Storage Plug-in (also known as the vSphere CSI driver).
■ CNS enables vSphere and vSphere storage (VMFS, vVols, and NFS), including vSAN, as
a platform to run stateful applications. CNS enables access to this data path for
Kubernetes and brings an understanding of Kubernetes volume and pod abstractions to
vSphere. CNS uses several components to work with vSphere storage; this includes
VMFS or vVols provided by the Hitachi Storage Provider for VMware vCenter. After you
create PVs, you can review them and their backing virtual disks in the vSphere Client, and
monitor their storage policy compliance.
■ The vSphere Container Storage Plug-in has different components that provide an
interface used by the Container Orchestrators such as OpenShift to manage the lifecycle
of vSphere volumes. It also allows you to create, expand and delete volumes, attach and
detach volumes to the cluster worker node VMs, and use bind mounts for the volumes
inside the pods.
Because the vSphere CSI driver is installed in the OCP cluster, provisioning operations are
similar to provisioning any Kubernetes cluster. A Persistent Volume Claim (PVC) is created
that references an available StorageClass, which maps to a vSphere SPBM policy. A first
class disk (FCD) is created within vSphere, and a resultant PV is presented to the OpenShift
layer from the CSI driver. The FCD is then mounted to the pod when requested for use as a
PV.
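As an illustration of this flow, the following minimal sketch shows a StorageClass mapped to a vSphere SPBM policy and a PVC that consumes it. The StorageClass name, policy name, namespace, and size are hypothetical examples and must match a storage policy defined in your vCenter.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-vvol-gold                  # hypothetical StorageClass name
provisioner: csi.vsphere.vmware.com        # vSphere CSI driver
parameters:
  storagepolicyname: "vVol Gold Policy"    # hypothetical vSphere SPBM policy
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc-vvol                      # hypothetical PVC name
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: vsphere-vvol-gold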


The following figure illustrates how the CNS components, CNS in vCenter Server and the
vSphere Container Storage Plug-in, interact with other components in a vSphere environment
(credit to VMware).

Deployment of OpenShift worker nodes on VMware vSphere requires installation of the
vSphere CSI driver to take advantage of the vSphere SPBM to Kubernetes StorageClass
mapping.

Data protection options


Red Hat OpenShift API for Data Protection, or OADP, brings an API to the OpenShift
Container Platform that Red Hat partners and customers can leverage when creating a
disaster recovery and data protection solution.
With OADP, you can back up and restore applications and services in the following two ways:


1. If Hitachi Storage Plug-in for Containers (HSPC) is installed (typically for bare metal
workers with Fibre Channel or iSCSI, or virtual workers with iSCSI), the backup process
can use CSI snapshots. Native snapshots are typically much faster than file copying
because they leverage copy-on-write capabilities of Hitachi VSP storage instead of doing
a full copy.
When using CSI snapshots, only the Kubernetes metadata is backed up by Velero to the
S3-compatible storage, while the HSPC leverages VSP capabilities to back up volume
data using snapshot technologies.
This backup process is explored in Solution Implementation and Validation (on
page 37), Scenario 1.

2. Another option is using Restic backups to an S3-compatible object storage such as
Hitachi HCP CS S3.
Restic is a file-system-level copy mechanism used by OADP. Backups with Restic differ
from the backups produced with CSI snapshots because both volume data and
Kubernetes metadata are backed up by Velero and Restic to the S3 storage.
Restic is storage-agnostic and works with any type of storage. This has been fully tested
with an OpenShift cluster as described in Solution Implementation and Validation (on
page 37), Scenario 2.

OADP alone is not a full end-to-end data protection solution, but the integration with Hitachi
Storage Plug-in for Containers and Hitachi HCP CS S3 storage provide a powerful solution
for data protection for container-based applications and their associated PVs and data. The
OADP operator sets up and installs Velero on the OpenShift cluster.
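To illustrate how a backup is requested once OADP is configured, the following minimal Backup custom resource sketch protects a single application namespace. The backup name and target namespace are hypothetical, and whether CSI snapshots or Restic are used depends on the configuration described in the following sections.

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: mysql-backup-example      # hypothetical backup name
  namespace: openshift-adp
spec:
  includedNamespaces:
    - test                        # application namespace to protect
  ttl: 720h0m0s                   # backup retention period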

Note: The Red Hat OADP operator version 1.0 was released in Feb 2022 and is
the first generally available release that is fully supported. Earlier versions
(pre-1.0) of the operator were available as a community operator with only
community support.

In addition, Hitachi Replication Plug-in for Containers (HRPC) together with Hitachi VSP
storage replication capabilities can be used for data protection, disaster recovery, and
migration of persistent volumes to remote datacenters/Kubernetes clusters.

Volume snapshots
In OpenShift or Kubernetes, creating a PersistentVolumeClaim (PVC) initiates the creation of
a PersistentVolume (PV), which contains the data. A PVC also specifies a StorageClass,
which provides additional attributes for the backend storage.


Because this guide also covers backup with CSI snapshots, it is important to clarify some
additional concepts related to snapshots. A VolumeSnapshot represents a snapshot of a
volume on the storage system. In the same way that the PersistentVolume and
PersistentVolumeClaim API resources are used to provision volumes for users and
administrators, the VolumeSnapshot and VolumeSnapshotContent API resources are provided
to create volume snapshots. VolumeSnapshot support is only available for CSI drivers.
■ VolumeSnapshotContent – Represents a snapshot taken of a volume in the cluster.
Similar to a PersistentVolume object, the VolumeSnapshotContent is a cluster resource that
points to a real snapshot in the backend storage. VolumeSnapshotContent objects are not
namespaced.
■ VolumeSnapshot – A request for a snapshot of a volume, similar to a
PersistentVolumeClaim. Creating a VolumeSnapshot triggers a snapshot
(VolumeSnapshotContent), and the two objects are bound together; there is a one-to-one
binding between VolumeSnapshot and VolumeSnapshotContent. VolumeSnapshot objects
are namespaced.
■ VolumeSnapshotClass – Allows you to define different attributes belonging to a
VolumeSnapshot. This is similar to how a StorageClass is used for PVs.
This is covered in the section Backing up persistent volumes with CSI snapshot in this guide
as a requirement for CSI snapshots.
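For reference, a minimal VolumeSnapshot sketch is shown below. The snapshot and PVC names are hypothetical, and the VolumeSnapshotClass must reference the Hitachi HSPC CSI driver as described later in this guide.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: demo-volume-snapshot               # hypothetical snapshot name
  namespace: test
spec:
  volumeSnapshotClassName: volume-snapshot-class-csi-hspc
  source:
    persistentVolumeClaimName: demo-pvc    # hypothetical existing PVC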

Deploy OpenShift Container Platform


This guide does not cover step-by-step implementation of OCP; follow the Red Hat OCP
documentation to set up the clusters.
Two OCP clusters have been configured to support the different use cases described in this
guide, including replication of containerized applications between two Hitachi VSP storage
systems. Each cluster contains virtual and bare metal worker nodes and uses the following
configuration:
■ The virtual worker nodes had access to vSAN storage, as well as vVols and VMFS
datastores supported by Hitachi VSP storage system and Hitachi Storage Provider for
VMware vCenter (VASAProvider).
● Details about the StorageClasses are provided in each of the use cases part of the
solution implementation and validation.
■ The bare metal worker node in cluster 1 (jpc2) was configured with Fibre Channel
connectivity to the VSP 5500 storage system.
■ The bare metal worker node in cluster 2 (jpc3) was configured with Fibre Channel
connectivity to the VSP 5600 storage system.
The following figure illustrates a hybrid OpenShift Container Platform architecture on top of
Hitachi UCP Platform stack for both clusters.


The tables below provide more details about the master and worker nodes for the two OCP
clusters:
OCP Cluster (Primary site)

OCP Cluster (Secondary site)

For detailed hybrid OpenShift Container Platform installation procedures, see Red Hat
documentation.


Deploy and enable Hitachi Storage (VASA) Provider for VMware vCenter
To deploy the Hitachi Storage (VASA) Provider for VMware vCenter, follow these steps:

Procedure
1. Use UCP Advisor to create the necessary zone sets in the Fibre Channel fabrics.
2. Use Hitachi Storage Navigator to provision the necessary ALU targets from each
storage system.
3. Register the Hitachi VSP storage systems in the VASA Provider.
4. Register VASA Provider as a Storage Provider within the vCenter associated with the
cluster hosting the virtual worker nodes.
5. Create a vVol datastore or VMFS (LDEV) datastores and associated SPBM policies for
the Hitachi VSP storage systems configured for use by the target OCP clusters.
For details, see VMware vSphere Virtual Volumes (vVols) with Hitachi Virtual Storage
Platform Quick Start and Reference Guide.

Install and Configure Hitachi Storage Plug-in for Containers (HSPC)


Hitachi Storage Plug-in for Containers is easily deployed to OpenShift using the Operator,
which can be installed from OperatorHub. Follow the steps to:
■ Install Hitachi Storage Plug-in for Containers.
■ Configure Secret settings to access Hitachi VSP Storage system.
■ Configure StorageClass settings.
■ Configure Multipathing (FC or iSCSI).
Specific steps for configuring Persistent Volume Claims (PVCs) and pods are covered as part
of each use case in the Solution Implementation and Validation section.

Note: If there is a previous version of Storage Plug-in for Containers, remove it before
performing the installation procedure.

Install Hitachi Storage Plug-in for Containers

Procedure
1. Log in to the console of your OCP cluster, select OperatorHub under Operators, and then
search for Hitachi.


2. Click the Hitachi Storage Plug-in for Containers, and then click Install.

Note: Select the following settings in Operator Subscription:

■ Installation Mode: A specific namespace on the cluster and <any namespace>

■ Approval Strategy: Manual, and approve the Install Plan (see
https://docs.openshift.com/).

3. Confirm the status of the Operator is Succeeded either using the console or oc get
pods -n <namespace> command. On the console, click Installed Operators under
Operators and you can see the status of the HSPC plug-in.

4. The next step is to create the HSPC instance; this can be done from the Operator
Details page. Select the Hitachi Storage Plug-in for Containers, click Create Instance
on the Operator Details page, and then click Create.
5. Confirm the status READY is true for the HSPC instance with the following command:

[ocpusr@dminws-c2]$ oc get hspc -n <namespace>

NAME   READY   AGE
hspc   true    30s


Finally, verify that all the HSPC pods are in running state using the following command:

oc get pods -n <namespace>

The Hitachi Storage Plug-in for Containers (HSPC) has been successfully installed. If
you want to make an advanced configuration, refer to Configuration of Storage Plug-in
for Containers.

Configure Secret
The secret contains storage system information that enables Storage Plug-in for Containers
to access the storage. It contains the storage URL (VSP REST API endpoint), user, and
password settings. Here is an example of the YAML manifest file:

apiVersion: v1
kind: Secret
metadata:
  name: secret-vsp-113
  namespace: test
type: Opaque
data:
  url: aHR0cHM6Ly8xNzIuMjUuNDcuMTEz
  user: b2NwdXNyMQ==
  password: SGl0YWNoaTEh

The URL, user, and password values are base64 encoded. Here is an example of how to
base64 encode a user called "ocpusr1"; do the same for the URL and password:

[ocpusr@dminws-c2]$ echo -n "ocpusr1" | base64
b2NwdXNyMQ==
[ocpusr@dminws-c2]$

One way to create a secret is using the following command:

oc create -f <secret-manifest-file>

Another way to create a secret is using the OpenShift console:

Procedure
1. Login to the OCP console, select Workloads > Secrets.
2. Confirm the namespace in which you are creating the secret, in this example “test”.


3. Click Create and select From YAML. Then either copy and paste the content of the secret
manifest file, or set the base64-encoded URL, user, and password, and assign a name
and the corresponding namespace.

4. Click Create.

Configure StorageClass
The StorageClass contains storage settings that are necessary for Storage Plug-in for
Containers to work with your environment. The following YAML manifest file provides
information about the required parameters:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: sc-vsp-113
  annotations:
    kubernetes.io/description: Hitachi Storage Plug-in for Containers
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
parameters:
  serialNumber: '40016'
  poolID: '1'
  portID: 'CL1-D,CL4-D'
  connectionType: fc
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-namespace: test
  csi.storage.k8s.io/provisioner-secret-name: secret-vsp-113
  csi.storage.k8s.io/node-stage-secret-name: secret-vsp-113
  csi.storage.k8s.io/controller-expand-secret-name: secret-vsp-113


Here are additional details about some of these parameters:


■ serialNumber: VSP serial number
■ provisioner: For HSPC, the default is hspc.csi.hitachi.com
■ poolID: Pool ID on the VSP used to carve dynamically persistent volumes
■ portID: VSP storage ports, use a comma separator for multipath
■ connectionType: The connection type between storage and nodes, fc and iscsi are
supported. If blank, fc is set
■ fstype: Set the filesystem type as ext4
■ secret-name: Define the VSP secret name
■ secret-namespace: Enter the same namespace used for the secret
One way to create a StorageClass is using the following command:

oc create -f <storage-class-manifest-file>

Another way to create a StorageClass is using the OpenShift console:

Procedure
1. Log in to the OCP console, and then select Storage > StorageClasses.
2. Click Create StorageClass. Then, click Edit YAML. Then, either copy/paste the content
of the StorageClass manifest file or enter each of the corresponding settings.

3. Click Create.
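Although PVC creation is covered with each use case later in this guide, the following minimal sketch shows how a PVC would reference the sc-vsp-113 StorageClass; the PVC name and size are hypothetical.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-vsp-113-example      # hypothetical PVC name
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: sc-vsp-113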


Configure Multipathing
For worker nodes connected to Hitachi VSP storage over FC or iSCSI, it is recommended to
enable multipathing. The requirement is to create /etc/multipath.conf, ensure that the
user_friendly_names option is set to yes, and enable the multipathd.service.
This can be done by applying a MachineConfig through the MCO (Machine Config Operator)
to the OCP cluster after it has been deployed. Note that applying a MachineConfig restarts
the worker nodes one at a time.
Consider the following before applying the multipath configuration:
■ For Fibre Channel, ensure that FC switches are configured with proper zoning for the
compute worker nodes and Hitachi VSP storage systems are accessible to each other.
■ For iSCSI, ensure the Hitachi VSP storage is properly configured for iSCSI and the
compute worker nodes can access the iSCSI targets. Also, for iSCSI, check the Hitachi
Storage Plug-in for Containers Release Notes for additional considerations regarding IQN
configurations.
■ Red Hat CoreOS (RHCOS) already includes the device-mapper-multipath package,
which is required to support multipathing. For solutions with iSCSI, RHCOS already has
the iSCSI initiator tools installed by default. There is no need to install any additional
packages; apply the configurations as indicated in this section.
Configure multipathing for OCP worker nodes using MachineConfig:

To enable multipath for Hitachi HSPC, apply the MachineConfig below to the cluster. This will
enable multipathd (for FC and iSCSI) needed by the Hitachi VSP Storage and HSPC
integration on each worker node. It targets the worker nodes by using the label
machineconfiguration.openshift.io/role: worker.

The following YAML file can be used for both Fibre Channel and iSCSI configurations with
multipathing. To support iSCSI, uncomment the last three lines in the file.

A MachineConfig can be created directly from the command line with
oc create -f <MachineConfigFile.yaml>, or by using the OpenShift console.

Procedure using the OCP console

Procedure
1. To apply a MachineConfig using the OCP console, log in to the OCP web console and
navigate to Compute > Machine Configs. Click Create Machine Config, then copy
and paste the content of the YAML file and click Create.

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: workers-enable-multipath-conf
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/multipath.conf
          mode: 400
          filesystem: root
          contents:
            source: data:text/plain;charset=utf-8;base64,ZGVmYXVsdHMgewogICAgICAgIHVzZXJfZnJpZW5kbHlfbmFtZXMgeWVzCiAgICAgICAgZmluZF9tdWx0aXBhdGhzIHllcwp9CgpibGFja2xpc3Qgewp9Cg==
            verification: {}
    systemd:
      units:
        - name: multipathd.service
          enabled: true
          state: started
        # Uncomment the following 3 lines if this MachineConfig will be used with iSCSI
        #- name: iscsid.service
        #  enabled: true
        #  state: started
  osImageURL: ""

2. For iSCSI without multipathing, use the following MachineConfig:

apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: workers-enable-iscsi
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: iscsid.service
          enabled: true
          state: started
  osImageURL: ""

Note: The source data string located after source: data:text/plain;charset=utf-8;base64,
for multipath.conf is base64 encoded. If you need to update the multipath.conf file to suit
your environment needs, run echo -n "<string>" | base64 -d to decode the contents of the
config file, make your changes, and then re-encode the file using base64. The decoded
contents are shown after this procedure.
3. After the MachineConfig is created, each worker node is rebooted one at a time as the
configuration is applied; it can take from 20 to 30 minutes to apply the configuration to all
worker nodes. To verify that the machine config has been applied, use the oc get mcp
command and check that the machine config pool for workers is updated. In addition,
ssh to the worker nodes to confirm that the /etc/multipath.conf file has been created and
the multipathd service is running; for iSCSI, also verify that the iscsid service is running.
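For reference, decoding the base64 string embedded in the MachineConfig above yields the following multipath.conf contents; if you change them, re-encode the file and update the source data string.

[ocpusr@dminws-c2]$ echo -n "ZGVmYXVsdHMgewogICAgICAgIHVzZXJfZnJpZW5kbHlfbmFtZXMgeWVzCiAgICAgICAgZmluZF9tdWx0aXBhdGhzIHllcwp9CgpibGFja2xpc3Qgewp9Cg==" | base64 -d
defaults {
        user_friendly_names yes
        find_multipaths yes
}

blacklist {
}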

Label nodes in the OCP cluster


Because the OCP cluster is a hybrid environment with virtual and bare metal worker nodes, it
is important for the operator to distinguish between physical and virtual node types. Assigning
the right labels to the nodes allows granular deployment of resources to the correct worker
node types. This can be done either from the command line or from the OpenShift console.
Follow these steps to assign labels to the worker nodes.
From the command line, use this command:
oc label nodes <node-name> <label>


Here is an example where one virtual node is labeled vm and one physical node is labeled
hspc-fc:
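A minimal command-line sketch, assuming hypothetical node names ocp-worker-vm-1 and ocp-worker-bm-1:

oc label nodes ocp-worker-vm-1 nodeType=vm
oc label nodes ocp-worker-bm-1 nodeType=hspc-fc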

Procedure
1. From the OCP console, select Compute > Nodes.
2. From the nodes list click the ellipsis icon on a worker node (physical or virtual) and
select Edit Labels.

3. Assign the label using a key/value pair, for example: nodeType=hspc-fc, and then
click Save.
4. Use the following command to verify the labels:
oc get nodes --show-labels

HSPC and VSP Host Groups


The host groups required by Storage Plug-in for Containers are created automatically.
Storage Plug-in for Containers automatically searches for host groups and iSCSI targets
based on their names.
If you want to use existing host groups, rename them according to the naming rule. For
details, see Host group and iSCSI target naming rules in the HSPC Reference Guide.

Note: Storage Plug-in for Containers will overwrite host mode options even if
existing host groups have other host mode options.


HSPC and VSP Resource Groups


You can partition storage system resources by limiting the LDEV ID range added to the
resource group for a specific Kubernetes cluster. You can also isolate impacts between
Kubernetes clusters. The following requirements should be met:
■ Only one resource group per Kubernetes cluster is supported. Virtual storage machines
are not supported.
■ Storage system users must have access only to the resource group that they created. The
storage system user must not have access to other resource groups.
■ Create a pool from pool volumes with the resource group that you have created.
■ Allocate the necessary number of undefined LDEV IDs to the resource group.
■ Allocate the necessary number of undefined host group IDs to the resource group for each
storage system port defined in StorageClass. The number of host group IDs must be
equal to the number of hosts for all ports.
For details, see Resource Partitioning in the HSPC Reference Guide.


Install and Configure Backup and Data Protection Operator


Complete the following procedures to configure and enable OpenShift Application Data
Protection (OADP)/Velero functionality with Hitachi HCP CS S3 storage.

Choose a target S3 object store for backups


While any AWS-compliant S3 target can be used as a Velero backup target, it is
recommended that you use HCP for cloud scale due to the enterprise-grade compliance,
security, retention, and replication features available.


It is recommended that you back up to a remote S3 target separate from your source
infrastructure. This ensures maximum protection of your data in case of a site or
infrastructure-specific failure.

Configure Hitachi Content Platform for cloud scale


You can obtain a 60-day trial of Hitachi Content Platform for cloud scale (HCP for cloud scale)
by visiting https://trycontent.hitachivantara.com/. Follow the directions in your trial access
email after registering to generate credentials and to create an S3 bucket to be used as a
target for Velero backup data.
See Hitachi Content Platform for Cloud Scale Architecture Fundamentals for more
information.

Install OpenShift Application Data Protection (OADP) from the OperatorHub


The OpenShift Application Data Protection (OADP) operator sets up and installs Velero on
the OpenShift cluster.

Procedure
1. Log in to the OCP console, select OperatorHub under Operators, and then select the
OADP Operator and click Install.
2. Click Install to install the Operator in the openshift-adp project.

Once the operator has been installed, it looks like this when it is ready to use:

3. The next step is to configure the Data Protection Application instance which deploys
Velero and Restic pods.


4. Follow the instructions in the next section to complete the setup.

Create HCP CS/Velero credentials

Configure OpenShift Application Data Protection (OADP) with HCP CS


A DataProtectionApplication instance represents the configuration to install a data protection
application that can safely back up and restore, perform disaster recovery for, and migrate
Kubernetes cluster resources and persistent volumes.
To set up the DataProtectionApplication instance, you must first create the secret used to
access the S3 storage.
For additional and advanced setup, see the Red Hat OADP documentation.

Create secret for backup and snapshot locations


Installing the Data Protection Application custom resource (CR) requires creating a secret
object; this example uses the same credentials for both the backup and snapshot locations.
The following command uses the default secret name, cloud-credentials, and the credential
file created in the previous step.

oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=velero-hcpcs-credentials
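The velero-hcpcs-credentials file referenced above follows the standard Velero S3 credentials format; a minimal sketch with placeholder values for the HCP for cloud scale S3 access keys:

[default]
aws_access_key_id=<HCP CS S3 access key>
aws_secret_access_key=<HCP CS S3 secret key>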

Create DataProtectionApplication instance


The installation of the Data Protection Application (DPA) is done by creating an instance of
the DataProtectionApplication API. This can be done from the command line or by using the
OpenShift console.
The following procedure uses the OpenShift console.


Procedure
1. Log in to the OCP console, click Operators > Installed Operators and select the OADP
Operator.
2. Under Provided APIs, click Create instance in the DataProtectionApplication box.


3. Click YAML View and update the parameters of the DataProtectionApplication manifest
with the following:
■ HCP CS information:
● S3URL - Set the URL to the S3 endpoint URL of your HCP CS system.
● Region - You can choose any region for this variable because HCP for cloud
scale will accept any region name that is provided.
● Bucket - This variable should be set to the S3 bucket name you configured in
HCP for cloud scale.
● Credential - set this to the name of the secret with the access keys for HCP CS
(for example cloud-credentials).
● Specify a prefix for Velero backups.
● The snapshot location must be in the same region as the PVs.


■ Enable the Container Storage Interface (CSI) in the DataProtectionApplication CR to
back up persistent volumes (PVs) with CSI snapshots. In this specific case, CSI
snapshots are supported with the Hitachi Storage Plug-in for Containers (HSPC).
● Add the csi default plug-in.
● Add the EnableCSI feature flag.

4. Click Create.
5. Verify the installation of the OADP resources.
When the DataProtectionApplication instance is created, you should have Velero, Restic
pods, and services running within the openshift-adp namespace.
6. Run the oc get all -n openshift-adp command and wait until all the pods are
running successfully. The following figure shows both Velero and Restic pods running
within the openshift-adp namespace.


Velero is now enabled and installed on the Red Hat OpenShift cluster.

The following is the text of the DataProtectionApplication custom resource (CR):

apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
  name: velero-ocp-hcpcs
  namespace: openshift-adp
spec:
  backupLocations:
    - velero:
        config:
          profile: default
          region: us-west-1
          insecureSkipTLSVerify: "true"
          s3Url: "https://tryhcpforcloudscale.hitachivantara.com/"
          s3ForcePathStyle: "true"
        credential:
          key: cloud
          name: cloud-credentials
        default: true
        objectStorage:
          bucket: ocp-eng-velero-target
          prefix: velero
        provider: aws
  configuration:
    restic:
      enable: true
    velero:
      defaultPlugins:
        - openshift
        - aws
        - csi
        - kubevirt
      featureFlags:
        - EnableCSI
  snapshotLocations:
    - velero:
        config:
          profile: default
          region: us-west-1
        provider: aws

Back up persistent volumes with CSI snapshot


To back up persistent volumes with CSI snapshots, create a VolumeSnapshotClass custom
resource (CR) to register the CSI driver. This example uses the Hitachi HSPC CSI driver.
The VolumeSnapshotClass CR must contain the following parameters:
■ Ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class
label.
■ The driver must use hspc.csi.hitachi.com which corresponds to Hitachi Storage
Plug-in for Containers (HSPC).
■ The poolID must be the same as the one specified in the StorageClass.
■ The secret name and secret namespace must be the same as the ones specified in the
StorageClass definition.
The YAML file below provides an example of a VolumeSnapshotClass CR using Hitachi
Storage Plug-in for Containers (HSPC).

[ocpusr@dminws-c2]$ cat VolumeSnapshotClass_CSI_HSPC.yaml

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: volume-snapshot-class-csi-hspc
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: hspc.csi.hitachi.com
deletionPolicy: Delete
parameters:
  poolID: "1"
  csi.storage.k8s.io/snapshotter-secret-name: "secret-vsp-113"
  csi.storage.k8s.io/snapshotter-secret-namespace: "test"

Procedure
1. Create the VolumeSnapshotClass with the following command:

oc apply -f VolumeSnapshotClass_CSI_HSPC.yaml


2. Verify the status with the following command:

oc get VolumeSnapshotClass

[ocpusr@dminws-c2]$ oc get VolumeSnapshotClass


NAME DRIVER DELETIONPOLICY AGE
volume-snapshot-class-csi-hspc hspc.csi.hitachi.com Delete 7d21h

Note: In addition to the VolumeSnapshotClass, CSI must be enabled in the
DataProtectionApplication CR as indicated in the previous section.

Solution Implementation and Validation


If you have followed the guidance in the Solution Design section and your infrastructure is
prepared, you can try the example deployments. This reference architecture was validated by
the following:
■ Deploying the MySQL application with a persistent volume using Helm and proving data
protection functionality in OpenShift with OADP/Velero and Hitachi Content Platform for
cloud scale. In this scenario, the backup/restore is performed leveraging HSPC CSI snapshots.
■ Deploying the Wordpress application with persistent volumes using Helm and proving data
protection functionality in OpenShift with OADP/Velero and Hitachi Content Platform for
cloud scale. In this scenario, the backup/restore is performed leveraging Restic, and the
backup is taken in a primary OCP cluster and restored in a secondary OCP cluster. This
process can be used to migrate cluster resources to other clusters.
■ Deploying and configuring Hitachi Replication Plug-in for Containers and validating
replication of applications across datacenters.
■ Installing and configuring Hitachi Storage Plug-in for Prometheus to monitor Kubernetes
resources and Hitachi storage.
■ Deploying a private container registry using Red Hat Quay and Hitachi HCP CS S3.


Persistent volume and data protection using Velero and Hitachi HCP
CS S3 – Scenario 1
The following layout shows a high-level configuration of the setup to perform the backup and
restore operation in the same OCP cluster using Velero and HSPC CSI snapshots.

Use the following procedures to:


■ Deploy a MySQL database application with persistent volumes backed by Hitachi VSP
storage system.
■ Explore the backend storage resources mapping and allocations.
■ Create a table and insert test data, back it up to HCP for cloud scale, delete it, and then
restore the application from HCP for cloud scale using OADP/Velero Operator and verify
the data.


The following figure shows the solution architecture with the solution to be validated.

Install and configure Helm utilities


Helm allows you to install complex container-based applications easily with the ability to
customize the deployment to your needs. On your Linux workstation, install the Helm binary
by following the Helm documentation for your distribution.
Add the Bitnami repository to your Helm configuration by running the following command:
helm repo add bitnami https://charts.bitnami.com/bitnami
Search for the MySQL Helm chart by running the following command:

helm search repo mysql

The following figure shows example output, showing that the Helm binary is installed properly
and the bitnami repository has been added with a MySQL Helm chart available for use.

[ocpusr@dminws-c2]$ helm search repo mysql


NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/mysql 8.8.23 8.0.28 Chart to create a Highly
available MySQL cluster

Verify StorageClasses
The OCP cluster used for this test is a hybrid, containing some virtual worker nodes hosted
on VMware ESXi and bare metal worker nodes. The bare metal worker nodes use FC HBAs
and are connected to Hitachi VSP storage.
On this OCP cluster there is a StorageClass using the vSphere CSI provisioner, and other
StorageClasses using the Hitachi HSPC CSI provisioner. To verify the defined StorageClasses,
enter the oc get sc command.
For this example, we used the following StorageClass:
■ One StorageClass sc-vsp-113 for the primary MySQL DB instance.
The following figure shows an example listing of StorageClasses available on the OCP
cluster.
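For reference, the following is a minimal sketch of what an HSPC-backed StorageClass such as
sc-vsp-113 could look like; the serial number and port IDs are placeholders, while the pool ID and
secret names match values used elsewhere in this guide (a complete HSPC StorageClass with the
full secret parameter set is shown in the HRPC section later in this document):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-vsp-113
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  serialNumber: "<VSP serial number>"
  poolID: "1"
  portID: "<storage port IDs, for example CL1-D,CL4-D>"
  connectionType: fc
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/provisioner-secret-name: "secret-vsp-113"
  csi.storage.k8s.io/provisioner-secret-namespace: "test"
  csi.storage.k8s.io/node-stage-secret-name: "secret-vsp-113"
  csi.storage.k8s.io/node-stage-secret-namespace: "test"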

Customize and deploy MySQL Helm chart with persistent storage


You can customize a Helm chart deployment by downloading the chart values to a YAML file
and using that file during Helm chart installation. You can also specify the custom values for a
deployment on the command line or in a script.
First, we create a namespace (or project) with the following command:

oc new-project demoapps


The following is the command and parameters used to deploy MySQL Helm chart:

helm install -n demoapps mysql-demo1 \
--set primary.nodeSelector.nodeType=hspc-fc \
--set global.storageClass=sc-vsp-113 \
--set primary.persistence.size=512Mi \
--set auth.rootPassword=Hitachi123,auth.database=hur_database \
--set secondary.replicaCount=0 \
--set primary.podSecurityContext.enabled=false \
--set primary.containerSecurityContext.enabled=false \
--set secondary.podSecurityContext.enabled=false \
--set secondary.containerSecurityContext.enabled=false \
bitnami/mysql

You must modify these values to match your environment and the StorageClasses that are
available in your OCP cluster. The values and their corresponding impact to the MySQL
deployment are as follows:
■ nodeSelector - Sets the node type where the pod must be deployed.
■ database - Sets the name of the database to be created.
■ rootPassword - Sets the password for the root user in the MySQL DB.
■ global.storageClass - Sets the StorageClass to be used for the MySQL pod.
■ primary.persistence.size - Sets the size of the persistent volume to be assigned to the
MySQL DB.
■ Set the SecurityContext parameters according to your environment.
Execute the command and Helm will begin to deploy your MySQL deployment to your OCP
cluster.
The following figure shows output from Helm at the beginning of the deployment after the
install command has been issued.


You can monitor the MySQL deployment by viewing the resources in your demoapps
namespace. Run the oc get all -n demoapps command to display the readiness of the
pods, deployment, statefulsets, and services of the MySQL deployment.
The following figure shows an example of the output of this command for a fully running,
healthy MySQL deployment.

Verify the MySQL deployment and insert some data


In the previous step you displayed the resources in the demoapps namespace, which
included the output of the MySQL pod. The next step is to connect to the MySQL DB and
insert some test data.

Procedure
1. Insert test data into the MySQL database.
Once the MySQL pod is ready, a new table called replication_cr_status is
created on the hur_database database. Then a few records of test data are inserted
into this new table.
The following shows the commands used to connect to the MySQL DB to create a table
and insert test data.
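A minimal sketch of such a session is shown below, assuming the pod name of the MySQL primary
instance and using the root password and database name set during the Helm installation; the table
columns and values are illustrative only:

oc exec -it <mysql-pod-name> -n demoapps -- mysql -u root -pHitachi123 hur_database

CREATE TABLE replication_cr_status (id INT PRIMARY KEY, status VARCHAR(32));
INSERT INTO replication_cr_status VALUES (1, 'PAIR'), (2, 'COPY'), (3, 'PSUS');
SELECT * FROM replication_cr_status;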


2. Verify the data that was inserted into the MySQL database; this data is used later to test
and verify backup and restore of persistent volumes:

Explore backend storage resource mapping and allocations


When modifying the Helm chart values for the MySQL deployment, you provided a
StorageClass that mapped back to Hitachi VSP for persistent volume allocation to the
MySQL pod. You can follow the storage paths from the OCP persistent volume layer to the
Hitachi VSP layer. Complete the following procedures to validate the storage path from the
running pods to the allocated backend storage.

Verify the MySQL persistent volume data path


Starting at the OCP cluster layer, you can explore the PVC and corresponding PVs that were
provisioned by the Hitachi HSPC CSI driver following the procedure:


Procedure
1. To list the PVCs created during the MySQL Helm chart deployment, run the following
command:

oc get pvc -n demoapps

2. Get details on the PVC by running the following command:

oc describe pvc data-mysql-demo-0 -n demoapps

The following figure shows the output of this command, including the details of the
persistent volume claim and the access mode that was specified during Helm chart
deployment.

3. Note the volume identifier and copy it for the next step.
4. Now that you have viewed the details of the PVC for the MySQL DB, explore the PV
associated with the claim by running the following command, entering the volume
identifier from the previous step as the PV ID:

oc describe pv <PV ID>

The following figure shows the output of this command, including the details of the PV
created for the PVC. In the VolumeAttributes, note the volume nickname for the next
step.


5. Open the Storage Navigator and connect to the VSP storage system.
6. In the left pane, expand Pools, and then click on Pool #1 which is the one specified in
the sc-vsp-113 StorageClass.
7. Click Virtual Volumes. You will see container volumes provisioned to your cluster in the
right pane.
8. Find the volume that matches the volume nickname from the previous step.

In this way, we can trace the full data path from the OCP cluster to the Hitachi VSP
storage system.

Back up the MySQL application using Red Hat OADP/Velero and HSPC CSI
snapshots
Follow this procedure to create and verify the status of the backup:

Procedure
1. From your Linux workstation, log in to your OCP cluster.


2. Create a backup custom resource (CR). Here is an example of the backup CR used to
back up the MySQL DB:

cat demoapps-backup-csi.yaml

apiVersion: velero.io/v1
kind: Backup
metadata:
  namespace: openshift-adp
  name: demoapps-backup-csi
  labels:
    velero.io/storage-location: default
spec:
  includedNamespaces:
    - demoapps

3. Issue the following command to back up the MySQL application.

oc apply -f demoapps-backup-csi.yaml

4. The following command will list all the backups.

[ocpusr@dminws-c2]$oc get backups -n openshift-adp


NAME AGE
demoapps-backup-csi 68s

5. When the Velero backup is initiated, you can run the oc describe backup
demoapps-backup-csi -n openshift-adp command to show the progress of the Velero backup.


Verify the VolumeSnapshots and VolumeSnapshotContent created during the backup
Because we are using Hitachi HSPC, which supports CSI snapshots, the backup CR creates
backup files for Kubernetes resources and internal images on the S3 object storage, and
creates snapshots for the persistent volumes (PVs). The CSI snapshot controller binds
VolumeSnapshot and VolumeSnapshotContent objects.
The following procedure traces the VolumeSnapshots and VolumeSnapshotContent back to
the Hitachi VSP storage system.

Procedure
1. Run the oc get volumesnapshots -n demoapps command to list the
volumesnapshots created during the backup process. In the output below we can see
the MySQL PVC, the name of the volume snapshot, the volume snapshot class, and the
associated volume snapshot content name.


2. The following command shows additional details of the volume snapshot. We can see
the label corresponding to the name of the backup demoapps-backup-csi and the
MySQL PVC called data-mysql-demo-0.
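A likely form of that command, using the volume snapshot name returned by the previous step, is:

oc describe volumesnapshot <volumesnapshot-name> -n demoapps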

3. Run the oc get volumesnapshotcontent command to list the volume snapshot
contents. In the list we can identify the volume snapshot content associated with the
volume snapshot described previously. Also, we can see the Hitachi HSPC CSI driver
associated with this volume.

4. If we use the describe command, we can see more details for the volume snapshot
content and trace back to the Hitachi VSP LDEV created.
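For example, a command of the following form can be used, where the object name is taken from
the listing in the previous step:

oc describe volumesnapshotcontent <volumesnapshotcontent-name>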


5. Open Storage Navigator and connect to the VSP storage system.


6. In the left pane expand Pools, and then click on Pool #1 which is the one specified in
the sc-vsp-113 StorageClass.
7. Click on Virtual Volumes. You will see container volumes; LDEV spc-e80a75a836
corresponds to the MySQL volume and LDEV spc-86ac993d0d corresponds to the
snapshot.

Explore the Hitachi Content Platform for cloud scale S3 object store
Follow this procedure to explore the data on the HCP CS S3 bucket.


Procedure
1. Open a web browser, navigate to your Hitachi Content Platform for cloud scale web
interface, and log in.
2. Browse the contents of your bucket, locate the ocp-eng-velero-target folder, and open it.
You will see the contents of all the Kubernetes objects that were backed up during the
Velero backup, with the exception of the PVs and their data.

Delete the MySQL application from OCP cluster


Follow this procedure to delete the MySQL application and its persistent volume.

Procedure
1. Log in to your OCP cluster from your Linux workstation.
2. Run the helm delete mysql-demo -n demoapps command to remove the MySQL
application.

3. When the application is removed, delete PVCs and PVs from the cluster.
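For example, assuming the PVC name shown earlier in this scenario, a command of the following
form removes the claim (the bound PV is then released according to the StorageClass reclaim
policy):

oc delete pvc data-mysql-demo-0 -n demoapps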


Restore the MySQL application using the Velero operator


Follow this procedure to restore an application from backup.

Procedure
1. Run the velero backup get command (or the equivalent oc get backups -n openshift-adp
command) to list the backups in the Velero database.
You can see the backups of the MySQL application that you took previously.

[ocpusr@dminws-c2]$oc get backups


NAME AGE
demoapps-backup-csi 53m

2. Create a restore custom resource (CR).

The following is an example of the restore CR used to restore MySQL from the
demoapps-backup-csi backup:

cat demoapps-restore-csi.yaml

apiVersion: velero.io/v1
kind: Restore
metadata:
  namespace: openshift-adp
  name: demoapps-restore-csi
spec:
  backupName: demoapps-backup-csi
  includedNamespaces:
    - demoapps

3. Run the following command to begin the restore of the MySQL application and its
associated PVs and data:

oc apply -f demoapps-restore-csi.yaml

4. The following command lists the restores:

[ocpusr@dminws-c2]$oc get restores


NAME AGE
demoapps-restore-csi 5s

5. After the Velero restore is submitted, run the oc describe restore demoapps-
restore-csi command to view the progress of the restore. Note that the restore will


not show as completed until the application and all of its PVs and associated data are
restored.

6. Run the following commands to view all of the MySQL components that were restored.
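For example, the following commands, using the namespace from this scenario, list the restored
resources and persistent volume claims:

oc get all -n demoapps
oc get pvc -n demoapps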


Verify MySQL persistent data


The following shows the commands used to connect to the restored MySQL DB and verify the
data that was inserted before the backup. You will see all the records created before deleting
the MySQL application.
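A minimal sketch of that verification, assuming the pod name of the restored MySQL instance and
the credentials set at install time, is:

oc exec -it <mysql-pod-name> -n demoapps -- mysql -u root -pHitachi123 hur_database -e "SELECT * FROM replication_cr_status;"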


Persistent volume and data protection using Velero and Hitachi HCP
CS S3 – Scenario 2
The following layout shows a high-level configuration of the setup to perform the backup and
restore operations between primary and secondary OCP clusters using Velero and Restic.

Use the following procedures to:


■ Deploy a Wordpress application with persistent volumes backed by Hitachi VSP
storage systems and using different SPBM policies.
■ Explore the vSphere CNS layer.
■ Modify the Wordpress application in the primary OCP cluster, back it up to HCP for cloud
scale, delete it, and then restore the application to a secondary OCP cluster from HCP for
cloud scale using OADP/Velero Operator and verify the data.


The following figure shows the solution architecture with the solution to be validated.

Install and configure Helm utilities


The installation and backup processes are executed in the primary cluster called jpc2. The
following shows the master and worker nodes corresponding to this hybrid OCP cluster
(virtual and bare metal worker nodes).


Helm allows you to install complex container-based applications easily with the ability to
customize the deployment to your needs. On your Linux workstation, install the Helm binary
by following the Helm documentation for your distribution.
Add the Bitnami repository to your Helm configuration by running the following command:
helm repo add bitnami https://charts.bitnami.com/bitnami
Search for the Wordpress Helm chart by running the following command:

helm search repo wordpress

The following figure shows example output, showing that the Helm binary is installed properly
and the bitnami repository has been added with a Wordpress Helm chart available for use.

[ocpusr@dminws-c2]$ helm search repo wordpress


NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/wordpress 13.0.11 5.9.0 WordPress is the world's most
popular blogging ...
bitnami/wordpress-intel 0.1.4 5.9.0 WordPress for Intel is the
most popular bloggin...

Verify StorageClasses
The OCP cluster used for this test is a hybrid, containing some virtual worker nodes hosted
on VMware ESXi and bare metal worker nodes. The virtual worker nodes are running on a
VMware vSphere cluster. The VMware cluster has been configured with different SPBM
policies that allow placement of the Persistent Volumes (PVs) into different types of storage
(vSAN, vVols, VMFS).
On this OCP cluster we have StorageClass using vSphere CSI provisioner, and other
StorageClasses with Hitachi HSPC CSI provisioner. To verify the defined StorageClasses
enter the command oc get sc.
For this example, we used the following three StorageClasses:
■ One for the frontend Wordpress pod
■ One for the primary MariaDB instance
■ One for the secondary MariaDB instance
The following figure shows an example listing of StorageClasses available on the OCP
cluster.


Customize and deploy Wordpress Helm chart with persistent storage


You can customize a Helm chart deployment by downloading the chart values to a YAML file
and using that file during Helm chart installation. You can also specify the custom values for a
deployment on the command line or in a script.
First, we create a namespace (or project) with the following command:

oc new-project demovelero

The following is the command and parameters used to deploy Wordpress Helm chart:

helm install -n demovelero wordpress \
--set wordpressUsername=admin \
--set wordpressPassword=wordpress \
--set replicaCount=1 \
--set persistence.storageClass=vsan-test-sc \
--set persistence.size=200Mi \
--set mariadb.architecture=replication \
--set mariadb.primary.persistence.storageClass=hitachi-vvol-tier1-sc \
--set mariadb.primary.persistence.size=256Mi \
--set mariadb.secondary.persistence.storageClass=hitachi-vmfs-tier2-sc \
--set mariadb.secondary.persistence.size=256Mi \
bitnami/wordpress

You must modify these values to match your environment and the StorageClass(es) that are
available in your OCP cluster. The values and their corresponding impact to the Wordpress
deployment are as follows:
■ wordpressUsername - Sets the admin username for the Wordpress application.
■ wordpressPassword - Sets the password for the admin user in the Wordpress application.
■ replicaCount - Configures the number of frontend Wordpress pods.
■ persistence.storageClass - Sets the StorageClass to be used for the frontend Wordpress
pods.
■ persistence.size - Sets the size of the persistent volume to be assigned to the frontend
Wordpress pods.
■ mariadb.architecture - Indicates whether Helm should deploy a single backend database
(standalone, single pod) or a high-availability backend database (replication, two pods).
■ mariadb.primary.persistence.storageClass - Sets the StorageClass to be used for the
primary MariaDB instance.
■ mariadb.primary.persistence.size - Sets the size of the persistent volume to be assigned
to the primary MariaDB instance.
■ mariadb.secondary.persistence.storageClass - Sets the StorageClass to be used for the
secondary MariaDB instance.
Set the SecurityContext in your OCP cluster according to your environment and security
requirements.


Execute the command and Helm will begin to deploy your Wordpress deployment to your
OCP cluster.
The following figure shows output from Helm at the beginning of the deployment after the
install command has been issued.
You can monitor the Wordpress deployment by viewing the resources in your demovelero
namespace. Run the oc get all -n demovelero command to display the readiness of the
pods, deployment, statefulsets, and services of the Wordpress deployment.
The following figure shows an example of the output of this command for a fully running,
healthy Wordpress deployment.

Verify the Wordpress deployment


In the previous step you displayed the resources in the demovelero namespace, which
included the output of the services for Wordpress. The next step is to expose the wordpress
service using the oc expose service/wordpress command, log in to Wordpress, and
create a post that will be validated during the backup/restore process.

Procedure
1. To identify the host/port of the exposed Wordpress service, use the following command:
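For example, assuming the route created by oc expose keeps the service name, the host can be
listed with:

oc get route wordpress -n demovelero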

2. Open a browser and enter the address of the Wordpress service (http), and then the
Wordpress interface should display. The following shows an example of the default
Wordpress application user interface. We can identify the Wordpress application has
been deployed in OCP cluster #2 (jpc2).


3. Create some content. For this purpose, browse to the admin interface of Wordpress
(/admin) and log in using the username and password you set in the Helm installation
script.
4. Click the Create your first post link.
5. Enter information to create a blog post, and then publish the post.
6. Navigate back to the default URL of the Wordpress application to verify that your post
was committed to the database. Here is an example of the new post:


Explore backend storage resource mapping and allocations


When modifying the Helm chart values for the Wordpress deployment, you provided three
different StorageClasses that mapped back to vSphere SPBM policies for persistent volume
allocation to the Wordpress and MariaDB pods.
Using the VMware vSphere Cloud Native Storage and First Class Disk (FCD) features, you
can follow the storage paths from the Kubernetes persistent volume layer to the vSphere
vSAN/vVol layer.

Verify the backend storage resources mapping and allocations


Starting at the OCP cluster layer, you can explore the PVC and corresponding PVs that were
provisioned by the vSphere CSI driver (backed by Hitachi VSP storage through the SPBM
policies) by following this procedure.

Procedure
1. To list the PVCs created during the Wordpress Helm chart deployment, run the following
command:

oc get pvc -n demovelero


2. To observe details about the volume within vCenter, open a browser and then open a
vSphere web client session to the vCenter hosting the OCP cluster jpc2.
3. Highlight the vSphere cluster hosting the OCP cluster VMs and navigate to its Monitor
tab.
4. In the left pane expand Cloud Native Storage, and then click Container Volumes.
You will see container volumes provisioned to your cluster in the right pane.

In the next step, we are going to display more details for one of the container volumes,
the one assigned to the primary DB.

5. Find the volume that matches your PVC ID from the previous step, and then click on the
Details icon.
This displays the details about the volume that are surfaced from OpenShift, including
the persistent volume ID, namespace, labels, and pod allocation from within the OCP
cluster.

Back up the Wordpress application using Red Hat OADP/Velero and HSPC CSI
snapshots
Follow this procedure to create and verify the status of the backup:

Procedure
1. From your Linux workstation, log in to your OCP cluster.


2. Create a backup custom resource (CR). Make sure the defaultVolumesToRestic
parameter is set to true. The following is an example of the backup CR used to back
up the Wordpress application:

cat demowordpress-backup-restic.yaml

apiVersion: velero.io/v1
kind: Backup
metadata:
  namespace: openshift-adp
  name: demowordpress-backup-restic
  labels:
    velero.io/storage-location: default
spec:
  defaultVolumesToRestic: true
  includedNamespaces:
    - demovelero

3. Issue the following command to back up the Wordpress application:

oc apply -f demowordpress-backup-restic.yaml

4. The following command lists all the backups.

[ocpusr@dminws-c2]$ oc get backups -n openshift-adp


NAME AGE
demoapps2-backup-restic 77m
demowordpress-backup-restic 9s

5. When the Velero backup is initiated, you can run the oc describe backup
demowordpress-backup-restic -n openshift-adp command to show the progress of the Velero backup.


Explore the Hitachi Content Platform for cloud scale S3 object store
Follow this procedure to explore the data on the HCP CS S3 bucket.

Procedure
1. Open a web browser, navigate to your Hitachi Content Platform for cloud scale web
interface, and log in.
2. Browse the contents of your bucket; for this test we are using a new bucket called
onprem-vcf-velero-target. Open it.
You will see the contents of all of the Kubernetes objects that were backed up during the
Velero backup, with the exception of the PVs and their data.


3. Browse the contents of your bucket, locate the onprem-vcf-velero-target folder, and
open it.
4. Navigate back to the root of your bucket and select the plugins folder.
5. Navigate to restic/demovelero/data, where demovelero is the name of the namespace
for the Wordpress application.
6. Under data you will find multiple folders containing the data.
These folders correspond to the PVs (vSphere container volumes) that were backed up
by Red Hat OADP/Velero using Restic.


Note: Do not delete or modify these objects. If you want to do so, do it through the Velero CLI.

Delete the Wordpress application from OCP cluster


Follow this procedure to delete the Wordpress application and its persistent volumes from the
primary cluster.

Procedure
1. Log in to your OCP cluster from your Linux workstation.
2. Run the helm delete wordpress -n demovelero command to remove the
Wordpress application.
3. When the application is removed, run the oc delete project demovelero
command to remove all of the resources including PVCs and PVs from the cluster.

Note: Because the restore process will be executed in a secondary OCP cluster, it is not
strictly necessary to remove the Wordpress application from the primary OCP cluster.


Restore the Wordpress application using the Velero operator


The restore process in this demo is executed in a secondary OCP cluster called jpc3. This
OCP cluster is a hybrid (virtual and bare metal worker nodes) similar to the primary cluster.
While scheduling of the applications can be managed using a NodeSelector, for this exercise
the bare metal node has been marked as unschedulable so that the application is scheduled/
restored on one of the virtual worker nodes, because only these worker nodes can leverage
vSphere CSI.
Follow this procedure to restore an application from backup in the secondary OCP cluster.

Procedure
1. Verify the bare metal worker node has been marked as unschedulable.
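For example, a node can be cordoned and the node status verified with commands of the following
form (the node name is a placeholder):

oc adm cordon <bare-metal-worker-node>
oc get nodes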

2. Because we are restoring the application in a new cluster, make sure that the
StorageClasses have the same name as the ones used to deploy the Wordpress
application in the primary cluster.
The following example shows StorageClasses with the same name as those from the
primary OCP cluster:

3. OADP/Velero in the secondary cluster has been installed/configured with the same
S3 bucket. Verify that the secondary cluster can see the backup made from the primary
cluster. Run the velero backup get command (or the equivalent oc get backups -n
openshift-adp command) to list the backups in the Velero database.
You can see the backup of the Wordpress application that you took previously.

[ocpusr@dminws-c3]$ oc get backups -n openshift-adp


NAME AGE
demoapps2-backup-restic 82m
demowordpress-backup-restic 4m43s

4. Verify there is no namespace with the name demovelero:

[ocpusr@dminws-c3]$ oc get project demovelero


Error from server (NotFound): namespaces "demovelero" not found

5. Create a restore custom resource (CR).


The following is an example of the restore CR used to restore Wordpress from the
demowordpress-backup-restic backup using Restic:

cat demowordpress-restore-restic.yaml


apiVersion: velero.io/v1
kind: Restore
metadata:
  namespace: openshift-adp
  name: demowordpress-restore-restic
spec:
  backupName: demowordpress-backup-restic
  includedNamespaces:
    - demovelero

6. Run the following command to begin the restore of the Wordpress application and its
associated PVs and data:

oc apply -f demowordpress-restore-restic.yaml

The following command lists the restores:

[ocpusr@dminws-c3]$ oc get restores -n openshift-adp


NAME AGE
demoapps2-restore-restic 89m
demowordpress-restore-restic 30s

7. After the Velero restore is submitted, run the oc describe restore
demowordpress-restore-restic -n openshift-adp command to view the progress of the restore.


Note that the restore will not show as complete until the application and all of its PVs
and associated data are restored.

8. Run the following commands to view all of the Wordpress components that were
restored in the demovelero namespace in the secondary cluster.
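For example, the following commands list the restored resources and persistent volume claims in
that namespace:

oc get all -n demovelero
oc get pvc -n demovelero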


9. To observe details about the restored container volumes within vCenter, open a browser,
and then open a vSphere web client session to the vCenter hosting the secondary OCP
cluster jpc3.
10. Highlight the vSphere cluster hosting the secondary OCP cluster VMs and navigate to
its Monitor tab.
11. In the left pane expand Cloud Native Storage, and then click Container Volumes.
You will see the restored container volumes provisioned to your cluster in the right pane.

The following is an example of additional details of one of the restored volumes for the
primary DB.


Verify Wordpress persistent application data


Open a browser and enter the address of the Wordpress service (http) in the secondary OCP
cluster; the Wordpress interface should display. The following shows an example of the
restored Wordpress application with the blog entry created before backup. The URL shows
the Wordpress application has been restored and is running on the OCP cluster jpc3.

Disaster Recovery / Replication Services for Persistent Storage


Replication services for persistent storage on Hitachi VSP storage systems can be enabled
either with a storage class that uses Hitachi Storage (VASA) Provider-supported VMware
CNS-CSI persistent storage VMDKs, or with Hitachi Replication Plug-in for Containers
(HRPC) for Hitachi CSI-managed persistent volumes.
In this guide we are demonstrating a use case for persistent volume replication with Hitachi
Replication Plug-in for Containers.
Hitachi Replication Plug-in for Containers (HRPC) provides replication data services for the
persistent volumes on Hitachi VSP storage platforms, covering use cases such as:
■ Migration – Persistent volumes can be snapshotted and cloned locally or to remote
Kubernetes clusters with their own remote VSP storage system.
■ Disaster Recovery – Persistent volumes can be protected against datacenter failures by
having the data replicated at extensive distances using Hitachi Universal Replicator.
■ Backup – Persistent volumes can be protected with point in time snapshots locally with the
Hitachi CSI plugin (Hitachi Storage Plug-in for Containers), or they can be backed up to
remote VSP storage using HRPC.


Hitachi Replication Plug-in for Containers (HRPC) supports any Kubernetes cluster
configured with Hitachi Storage Plug-in for Containers; this guide covers the installation on a
Red Hat OpenShift Container Platform configured with Hitachi VSP storage. The
infrastructure for this demo is based on the Hitachi Unified Compute Platform.
For configuration details, see the Hitachi Replication Plug-in for Containers Configuration
Guide.

Requirements
Before installation, complete the following requirements:
■ Install two Kubernetes clusters, one in the primary and the other in the secondary site. A
single Kubernetes cluster is not supported.
■ Configure Hitachi Universal Replicator (HUR). For more details, see Universal Replicator
Overview.
■ Install Hitachi Storage Plug-in for Containers in both clusters, whether Kubernetes or Red
Hat OpenShift Container Platform.
■ For inter-site connectivity:
● Hitachi Replication Plug-in for Containers in the primary site must communicate with
the Kubernetes cluster in the secondary site and vice versa.
● Hitachi Replication Plug-in for Containers in the primary site must communicate with
the storage system in the secondary site and vice versa.
● A connection between the primary and secondary storage system REST APIs is required.
● A Fibre Channel or iSCSI connection is needed between the primary and secondary storage
systems for data copy.

See the HRPC Configuration Guide for more details.


Installing and Configuring Hitachi Replication Plug-in for Containers


For this demo we configured two OpenShift clusters, each with 3 masters and 3 workers,
and each cluster connected to a different Hitachi VSP storage system, as seen in the
following figure.


Configure the storage systems


Configure the storage system for replication:
■ Configure the storage system as described in the Hitachi Storage Plug-in for Containers
Quick Reference Guide.
■ Configure the remote path between primary site and secondary site storage systems. For
details, see the Hitachi Universal Replicator (HUR) User Guide.
■ Configure journal volumes. For details, see the Hitachi Universal Replicator (HUR) User
Guide.
■ Create a StorageClass in both primary and secondary sites:
● The name and fstype of the StorageClass must be the same for both sites
● The StorageClass in the primary site must point to the storage in the primary site.
● The StorageClass in the secondary site must point to the storage in the secondary site.

An example of the StorageClass (for both sites) is provided in Creating a manifest file
for Replication CR.
■ Create a namespace in both primary and secondary sites. The namespace must have the
same name in both sites.
The following figure shows the remote connection configured between the two Hitachi VSP
storage systems; the first VSP is connected to the primary Kubernetes cluster, and the
second VSP is connected to the secondary Kubernetes cluster.

Configure Hitachi Replication Plug-in for Containers (HRPC)


The installation of HRPC requires a dedicated management workstation/VM that can
access both the primary and secondary Kubernetes clusters.
The following tasks are a summary of the installation/configuration steps; for details, follow
the Hitachi Replication Plug-in for Containers Configuration Guide.


Part I: Prepare manifest files and environment variables

Procedure
1. Download and extract the installation media for HRPC into the management
workstation.

unzip hrpc_<version>.zip

2. Get the kubeconfig file from both the primary and secondary sites.

KUBECONFIG_P=/path/to/primary-kubeconfig
KUBECONFIG_S=/path/to/secondary-kubeconfig

The following files are created in the later steps:

SECRET_KUBECONFIG_P=/path/to/primary-kubeconfig-secret.yaml
SECRET_KUBECONFIG_S=/path/to/secondary-kubeconfig-secret.yaml

3. Configure an environment variable for the secret file of the storage system.

SECRET_STORAGE=/path/to/storage-secret.yaml

4. Copy the namespace manifest file to the management machine. This file is provided in
the media kit (hspc-replication-operator-namespace.yaml). Do not edit it.
5. Create a Secret manifest file with the secondary kubeconfig information to access
the secondary Kubernetes cluster from Hitachi Replication Plug-in for Containers
running in the primary Kubernetes cluster. For reference, see the
remote-kubeconfig-sample.yaml file. Here is an example:

# base64 encoding
cat ${KUBECONFIG_S} | base64 -w 0
vi secondary-kubeconfig-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: hspc-replication-operator-remote-kubeconfig
  namespace: hspc-replication-operator-system
type: Opaque
data:
  remote-kubeconfig: <base64 encoded secondary kubeconfig>

6. Create a Secret manifest file with the primary kubeconfig information to access
the primary Kubernetes cluster from Hitachi Replication Plug-in for Containers running in
the secondary Kubernetes cluster. For reference, see the
remote-kubeconfig-sample.yaml file.


Here is an example:

# base64 encoding
cat ${KUBECONFIG_P} | base64 -w 0
vi primary-kubeconfig-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: hspc-replication-operator-remote-kubeconfig
  namespace: hspc-replication-operator-system
type: Opaque
data:
  remote-kubeconfig: <base64 encoded primary kubeconfig>

7. Create a Secret manifest file containing storage system information that enables access
by Hitachi Replication Plug-in for Containers. For reference, see the
storage-secrets-sample.yaml file. This manifest file includes information for both the
primary and secondary storage systems.
Here is an example:

vi ${SECRET_STORAGE}

apiVersion: v1
kind: Secret
metadata:
  name: hspc-replication-operator-storage-secrets
  namespace: hspc-replication-operator-system
type: Opaque
stringData:
  storage-secrets.yaml: |-
    storages:
      - serial: 40016               # Serial number, primary storage system
        url: https://172.25.47.x    # URL for the REST API server
        user: UserPrimary           # User, primary storage system
        password: PasswordPrimary   # Password for user
        journal: 1                  # Journal ID, HUR primary storage system
      - serial: 30595               # Serial number, secondary storage system
        url: https://172.25.47.y    # URL for the REST API server
        user: UserSecondary         # User, secondary storage system
        password: PasswordSecondary # Password for user
        journal: 1                  # Journal ID, HUR secondary storage system

8. Modify the Hitachi Replication Plug-in for Containers manifest file
(hspc-replication-operator.yaml) provided in the media kit based on your requirements,
for example to reference your private registry.


Part II: Install the Hitachi Replication Plug-in for Containers Operator

Procedure
1. From the management workstation, log in to both the primary and secondary clusters:

KUBECONFIG=${KUBECONFIG_P} oc login -u <admin user> -p <Password>


KUBECONFIG=${KUBECONFIG_S} oc login -u <admin user> -p <Password>

2. Create Namespaces in the primary and secondary sites. Use the same manifest file in
primary and secondary sites.

KUBECONFIG=${KUBECONFIG_P} oc create -f hspc-replication-operator-namespace.yaml


KUBECONFIG=${KUBECONFIG_S} oc create -f hspc-replication-operator-namespace.yaml

3. Create Secrets containing kubeconfig information in primary and secondary sites. Use
the different manifest files in primary and secondary sites.

KUBECONFIG=${KUBECONFIG_P} oc create -f ${SECRET_KUBECONFIG_S}


KUBECONFIG=${KUBECONFIG_S} oc create -f ${SECRET_KUBECONFIG_P}

4. Create Secrets containing storage system information in primary and secondary sites.
Use the same manifest file in primary and secondary sites.

KUBECONFIG=${KUBECONFIG_P} oc create -f ${SECRET_STORAGE}


KUBECONFIG=${KUBECONFIG_S} oc create -f ${SECRET_STORAGE}

5. Load the container (for example, docker load or podman for OpenShift)
hrpc_<version>.tar and push the loaded container to your private repository.
6. Create Hitachi Replication Plug-in for Containers in primary and secondary sites. Use
the same manifest file for both the primary and secondary sites.

KUBECONFIG=${KUBECONFIG_S} oc create -f hspc-replication-operator.yaml


KUBECONFIG=${KUBECONFIG_P} oc create -f hspc-replication-operator.yaml

7. Confirm that Hitachi Replication Plug-in for Containers are running in primary and
secondary sites.
Check HRPC operator in primary site:

KUBECONFIG=${KUBECONFIG_P} oc get pods -n hspc-replication-operator-system

Check HRPC operator in secondary site:

KUBECONFIG=${KUBECONFIG_S} oc get pods -n hspc-replication-operator-system


At this point, the Hitachi Replication Plug-in for Containers operator is ready. The
next step is to install and test a stateful app, or simply create a PVC and a Pod that
consumes the PVC.

Installing a Stateful App for Replication Testing on the Primary Site


For this demo, we install a MySQL database using the Bitnami Helm chart.

Procedure
1. First, add the Bitnami repository by running the following command:
helm repo add bitnami https://charts.bitnami.com/bitnami
Search for the MySQL Helm chart by running the following command:

[ocpinstall@adminws ~]$ helm search repo mysql


NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/mysql 8.8.23 8.0.28 Chart to create a Highly
available MySQL cluster

2. Next, create a namespace for the stateful app for the demo. Use the following command
to create a namespace (project):

KUBECONFIG=${KUBECONFIG_P} oc new-project demoapps

3. Verify storage class.


StorageClass vsp-hrpc-sc was created following the requirements for Hitachi Storage
Plug-in for Containers.

[ocpinstall@adminws ~]$ KUBECONFIG=${KUBECONFIG_P} oc get sc

4. Customize and deploy MySQL Helm chart with persistent storage.


The following shows an example of the installation of MySQL on the primary site using
StorageClass vsp-hrpc-sc, database called vsp_database, and a Persistent
Volume with 5Gi of capacity.

[ocpinstall@adminws ~]$ helm install mysql-hrpc-example \
--set global.storageClass=vsp-hrpc-sc \
--set primary.persistence.size=5Gi \
--set auth.rootPassword=Hitachi123,auth.database=vsp_database \
--set secondary.replicaCount=0 \
--set primary.podSecurityContext.enabled=false \
--set primary.containerSecurityContext.enabled=false \
--set secondary.podSecurityContext.enabled=false \
--set secondary.containerSecurityContext.enabled=false \
bitnami/mysql


We can use the following commands to check the status of the MySQL pod and its
corresponding Persistent Volume.

KUBECONFIG=${KUBECONFIG_P} oc get pods

KUBECONFIG=${KUBECONFIG_P} oc get pvc

5. Insert test data in the MySQL database.

Once the MySQL pod is ready, a new table called replication_cr_status is
created in the vsp_database database, and a few records of test data are inserted
into this new table. The create table and insert commands are not shown here.

The following data was inserted into the MySQL database in order to test and verify the
replicated PVC on the secondary site:

The next step is to replicate the Persistent Volume data-mysql-hrpc-example-0 used by the
MySQL pod to the secondary site.


Replicating Persistent Volumes


Replicating storage volumes requires creating a Replication custom resource (CR) object.
Once the Replication CR has been created, HRPC starts replicating the specified PVC and
triggers the creation of a PVC in the secondary site. The data in the target PVC on the
primary site is copied and protected by HUR.

Creating a manifest file for Replication CR


A Replication CR manifest file contains the name of the PVC and the StorageClass name.
The manifest file below is created to replicate the PVC data-mysql-hrpc-example-0
previously used by the MySQL Pod.

Note: A StorageClass with the same name must exist on the secondary site
(examples below). Also, a namespace with the same name must be created on
the secondary site before creating the Replication CR.

cat hspc_v1_msqldb_replication.yaml

apiVersion: hspc.hitachi.com/v1
kind: Replication
metadata:
  name: replication-mysqldb1
spec:
  persistentVolumeClaimName: data-mysql-hrpc-example-0
  storageClassName: vsp-hrpc-sc

Here is an example of the StorageClass CR for the VSP storage on the Primary site:

cat vsp-hrpc-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsp-hrpc-sc
  annotations:
    kubernetes.io/description: Hitachi Storage Plug-in for Containers
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  serialNumber: "40016"
  poolID: "1"
  portID: CL1-D,CL4-D
  connectionType: fc
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-publish-secret-name: "secret-vsp-113"
  csi.storage.k8s.io/node-publish-secret-namespace: "test"
  csi.storage.k8s.io/provisioner-secret-name: "secret-vsp-113"
  csi.storage.k8s.io/provisioner-secret-namespace: "test"
  csi.storage.k8s.io/controller-publish-secret-name: "secret-vsp-113"
  csi.storage.k8s.io/controller-publish-secret-namespace: "test"
  csi.storage.k8s.io/node-stage-secret-name: "secret-vsp-113"
  csi.storage.k8s.io/node-stage-secret-namespace: "test"
  csi.storage.k8s.io/controller-expand-secret-name: "secret-vsp-113"
  csi.storage.k8s.io/controller-expand-secret-namespace: "test"

Here is an example of the StorageClass CR for the VSP storage on the Secondary site:

cat vsp-hrpc-sc.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsp-hrpc-sc
  annotations:
    kubernetes.io/description: Hitachi Storage Plug-in for Containers
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  serialNumber: "30595"
  poolID: "12"
  portID: CL1-B,CL4-B
  connectionType: fc
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-publish-secret-name: "secret-vsp-112"
  csi.storage.k8s.io/node-publish-secret-namespace: "test"
  csi.storage.k8s.io/provisioner-secret-name: "secret-vsp-112"
  csi.storage.k8s.io/provisioner-secret-namespace: "test"
  csi.storage.k8s.io/controller-publish-secret-name: "secret-vsp-112"
  csi.storage.k8s.io/controller-publish-secret-namespace: "test"
  csi.storage.k8s.io/node-stage-secret-name: "secret-vsp-112"
  csi.storage.k8s.io/node-stage-secret-namespace: "test"
  csi.storage.k8s.io/controller-expand-secret-name: "secret-vsp-112"
  csi.storage.k8s.io/controller-expand-secret-namespace: "test"

We can see that both StorageClasses have the same name, and each points to the
respective VSP storage in each site. As these examples show, the StorageClass CR
is similar to the one used for a non-replicated environment.

Creating a Replication CR object


The Replication CR object is created in the primary site, using the manifest file from the
previous step. This triggers the creation of an HUR pair and initial data copy. Also, this triggers the
creation of a Replication CR object on the secondary site.


Use the following command to create the Replication CR:

KUBECONFIG=${KUBECONFIG_P} oc create -f hspc_v1_msqldb_replication.yaml

Verifying the status of Replication CR in primary and secondary site


The Replication CR status changes to Ready when the initial replication is created and data
protection is started. Transition to the Ready status depends on the data size of the target
PVC and might take some time.
The following commands help to verify the status of the Replication CR objects in both the
primary and secondary sites.
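A minimal sketch of those checks, assuming the current project is the namespace where the MySQL
application and Replication CR were created, is:

KUBECONFIG=${KUBECONFIG_P} oc get replication
KUBECONFIG=${KUBECONFIG_S} oc get replication
KUBECONFIG=${KUBECONFIG_S} oc get pvc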

Also, we can see that a PVC data-mysql-hrpc-example-0 has been automatically created on the
secondary site.

On the VSP Storage systems, we can verify that UR pairs have been automatically created
as well:
UR pairs on primary storage system:

UR pairs on secondary storage system:


The following command provides more details from the Replication CR, such as the storage
serial numbers and LDEV names for both the primary and secondary sites, which can easily
be correlated with the LDEVs seen in the UR pairs.

KUBECONFIG=${KUBECONFIG_P} oc describe replication replication-mysqldb1

Checking replicated data on the Secondary Site


After the Replication CR is created and it is in “Ready” status, we can perform a split and
resync operation to check the data from the secondary site. The data copy process from the
primary site to the secondary site is stopped during the split and resync operation.
When you split a pair, write-data is no longer sent to the S-VOL and the pair is no longer
synchronized. Splitting a pair or mirror gives you a point-in-time copy of the P-VOL.
For more details about HUR operations, see Universal Replicator Overview.

Disaster Recovery Test : Splitting the Hitachi Universal Replicator pair


To split the HUR pair, from the primary site change the status of the Replication CR to perform
the split operation. This triggers HRPC to split the HUR pair.


First confirm the status of the Replication CR is Ready and Operation value is none.

Then use the command below to edit the Replication CR and change the
spec.desiredPairState to split.

KUBECONFIG=${KUBECONFIG_P} oc edit replication replication-mysqldb1
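In the editor, the change amounts to setting the desired pair state in the spec, for example:

spec:
  desiredPairState: split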

After the edit, make sure the Replication CR status is split and the operation value is none.

Confirming the replicated data


To confirm the replicated data on the secondary site, we are going to deploy the same
MySQL Helm chart, but this time it will be deployed with the existing Persistent Volume that
was replicated from the primary site, data-mysql-hrpc-example-0.
To accomplish this with the Helm chart, we customize the installation to indicate that MySQL
must be installed with an existing PVC, as seen below.

[ocpinstall@jpc3-ocp-admin-ws]$ helm install mysql-hrpc-example \
--set global.storageClass=vsp-hrpc-sc \
--set primary.persistence.size=5Gi \
--set auth.rootPassword=Hitachi123,auth.database=vsp_database \
--set secondary.replicaCount=0 \
--set primary.podSecurityContext.enabled=false \
--set primary.containerSecurityContext.enabled=false \
--set secondary.podSecurityContext.enabled=false \
--set secondary.containerSecurityContext.enabled=false \
--set primary.persistence.storageClass=vsp-hrpc-sc \
--set primary.persistence.existingClaim=data-mysql-hrpc-example-0 \
bitnami/mysql

We can use the following commands to check the status of the MySQL pod and its
corresponding Persistent Volume on the secondary site.

KUBECONFIG=${KUBECONFIG_S} oc get pods

KUBECONFIG=${KUBECONFIG_S} oc get pvc

Or directly from the console of the secondary cluster:

The next step is to connect to the MySQL database and verify the same data that was
created from the primary site.

The following query confirms that the PVC/MySQL database contains the same data that
was inserted on the primary site.
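A minimal sketch of that query, assuming the pod name created by the Helm release and the
credentials used at install time, is:

oc exec -it mysql-hrpc-example-0 -- mysql -u root -pHitachi123 vsp_database -e "SELECT * FROM replication_cr_status;"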


Now that we have confirmed the same data on the PVC on the secondary site, we can
uninstall the MySQL Helm chart; the PVC will remain because it is controlled by the
Replication CR.

helm uninstall mysql-hrpc-example


oc get pvc
oc get pods

Resynchronizing HUR pair


To resync the HUR pair, from the primary site, change the status of the Replication CR to
perform the resync operation. This triggers Hitachi Replication Plug-in for Containers to
resync the HUR pair.
Make sure no Pod is using the PVC; otherwise, the resync operation will not work.

KUBECONFIG=${KUBECONFIG_P} oc edit replication replication-mysqldb1
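As with the split operation, the change is made to the desired pair state in the spec; the exact value
follows the HRPC Configuration Guide and is shown here only as an illustration:

spec:
  desiredPairState: resync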

After the edit, make sure the Replication CR status is Ready and the Operation value is none.


Monitoring Kubernetes resources and Hitachi storage with Hitachi Storage Plug-in for Prometheus
Hitachi Storage Plug-in for Prometheus enables the Kubernetes administrator to monitor the
metrics of Kubernetes resources and Hitachi storage system resources within a single tool.
Hitachi Storage Plug-in for Prometheus uses Prometheus to collect metrics and Grafana to
visualize those metrics for easy evaluation by the Kubernetes administrator. Prometheus
collects storage system metrics such as capacity, IOPS, and transfer rate in five-minute
intervals.
For additional details about configuration, follow the Hitachi Storage Plug-in for Prometheus
Quick Reference Guide.
The following diagram shows the flow of metric collection using Hitachi Storage Plug-in for
Prometheus.

Requirements
■ Install Kubernetes or Red Hat OpenShift Container Platform.
■ Download the Storage Plug-in for Prometheus installation media kit from the Hitachi
Support Connect Portal: https://support.hitachivantara.com/en/user/home.html. A Hitachi
login credential is required.

■ Install Hitachi Storage Plug-in for Containers in Kubernetes or Red Hat OpenShift
Container Platform.
■ Configure StorageClass for Hitachi Storage Plug-in for Containers in Kubernetes or Red
Hat OpenShift Container Platform.

Installing Hitachi Storage Plug-in for Prometheus on OpenShift Cluster


For this demo, the installation below is executed on an OpenShift cluster (version 4.8) configured with 3 master and 3 worker nodes, where the worker nodes are connected to Hitachi VSP storage.

oc get nodes

NAME STATUS ROLES AGE VERSION


jpc2-master-1 Ready master 29d v1.21.6+b4b4813
jpc2-master-2 Ready master 29d v1.21.6+b4b4813
jpc2-master-3 Ready master 29d v1.21.6+b4b4813
jpc2-worker-1 Ready worker 29d v1.21.6+b4b4813
jpc2-worker-2 Ready worker 29d v1.21.6+b4b4813
jpc2-worker-3 Ready worker 28d v1.21.6+b4b4813

Procedure
1. Download and extract the installation media.

tar zxvf storage-exporter.tar.gz

2. Load the Storage Plug-in for Prometheus image into the repository.
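For example, loading the image and pushing it to an internal registry with Podman might look like the following sketch; the image file path and image name match the media kit used later in this guide, while the registry hostname and port are placeholders for your own registry.

podman load -i storage-exporter/program/storage-exporter.tar

podman tag hitachi/storage-plugin-for-prometheus:v1.0.0 \
  registry.example.local:5000/hitachi/storage-plugin-for-prometheus:v1.0.0

podman push registry.example.local:5000/hitachi/storage-plugin-for-prometheus:v1.0.0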
3. Update the exporter.yaml with the corresponding registry hostname and port for the
cluster.
4. Update the secret-sample.yaml using the information from the VSP storage: serial number, storage system API URL, user, and password.

apiVersion: v1
kind: Secret
metadata:
  name: storage-exporter-secret
  namespace: hspc-monitoring-system
type: Opaque
stringData:
  storage-exporter.yaml: |-
    storages:
    - serial: 40016
      url: https://172.25.47.x
      user: MaintenanceUser
      password: PasswordForUser

5. Create the namespace hspc-monitoring-system for Storage Plug-in for Prometheus:

oc apply -f yaml/namespace.yaml


6. Create security context constraints (SCC):

oc apply -f yaml/scc-for-openshift.yaml

7. Install Storage Plug-in for Prometheus and Prometheus Pushgateway.

oc apply -f yaml/secret-sample.yaml -f yaml/exporter.yaml

Verify the storage-exporter and pushgateway are running in namespace hspc-monitoring-system:

oc get pods

NAME READY STATUS RESTARTS AGE


pushgateway-77b85489b9-4vnzt 1/1 Running 0 5d
storage-exporter-77b644b8b7-pzhdj 1/1 Running 0 5d

Installing Prometheus and Grafana


After installing Hitachi Storage Plug-in for Prometheus, install and configure Prometheus and
Grafana. For more information, see https://prometheus.io/ and https://grafana.com/.
If you are installing Prometheus and Grafana manually, there are a couple of steps you need to take; follow the Hitachi Storage Plug-in for Prometheus Reference Guide:
■ Connect Prometheus to Pushgateway.
■ Import the sample dashboard JSON grafana/sample.json into Grafana.
For this test and demo, we are using the quick installer that comes with the Hitachi Storage Plug-in for Prometheus package.

Procedure
1. In the grafana-prometheus-sample.yaml file, replace the StorageClass with your own StorageClass.
2. (Optional) Modify the Grafana service.
The grafana-prometheus-sample.yaml file exposes Grafana as a NodePort with a
random nodeport. If you want to expose Grafana in a different way, modify the
grafana-prometheus-sample.yaml file.
3. Deploy Grafana and Prometheus.

oc apply -f yaml/grafana-prometheus-sample.yaml

Verify the Prometheus and Grafana PODs are running:

oc get pods

NAME                                READY   STATUS    RESTARTS   AGE
grafana-0                           1/1     Running   0          5d
prometheus-0                        1/1     Running   0          5d
pushgateway-77b85489b9-4vnzt        1/1     Running   0          5d
storage-exporter-77b644b8b7-pzhdj   1/1     Running   0          5d

Make sure the four PODs are running.

4. Access Grafana.
If you use NodePort, access Grafana with <Your Node IP Address>:<Grafana Port>.
You can identify <Grafana Port> by using the following command.

oc get svc

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE


grafana NodePort 172.30.219.171 <none> 3000:31929/TCP 5d
prometheus NodePort 172.30.180.214 <none> 9090:31661/TCP 5d
pushgateway ClusterIP 172.30.25.210 <none> 9091/TCP 5d

If you expose Grafana through a route instead, obtain the endpoint from the route. The default Grafana user/password is admin/secret.

oc expose svc/grafana

oc get routes

NAME      HOST/PORT                                                   PATH   SERVICES   PORT   TERMINATION   WILDCARD
grafana   grafana-hspc-monitoring-system.apps.jpc2.ocp.hvlab.local           grafana                          None

Monitoring Dashboard with Hitachi Storage Plug-in for Prometheus


When using the quick installer for Grafana and Prometheus following the previous steps,
there are no additional steps to configure Prometheus as a data source since it is pre-
configured.
On the dashboards, the installer includes a dashboard for HSPC Volumes.

The HSPC Volumes Dashboard shows metrics for Persistent Volumes such as Capacity,
Response Time, IOPS, Read/Write Transfer Rate, and Cache Hit Rate.


These metrics can be presented by Namespace, Persistent Volume Claim (PVC), Storage Class, Storage Serial Number, or Storage Pool ID.

When doing performance testing you can view specific metrics as shown.

Deploying a private container registry using Red Hat Quay and Hitachi
HCP CS S3
Some environments do not allow Internet access to a public registry for images. In other cases, having a private registry is a security practice that some customers adopt.


Red Hat Quay is a distributed and highly available container image registry platform that, when integrated with Hitachi HCP CS S3, provides secure storage, distribution, and governance of container images on any infrastructure. It is available as a standalone component or can run on top of an OCP cluster.

Requirements
Before starting with the deployment of Red Hat Quay Operator on the OCP cluster, consider
the following:
■ The OCP cluster is using OpenShift 4.5 or later.
■ Ensure the OCP cluster has sufficient compute resources for Quay deployment, see Red
Hat Quay documentation for specific requirements.
■ Ensure that object storage is available. For this demo, we are using Hitachi HCP CS S3 storage.

Install Quay Operator from OperatorHub


Follow this procedure to install Quay Operator from OperatorHub.

Procedure
1. Log in to the OCP console, select OperatorHub under Operators, and then select the
Red Hat Quay Operator.

The installation page shows features and prerequisites.


2. Select Install; the operator installation page appears. Select Install once more.
After a short time, the installed operator is ready for use, and it also appears on the Installed Operators page.


Configure Quay before deployment


The deployment for this demo uses Hitachi VSP storage for the Persistent Volumes of the Quay Postgres DB, and Hitachi HCP CS S3 storage for the image repository.
If there are several StorageClasses already defined on the OCP cluster, make sure to configure one of them as the default so that the PVC for the Quay Postgres DB can be created during deployment.
In this case, we are using the storage class vsp-186-sc, which is defined for use with Hitachi Storage Plug-in for Containers and Hitachi VSP storage systems.
The following command sets storage class vsp-186-sc as the default:

oc patch storageclass vsp-186-sc -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

Create an S3 bucket for the repository


Follow this procedure to prepare S3 storage.

Procedure
1. Open a web browser, navigate to your Hitachi Content Platform for cloud scale web
interface, and log in.
2. Browse the contents and create a bucket for the Quay repository.
The following example shows an onprem-ocp-quay-repo bucket that has been created for this purpose.
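If you prefer the command line, any S3-compatible client can create the bucket against the HCP for cloud scale S3 endpoint. The sketch below uses the AWS CLI and assumes the client is already configured with your HCP CS access key and secret key; the endpoint host matches the configuration file shown in the next section.

# Create the Quay repository bucket on the HCP for cloud scale S3 endpoint
aws s3api create-bucket --bucket onprem-ocp-quay-repo \
  --endpoint-url https://tryhcpforcloudscale.hitachivantara.com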


Create configuration and secret to access Hitachi HCP CS S3 store


Create a configuration file that includes the S3 storage host (the URL for HCP CS), the access key and secret key used to access Hitachi HCP CS S3, and the bucket name. Here is an example of the configuration file:

cat config_hcpcs.yaml

DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: tryhcpforcloudscale.hitachivantara.com
      s3_access_key: Hitachi_HCP_CS_access_key_here
      s3_secret_key: Hitachi_HCP_CS_secret_key_here
      s3_bucket: onprem-ocp-quay-repo
      storage_path: /
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - s3Storage

The following command creates the secret config-bundle-secret to access the S3 storage using the information from the previous configuration file.

oc create secret generic --from-file config.yaml=./config_hcpcs.yaml config-bundle-secret

Create the Quay Registry


The next step is to create the Quay Registry; this can be done from the console or by using configuration files. For this demo we are using a minimal configuration with the configuration file quayregistry_hcpcs.yaml, where we indicate that the object storage is unmanaged.
For a production environment, or if you need to use an existing database, follow the Red Hat documentation. Make sure to indicate the name of the secret created in the previous step.


Here is an example of the configuration file used to create a Quay Registry:

cat quayregistry_hcpcs.yaml

apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: openshift-operators
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false
    - kind: clair
      managed: false
    - kind: horizontalpodautoscaler
      managed: false
    - kind: mirror
      managed: false
    - kind: monitoring
      managed: false

The following command is used to provision the Quay Registry:

oc apply -f quayregistry_hcpcs.yaml

After a short time, the pods for the Quay Registry are deployed. Note that these are the components for a minimal deployment; the number of pods can vary depending on the type of deployment.
We can also see that a PVC was automatically created for the Quay database.
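A quick way to check both from the command line is sketched below; the pod and PVC names will differ per deployment, and the openshift-operators namespace matches the QuayRegistry manifest above.

# List the Quay Registry pods and the automatically created database PVC
oc get pods -n openshift-operators
oc get pvc -n openshift-operators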

Create Quay Admin User and Login to Quay EndPoint


The admin user to access the Quay Registry can be created from the OCP console following
this procedure.


Procedure
1. In the OCP console, navigate to Operators > Installed Operators, with the appropriate
namespace/project.
2. Click the newly installed Quay Registry to view the details:

3. Navigate to the URL of the Registry EndPoint, for example in this demo:
https://example-registry-quay-openshift-operators.apps.jpc0.ocp.hvlab.local/
4. Select Create Account in the Quay registry UI to create a user account.

Test the Quay Registry


Once the admin user has been created, the next step is to test the registry.

Using Podman to push images to the Quay registry


For this test we are using Podman to connect to the Registry EndPoint with the admin user created in the previous step:

podman login --tls-verify=false example-registry-quay-openshift-operators.apps.jpc0.ocp.hvlab.local


For this test we are going to use the image for Hitachi Storage Plug-in for Prometheus, which has been downloaded to a local folder; Podman is used to push the image to the Quay registry. First, we load the image, assign a tag, and then push the image to the registry:

podman load -i storage-exporter/program/storage-exporter.tar

podman tag hitachi/storage-plugin-for-prometheus:v1.0.0 example-registry-quay-openshift-operators.apps.jpc0.ocp.hvlab.local/quayadmin/storage-plugin-for-prometheus:v1.0.0

podman push --tls-verify=false example-registry-quay-openshift-operators.apps.jpc0.ocp.hvlab.local/quayadmin/storage-plugin-for-prometheus:v1.0.0

Verification of images on the Quay Registry UI


Navigate to the URL of the Registry EndPoint with the admin account and check the image
uploaded with Podman. In this case we have the image for Hitachi Storage Plug-in for
Prometheus.
https://example-registry-quay-openshift-operators.apps.jpc0.ocp.hvlab.local/


Verification of images on Hitachi HCP CS S3


To verify images on the S3 store, open a web browser, navigate to your Hitachi Content
Platform for cloud scale web interface, and log in. Then, browse the contents for the onprem-
ocp-quay-repo bucket as shown.
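The same check can be done from the command line with an S3-compatible client; the sketch below assumes the AWS CLI is configured with your HCP CS credentials and simply lists the objects that Quay wrote to the bucket.

# List the objects pushed by Quay into the HCP for cloud scale bucket
aws s3 ls s3://onprem-ocp-quay-repo --recursive \
  --endpoint-url https://tryhcpforcloudscale.hitachivantara.com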

Conclusion
Hitachi Unified Compute Platform, Hitachi Virtual Storage Platform, Hitachi Content Platform
for cloud scale, Hitachi Storage Plug-ins for Containers, VMware CSI and VASA plugins, and
Red Hat OpenShift Container Platform combine to create a powerful and flexible Kubernetes
ecosystem.
This reference architecture highlights recommended approaches for using Red Hat OpenShift
in a Hitachi infrastructure environment (UCP and/or VSP) while taking advantage of various
Hitachi storage platforms and data storage integrations to achieve a highly resilient and
protected platform to deliver Kubernetes clusters and containers at scale.

Product descriptions
This section provides information about the hardware and software components used in this
solution for OpenShift on Hitachi Unified Compute Platform.

Hardware components
These are the hardware components available for Hitachi Unified Compute Platform.


Hitachi Advanced Server DS120


Optimized for performance, high density, and power efficiency in a dual-processor server,
Hitachi Advanced Server DS120 delivers a balance of compute and storage capacity. This 1U
rack-mounted server has the flexibility to power a wide range of solutions and applications.
The highly-scalable memory supports up to 3 TB using 24 slots of high-speed DDR4 memory.
Advanced Server DS120 is powered by the Intel Xeon Scalable processor family for complex
and demanding workloads. There are flexible OCP and PCIe I/O expansion card options
available. This server supports up to 12 small form factor storage devices with up to 4 NVMe
drives.
This solution allows you to have a high CPU-to-storage ratio. This is ideal for balanced and
compute-heavy workloads.
Multiple CPU and storage devices are available. Contact your Hitachi Vantara sales
representative to get the latest list of options.

Hitachi Advanced Server DS120 G2


With support for two Intel Xeon Scalable processors in just 1U of rack space, the Hitachi
Advanced Server DS120 G2 delivers exceptional compute density. It provides flexible
memory and storage options to meet the needs of converged and hyperconverged
infrastructure solutions, as well as for dedicated application platforms such as internet of
things (IoT) and data appliances.
The Intel Xeon Scalable processor family is optimized to address the growing demands on
today’s IT infrastructure. The server provides 32 slots for high-speed DDR4 memory, allowing
up to 4 TB memory capacity with RDIMM population (128 GB × 32) or 8 TB (512 GB × 16) of
Intel Optane Persistent Memory. DS120 G2 supports up to 12 hot-pluggable, front-side-
accessible 2.5-inch non-volatile memory express (NVMe), serial-attached SCSI (SAS), serial-
ATA (SATA) hard disk drive (HDD), or solid-state drives (SSD). The system also offers 2
onboard M.2 slots.
With these options, DS120 G2 can be flexibly configured to address both I/O performance
and capacity requirements for a wide range of applications and solutions.

Hitachi Advanced Server DS220


With a combination of two Intel Xeon Scalable processors and high storage capacity in a 2U
rack-space package, Hitachi Advanced Server DS220 delivers the storage and I/O to meet
the needs of converged solutions and high-performance applications in the data center.
The Intel Xeon Scalable processor family is optimized to address the growing demands on
today's IT infrastructure. The server provides 24 slots for high-speed DDR4 memory, allowing
up to 3 TB of memory per node when 128 GB DIMMs are used. This server supports up to 12
large form factor storage devices and an additional 2 small form factor storage devices.


This server has three storage configuration options:


■ 12 large form factor storage devices and an additional 2 small form factor storage devices
in the back of the chassis
■ 16 SAS or SATA drives, 8 NVMe drives, and an additional 2 small form factor storage
devices in the back of the chassis
■ 24 SFF devices and an additional 2 SFF storage devices in the back of the chassis

Hitachi Advanced Server DS220 G2


With a combination of two Intel Xeon Scalable processors and high storage capacity in a 2U
rack-space package, Hitachi Advanced Server DS220 G2 delivers the storage and I/O to
meet the needs of converged solutions and high-performance applications in the data center.
The Intel Xeon Scalable processor family is optimized to address the growing demands on
today’s IT infrastructure. The server provides 32 slots for high-speed DDR4 memory, allowing
up to 4 TB memory capacity with RDIMM population (128 GB × 32) or 8TB (512 GB × 16)
with Intel Optane Persistent Memory population.
DS220 G2 comes in three storage configurations to allow for end user flexibility. The first
configuration supports 24 2.5-inch non-volatile memory express (NVMe) drives, the second
supports 24 2.5-inch serial-attached SCSI (SAS), serial-ATA (SATA) and up to 8 NVMe
drives, and the third supports 12 3.5-inch SAS or SATA and up to 8 NVMe drives. All the
configurations support hot-pluggable, front-side-accessible drives as well as 2 optional 2.5-
inch rear mounted drives. The DS220 G2 delivers high I/O performance and high capacity for
demanding applications and solutions.

Hitachi Advanced Server DS225


Choose Hitachi Advanced Server DS225 to ensure you have the flexibility and performance
you need to support your business-critical enterprise applications.
Advanced Server DS225 delivers compute density and efficiency to meet the needs of your
most demanding high-performance applications. It takes full advantage of the Intel Xeon
scalable processor family with up to four dual-width 300 W graphic accelerator cards, up to 3
TB memory capacity, and additional PCIe 3.0 expansion slots in a 2U rack space package.
Front-side accessible storage bays support up to eight hot-pluggable, serial-attached SCSI (SAS) or serial-ATA (SATA) devices. These bays also support flexible configuration, which allows Advanced Server DS225 to deliver high I/O performance and high capacity.

Hitachi Advanced Server DS240


Meet the needs of your most demanding high-performance applications with Hitachi Advanced Server DS240. With up to four Intel Xeon Scalable processors and up to 6 TB memory capacity in a 2U rack-space package, this server delivers unparalleled compute density and efficiency.
The Advanced Server DS240 architecture takes full advantage of the Intel Xeon Scalable
Processor family, including the highest performance options, to address the growing
demands of your IT infrastructure.


Hitachi Virtual Storage Platform 5000 series

Hitachi Virtual Storage Platform 5000 series (VSP 5000 series) is an enterprise-class flash array with an innovative, scale-out design optimized for NVMe and storage class memory. It achieves the following:
■ Agility using NVMe: Speed, massive scaling with no performance slowdowns, intelligent
tiering, and efficiency.
■ Resilience: Superior application availability and flash resilience. Your data is always
available, mitigating business risk.
■ Storage simplified: Do more with less, integrate AI (artificial intelligence) and ML
(machine learning), simplify management, and save money and time with consolidation.

Hitachi Virtual Storage Platform E990


Hitachi Virtual Storage Platform E990 supercharges business application performance with
all-NVMe storage. It uses Hitachi Ops Center, so you can improve IT operations with the
latest AI and ML capabilities. Advanced data reduction in Virtual Storage Platform E990
enables you to run data reduction with even the most performance hungry applications.
The all-NVMe architecture in Virtual Storage Platform E990 delivers consistent, low-
microsecond latency to reduce latency costs for critical applications. This predictable
performance optimizes storage resources.
With Virtual Storage Platform E990 and the rest of Hitachi midrange storage family, you have
agile and automated data center technology. These systems allow you to cost-effectively
meet your current digital expectations and give you the ability to address future challenges,
as your application data needs and service levels evolve. With time-tested, proven availability
and scalability, Hitachi Vantara delivers infrastructure solutions that help you maximize your
data center advantage.

Hitachi Virtual Storage Platform F Series family


Use Hitachi Virtual Storage Platform F series family storage for a flash-powered cloud
platform for your mission critical applications. This storage meets demanding performance
and uptime business needs. Extremely scalable, its 4.8 million random read IOPS allows you
to consolidate more applications for more cost savings.
Hitachi Virtual Storage Platform F series family delivers superior all-flash performance for
business-critical applications, with continuous data availability.

Hitachi Virtual Storage Platform G series family


The Hitachi Virtual Storage Platform G series family enables the seamless automation of the
data center. It has a broad range of efficiency technologies that deliver maximum value while
making ongoing costs more predictable. You can focus on strategic projects and
consolidating more workloads while using a wide range of media choices.
The benefits start with Hitachi Storage Virtualization Operating System RF. This includes an all-new enhanced software stack that offers up to three times greater performance than our previous midrange models, even as data scales to petabytes.


Hitachi Virtual Storage Platform G series offers support for containers to accelerate cloud-
native application development. Provision storage in seconds, and provide persistent data
availability, all the while being orchestrated by industry leading container platforms. Move
these workloads into an enterprise production environment seamlessly, saving money while
reducing support and management costs.

Arista Data Center switches


Arista Networks builds software-driven cloud networks for data center, cloud, and campus
environments. Arista delivers efficient, reliable and high-performance Universal Cloud
Network architectures, based on 10 GbE, 25 GbE, 40 GbE, 50 GbE, and 100 GbE platforms
delivered with an extensible operating system - Arista EOS.
■ Arista 7050CX3-32S is a 1RU sized spine switch with 32 (downlink) and 4 (uplink) 100
GbE QSFP ports for multiple-rack solutions. Each QSFP port supports a choice of five
speeds, with flexible configuration between 100 GbE, 40 GbE, 4 × 10 GbE, 4 × 25 GbE, or
2 × 50 GbE modes.
■ Arista 7050SX3-48YC8 is a 1RU sized switch with 48 × 25 GbE SFP and 8 × 100 GbE
QSFP ports. The high density SFP ports can be configured in groups of 4 to run either at
25 GbE or a mix of 10 GbE/1 GbE speeds. The QSFP ports allow 100 GbE or 40 GbE
high speed network uplinks.
■ Arista 7010T is a 1RU sized, 48-port 1 GbE management switch for single-rack and
multiple-rack solutions.

Cisco Nexus switches


The Cisco Nexus switch product line provides a series of solutions that make it easier to
connect and manage disparate data center resources with software-defined networking
(SDN). Leveraging the Cisco Unified Fabric, which unifies storage, data and networking
(Ethernet/IP) services, the Nexus switches create an open, programmable network
foundation built to support a virtualized data center environment.

Brocade switches from Broadcom


Brocade and Hitachi Vantara have partnered to deliver storage networking and data center
solutions. These solutions reduce complexity and cost, as well as enable virtualization and
cloud computing to increase business agility.
Brocade Fibre Channel switches deliver industry-leading performance, simplifying scale-out
network architectures. Get the high-performance, availability, and ease of management you
need for a solid foundation to grow the storage network you want.

Software components
These are the software components used in this reference architecture.


Hitachi Storage Virtualization Operating System RF


Hitachi Storage Virtualization Operating System RF powers the Hitachi Virtual Storage
Platform (VSP) family. It integrates storage system software to provide system element
management and advanced storage system functions. Used across multiple platforms,
Storage Virtualization Operating System includes storage virtualization, thin provisioning,
storage service level controls, dynamic provisioning, and performance instrumentation.
Flash performance is optimized with a patented flash-aware I/O stack, which accelerates data
access. Adaptive inline data reduction increases storage efficiency while enabling a balance
of data efficiency and application performance. Industry-leading storage virtualization allows
SVOS RF to use third-party all-flash and hybrid arrays as storage capacity, consolidating
resources for a higher ROI and providing a high-speed front end to slower, less-predictable
arrays.

Hitachi Unified Compute Platform Advisor


Hitachi Unified Compute Platform Advisor (UCP Advisor) is a comprehensive cloud
infrastructure management and automation software that enables IT agility and simplifies day
0-N operations for edge, core, and cloud environments. The fourth-generation UCP Advisor
accelerates application deployment and drastically simplifies converged and hyperconverged
infrastructure deployment, configuration, life cycle management, and ongoing operations with
advanced policy-based automation and orchestration for private and hybrid cloud
environments.
The centralized management plane enables remote, federated management for the entire
portfolio of converged, hyperconverged, and storage data center infrastructure solutions to
improve operational efficiency and reduce management complexity. Its intelligent automation
services accelerate infrastructure deployment and configuration, significantly minimizing
deployment risk and reducing provisioning time and complexity, automating hundreds of
mandatory tasks.

Hitachi Storage Provider for VMware vCenter


When you want to support policy-based automation and improve operational insight into the
storage or converged platform hosting that environment, use Hitachi Storage Provider for
VMware vCenter. This allows a unique implementation of VMware vSphere API for Storage
Awareness (VASA), supporting traditional-based datastores (VMFS and NFS) and VMware
vVols-based datastores.
Hitachi Storage Provider for VMware vCenter, as part of the infrastructure, communicates
with VMware vCenter to indicate storage capabilities and state information. It supports policy-
based management, operations management, and resource scheduling functionality.

Hitachi Content Platform for cloud scale


Hitachi Content Platform for cloud scale (HCP for cloud scale) is a software-defined object
storage solution that is based on a massively parallel microservice architecture, and is
compatible with the Amazon S3 application programming interface (API). HCP for cloud scale
is well suited to service applications requiring high bandwidth and compatibility with Amazon
S3 APIs.


Hitachi Storage Plug-in for Containers


Hitachi Storage Plug-in for Containers (HSPC) provides connectivity between Docker,
Kubernetes, or Kubernetes Container Storage Interface (CSI) containers and Hitachi Virtual
Storage Platform 5000 series, G series, and F series systems. With the compatibility plug-in,
your organization can deliver shared storage for containers that persists beyond the timeline
of a single container host.

Hitachi Replication Plug-in for Containers


Hitachi Replication Plug-in for Containers (HRPC) automates storage replication between two
different Kubernetes clusters and storage systems located at different sites. This enables
your organization to take a self-service approach when creating replications using the
Kubernetes command-line tool, kubectl or oc.

Hitachi Storage Plug-in for Prometheus


Hitachi Storage Plug-in for Prometheus (HSPP) enables Kubernetes administrators to
monitor the metrics of Kubernetes resources and Hitachi storage system resources within a
single tool.

Red Hat OpenShift

Red Hat OpenShift is an enterprise Kubernetes container platform. Built on Red Hat Enterprise Linux CoreOS and the Kubernetes orchestrator, OpenShift Container Platform adds developer and operations tooling, an integrated container registry, operator-based life cycle management, and built-in monitoring, so organizations can deploy and manage containerized applications at scale.

Red Hat Enterprise Linux High Availability Add-On

Red Hat Enterprise Linux High Availability Add-On allows a service to fail over from 1 node to another with no apparent interruption to cluster clients, evicting faulty nodes during transfer to prevent data corruption. This Add-On can be configured for most applications (both off-the-shelf and custom) and virtual guests, supporting up to 16 nodes. The High Availability Add-On features a cluster manager, lock management, fencing, command-line cluster configuration, and a Conga administration tool.

Red Hat Enterprise Linux


Using the stability and flexibility of Red Hat Enterprise Linux, reallocate your resources
towards meeting the next challenges instead of maintaining the status quo. Deliver
meaningful business results by providing exceptional reliability on military-grade security. Use
Enterprise Linux to tailor your infrastructure as markets shift and technologies evolve.

VMware vSphere
VMware vSphere is a virtualization platform that provides a datacenter infrastructure. It helps
you get the best performance, availability, and efficiency from your infrastructure and
applications. Virtualize applications with confidence using consistent management.


VMware vSphere has the following components:


■ VMware vSphere ESXi
This hypervisor loads directly on a physical server. ESXi provides a robust, high-
performance virtualization layer that abstracts server hardware resources and makes
them shareable by multiple virtual machines.
■ VMware vCenter Server
This management software provides a centralized platform for managing your VMware
vSphere environments so you can automate and deliver a virtual infrastructure with
confidence:
● VMware vSphere vMotion
● VMware vSphere Storage vMotion
● VMware vSphere Distributed Resource Scheduler
● VMware vSphere High Availability
● VMware vSphere Fault Tolerance

Hitachi Vantara
Corporate Headquarters Contact Information
2535 Augustine Drive USA: 1-800-446-0744
Santa Clara, CA 95054 USA Global: 1-858-547-4526
HitachiVantara.com | community.HitachiVantara.com HitachiVantara.com/contact
