Hitachi Storage Integrations With UCP and OpenShift - Reference Architecture Guide
MK-SL-210-02
July 2022
© 2022 Hitachi Vantara LLC. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including copying and recording,
or stored in a database or retrieval system for commercial purposes without the express written permission of Hitachi, Ltd., or Hitachi Vantara LLC
(collectively “Hitachi”). Licensee may make copies of the Materials provided that any such copy is: (i) created as an essential step in utilization of the
Software as licensed and is used in no other manner; or (ii) used for archival purposes. Licensee may not make any other copies of the Materials.
“Materials” mean text, data, photographs, graphics, audio, video and documents.
Hitachi reserves the right to make changes to this Material at any time without notice and assumes no responsibility for its use. The Materials contain
the most current information available at the time of publication.
Some of the features described in the Materials might not be currently available. Refer to the most recent product announcement for information about
feature and product availability, or contact Hitachi Vantara LLC at https://support.hitachivantara.com/en_us/contact-us.html.
Notice: Hitachi products and services can be ordered only under the terms and conditions of the applicable Hitachi agreements. The use of Hitachi
products is governed by the terms of your agreements with Hitachi Vantara LLC.
By using this software, you agree that you are responsible for:
1. Acquiring the relevant consents as may be required under local privacy laws or otherwise from authorized employees and other individuals; and
2. Verifying that your data continues to be held, retrieved, deleted, or otherwise processed in accordance with relevant laws.
Notice on Export Controls. The technical data and technology inherent in this Document may be subject to U.S. export control laws, including the
U.S. Export Administration Act and its associated regulations, and may be subject to export or import regulations in other countries. Reader agrees to
comply strictly with all such regulations and acknowledges that Reader has the responsibility to obtain licenses to export, re-export, or import the
Document and any Compliant Products.
Hitachi and Lumada are trademarks or registered trademarks of Hitachi, Ltd., in the United States and other countries.
AIX, AS/400e, DB2, Domino, DS6000, DS8000, Enterprise Storage Server, eServer, FICON, FlashCopy, GDPS, HyperSwap, IBM, Lotus, MVS, OS/
390, PowerHA, PowerPC, RS/6000, S/390, System z9, System z10, Tivoli, z/OS, z9, z10, z13, z14, z/VM, and z/VSE are registered trademarks or
trademarks of International Business Machines Corporation.
Active Directory, ActiveX, Bing, Excel, Hyper-V, Internet Explorer, the Internet Explorer logo, Microsoft, Microsoft Edge, the Microsoft corporate logo,
the Microsoft Edge logo, MS-DOS, Outlook, PowerPoint, SharePoint, Silverlight, SmartScreen, SQL Server, Visual Basic, Visual C++, Visual Studio,
Windows, the Windows logo, Windows Azure, Windows PowerShell, Windows Server, the Windows start button, and Windows Vista are registered
trademarks or trademarks of Microsoft Corporation. Microsoft product screen shots are reprinted with permission from Microsoft Corporation.
All other trademarks, service marks, and company names in this document or website are properties of their respective owners.
Copyright and license information for third-party and open source software used in Hitachi Vantara products can be found in the product
documentation, at https://www.hitachivantara.com/en-us/company/legal.html or https://knowledge.hitachivantara.com/Documents/
Open_Source_Software.
Feedback
Hitachi Vantara welcomes your feedback. Please share your thoughts by sending an email message to [email protected]. To assist the
routing of this message, use the paper number in the subject and the title of this white paper in the text.
Revision history
Changes Date
Data is at the core of any application. Many applications, such as MariaDB, PostgreSQL, MongoDB, and MySQL, among others, require data persistence. Continuous integration and continuous delivery (CI/CD) pipelines require data persistence at every level.
Using well-known and proven CSI (Container Storage Interface) storage integrations, you
can provide persistent storage for stateful container applications. The Hitachi UCP
solution includes a Hitachi Storage CSI driver supported with Hitachi Storage Plug-in for
Containers (HSPC) and a VMware Container Native Storage (CNS) implementation
supported with Hitachi Storage Provider for VMware vCenter (VASA) software. For
example, using vSphere storage policies in combination with VASA, you can provide
dynamic ReadWriteOnce (RWO) VMFS and vVols-based VMDK persistent volumes to
container applications running within OpenShift on top of VMware clusters.
Backup is a critical aspect of any data center infrastructure. Red Hat OpenShift API for Data Protection (OADP) includes a built-in Velero operator that enables your organization to protect any container-related entity, including Kubernetes persistent volumes.
The integration of Hitachi Storage Plug-in for Containers (HSPC) with OpenShift brings
other benefits such as snapshot and cloning and restore operations for persistent
volumes, enabling rapid copy creation for immediate use in decision support, software
development, and data protection operations.
Hitachi Content Platform for cloud scale (HCP for cloud scale) provides standard AWS S3-compatible storage that can be used with Red Hat OADP/Velero implementations as target repository storage.
Hitachi Replication Plug-in for Containers (HRPC) supports any Kubernetes cluster
configured with Hitachi Storage Plug-in for Containers and provides data protection,
disaster recovery, and migration of persistent volumes to remote Kubernetes clusters.
HRPC supports replication for both bare metal and virtual environments.
■ Computing platform
With a wide range of applications that are stateful or stateless, a wide range of flexible
computing platforms are necessary to match both memory and CPU requirements.
The type of computing technology is also a consideration for licensing costs. Hitachi
Vantara provides different computing options from the 1U/2U dual socket Hitachi
Advanced Server DS120/220 G1/G2 to the 2U quad socket Hitachi Advanced Server
DS240 G1.
■ Network
As with any infrastructure, a reliable network is needed to provide enough bandwidth and security for container architectures. Hitachi Unified Compute Platform uses a spine-and-leaf design with Cisco Nexus or Arista switches.
■ Infrastructure management
Orchestration and automation are the key to operational efficiencies. Hitachi Unified
Compute Platform (UCP) Advisor provides a single pane of glass management and
lifecycle manager for converged infrastructure, with automation for compute, network, and
storage infrastructure. Hitachi Ops Center is also available with Hitachi Virtual Storage
Platform (VSP) for storage management.
From a monitoring perspective, Hitachi Storage Plug-in for Prometheus (HSPP) enables
Kubernetes administrators to monitor metrics for Kubernetes resources and the Hitachi
Storage resources with a single tool.
This reference architecture also provides the reference design for a build-your-own Red Hat
OpenShift Container Platform environment using Hitachi Virtual Storage Platform. Although a
specific converged system is used as an example, this reference design still applies to
building your own container platform.
The intended audience of this document is IT administrators, system architects, consultants, and sales engineers involved in planning, designing, and implementing Unified Compute Platform CI with OpenShift Container Platform solutions.
Solution overview
Red Hat OpenShift is a successful container orchestration platform and is one of the
container orchestration solutions available with Unified Compute Platform. The following
figure shows a high-level diagram of OpenShift managing containers and persistent volumes
on the Unified Compute Platform stack with Hitachi Virtual Storage Platform series systems.
You can deploy OpenShift on bare metal hosts, virtual hosts, or both. In some cases, the master nodes are virtualized while the worker nodes are a mix of bare metal and virtual nodes. Different deployment models can be used depending on the purpose. For this
reference architecture, the OpenShift clusters use a hybrid deployment, combining bare
metal and virtual worker nodes to show the benefits of both types of deployments.
These are the storage options and capabilities for bare metal worker nodes:
■ Any storage system from the Hitachi Virtual Storage Platform family can be used. Virtual
Storage Platform provides a REST API for Hitachi Storage Plug-in for Containers to
provision persistent volumes. Deploy Storage Plug-in for Containers within the respective OpenShift Container Platform cluster. Containers can access the persistent
volumes through a local mount point inside the worker node. The persistent volumes are
provided by Virtual Storage Platform-hosted LUNs through a block protocol to the worker
nodes.
■ Hitachi Storage Plug-in for Containers dynamically provisions persistent volumes for
stateful containers from Hitachi storage.
These are the storage options and capabilities for virtual worker nodes:
■ Any storage system from the Hitachi Virtual Storage Platform family can be used, as well
as Hitachi Unified Compute Platform. Hitachi Storage Provider for VMware vCenter
provides Virtual Storage Platform capabilities awareness to VMware vCenter, where it can
be used with VMware Storage Policy-Based Management.
■ VMware Cloud Native Storage (CNS) provides persistent storage provisioning capabilities
using the VMware storage stack. Containers can access the persistent volumes through a
local mount point inside the worker node virtual machines. The persistent volumes are
provided by VMDKs provisioned from VMFS or vVols datastores from Virtual Storage
Platform. You can also provide persistent volumes through Unified Compute Platform HC
based on VMware Virtual SAN Ready Nodes (vSAN Ready Nodes).
The following persistent volume options are available for this configuration:
■ Use VMware vVols to provision persistent volumes directly from Hitachi storage.
■ Create persistent volumes from regular VMFS datastores.
■ Create persistent volumes from VMware vSAN datastores hosted by Hitachi Unified
Compute Platform HC nodes.
The solution validation of this reference architecture consists of different use cases for data
protection of stateful applications, replication data services for container volumes across data
centers, and monitoring and private registry, all running within OpenShift clusters on top of
Hitachi UCP.
Follow the steps in Solution design and Solution Implementation and Validation to learn about
the storage capabilities, data protection, and monitoring features when using Hitachi UCP
with Red Hat OpenShift Container Platform.
Solution components
The following tables list the versions of hardware and software tested in this reference
architecture.
Hardware components
The tested solution used specific features based on the following hardware. You can use
either Hitachi Advanced Server DS120/DS220/DS225/DS240 Gen1 or Gen2 or any qualified
server platform for UCP.
See the UCP CI Interoperability Matrix and UCP Product Compatibility Guide for more
information.
Table 1 Hardware Components
Software components
The following table lists the key software components.
Table 2 Software Components
Software Version
83-05-33-40/00
Solution design
This section outlines the detailed solution example for the Hitachi Unified Compute Platform
and Red Hat OpenShift.
The following diagram represents a standard architecture for Hitachi Unified Compute
Platform (UCP) CI/ UCP HC/UCP RS.
The configuration with Hitachi Virtual Storage Platform is described in Unified Compute Platform CI/HC/RS in the Hitachi Unified Compute Platform CI for VMware vSphere Reference Architecture Guide.
Master Nodes
Master nodes maintain the OCP cluster configuration as well as manage nodes within the
cluster and schedule pods to run on worker nodes. Master nodes consist of an API server,
controller manager server, certificate, and scheduler. If there is a master node outage,
container applications will not be impacted and end users can continue using resources, but
administrators of the cluster will not be able to make any changes to the cluster.
Worker Nodes
Worker nodes provide a runtime environment for containers and pods and are managed by
the master nodes within the cluster. Worker nodes can either be virtual or physical based on
the deployment type.
API Server
The API server, or kube-apiserver, provides the front end for the Kubernetes control plane by
managing the interactions of cluster components via RESTful API calls. Administrators can
run several instances of kube-apiserver to balance traffic among the cluster.
Scheduler
The scheduler, or kube-scheduler, ensures that container applications are scheduled to run
on worker nodes within the OCP cluster. The scheduler reads data from the pod and finds a
node that is a good fit based on configured policies.
Controller manager
The controller manager, or kube-controller-manager, watches the shared state of the cluster and makes the changes needed to move the current state toward the desired state, keeping the cluster healthy. The controller manager provides this functionality via the kube-apiserver.
The following figure illustrates how CNS components, CNS in vCenter Server, and vSphere
Container Storage Plug-in interact with other components in a vSphere environment (credit to
VMware).
1. If Hitachi Storage Plug-in for Containers (HSPC) is installed (typically for bare metal
workers with Fibre Channel or iSCSI, or virtual workers with iSCSI), the backup process
can use CSI snapshots. Native snapshots are typically much faster than file copying
since they leverage copy-on-write capabilities from Hitachi VSP storage instead of doing
a full copy.
When using CSI snapshots, only the Kubernetes metadata is backed up by Velero to the
S3-compatible storage, while the HSPC leverages VSP capabilities to back up volume
data using snapshot technologies.
This backup process is explored in Solution Implementation and Validation (on
page 37), Scenario 1.
OADP alone is not a full end-to-end data protection solution, but the integration with Hitachi
Storage Plug-in for Containers and Hitachi HCP CS S3 storage provides a powerful solution
for data protection for container-based applications and their associated PVs and data. The
OADP operator sets up and installs Velero on the OpenShift cluster.
Note: The Red Hat OADP operator version 1.0 was released in Feb 2022 and is
the first generally available release that is fully supported. Earlier versions
(pre-1.0) of the operator were available as a community operator with only
community support.
In addition, Hitachi Replication Plug-in for Containers (HRPC) together with Hitachi VSP
storage replication capabilities can be used for data protection, disaster recovery, and
migration of persistent volumes to remote datacenters/Kubernetes clusters.
Volume snapshots
In OpenShift or Kubernetes, creating a Persistent VolumeClaim (PVC) initiates the creation of
a PersistentVolume (PV) which contains the data. A PVC also specifies a StorageClass
which provides additional attributes for backend storage.
Because this guide also covers backup with CSI snapshots, it is important to clarify some
additional concepts related to snapshots. A VolumeSnapshot represents a snapshot of a
volume on the storage system. In the same way that the PersistentVolume and PersistentVolumeClaim API resources are used to provision volumes for users and administrators, the VolumeSnapshot and VolumeSnapshotContent API resources are provided to create volume snapshots. VolumeSnapshot support is only available for CSI drivers.
■ VolumeSnapshotContent – Represents a snapshot taken of a Volume in the cluster.
Similar to a PersistentVolume object, a VolumeSnapshotContent is a cluster-scoped resource that points to a real snapshot in the backend storage. VolumeSnapshotContent objects are not namespaced.
■ VolumeSnapshot - A request for a snapshot of a volume, similar to a PersistentVolumeClaim. Creating a VolumeSnapshot triggers the creation of a snapshot (VolumeSnapshotContent) and the objects are bound together; there is a one-to-one binding between a VolumeSnapshot and a VolumeSnapshotContent. VolumeSnapshot objects are namespaced.
■ VolumeSnapshotClass – Allows you to define different attributes belonging to a
VolumeSnapshot. This is similar to how a StorageClass is used for PVs.
Creating a VolumeSnapshotClass, a requirement for CSI snapshots, is covered in the section Backing up persistent volumes with CSI snapshot in this guide.
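The following is a minimal sketch of a VolumeSnapshot manifest, shown here for illustration only; the namespace, PVC name, and VolumeSnapshotClass name match examples used later in this guide, and the snapshot name is an assumption:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-mysql-demo      # illustrative name
  namespace: demoapps
spec:
  volumeSnapshotClassName: volume-snapshot-class-csi-hspc
  source:
    persistentVolumeClaimName: data-mysql-demo-0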
The tables below provide more details about the master and worker nodes for the two OCP
clusters:
OCP Cluster (Primary site)
For detailed hybrid OpenShift Container Platform installation procedures, see Red Hat
documentation.
Procedure
1. Use UCP Advisor to create the necessary zone sets in the Fibre Channel fabrics.
2. Use Hitachi Storage Navigator to provision the necessary ALU targets from each
storage system.
3. Register the Hitachi VSP storage systems in the VASA Provider.
4. Register VASA Provider as a Storage Provider within the vCenter associated with the
cluster hosting the virtual worker nodes.
5. Create a vVol datastore or VMFS (LDEV) datastores and associated SPBM policies for
the Hitachi VSP storage systems configured for use by the target OCP clusters.
For details, see VMware vSphere Virtual Volumes (vVols) with Hitachi Virtual Storage
Platform Quick Start and Reference Guide.
Procedure
1. Log in to the console of your OCP cluster, select OperatorHub under Operators, and then
search for Hitachi.
2. Click the Hitachi Storage Plug-in for Containers, and then click Install.
3. Confirm that the status of the Operator is Succeeded, either using the console or the oc get
pods -n <namespace> command. On the console, click Installed Operators under
Operators and you can see the status of the HSPC plug-in.
4. The next step is to create the HSPC instance; this can be done from the Operator Details page. Select the Hitachi Storage Plug-in for Containers, and then click Create Instance on the Operator Details page. Click Create.
5. Confirm the status READY is true for the HSPC instance with the following command:
Finally, verify that all the HSPC pods are in the Running state using the following command:
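As a sketch, assuming the operator and instance were created in the namespace shown on the Operator Details page (the exact resource name may differ in your HSPC version):
oc get hspc -n <namespace>
oc get pods -n <namespace>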
The Hitachi Storage Plug-in for Containers (HSPC) has been successfully installed. If
you want to make an advanced configuration, refer to Configuration of Storage Plug-in
for Containers.
Configure Secret
The secret contains storage system information that enables access by Storage Plug-in for Containers. It contains the storage URL (VSP REST API), user, and password settings. Here is an example of the YAML manifest file:
apiVersion: v1
kind: Secret
metadata:
name: secret-vsp-113
namespace: test
type: Opaque
data:
url: aHR0cHM6Ly8xNzIuMjUuNDcuMTEz
user: b2NwdXNyMQ==
password: SGl0YWNoaTEh
The URL, user, and password are base64 encoded. Here is an example of how to get the base64 encoding of a user called “ocpusr1”; do the same for the URL and password:
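# base64 encode the user name (repeat for the URL and password values)
echo -n "ocpusr1" | base64
# returns b2NwdXNyMQ==, the value used in the secret above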
Create the secret by applying the manifest file with the following command, or use the OCP console as described in the following procedure:
oc create -f <secret-manifest-file>
Procedure
1. Login to the OCP console, select Workloads > Secrets.
2. Confirm the namespace in which you are creating the secret, in this example “test”.
3. Click Create. Select From YAML. Then, either copy/paste the content of the secret
manifest file or just set the URL, user, and password in base64 encoded, and assign a
name and corresponding namespace.
4. Click Create.
Configure StorageClass
The StorageClass contains storage settings that are necessary for Storage Plug-in for
Containers to work with your environment. The following YAML manifest file provides
information about the required parameters:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: sc-vsp-113
annotations:
kubernetes.io/description: Hitachi Storage Plug-in for Containers
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
parameters:
serialNumber: '40016'
poolID: '1'
portID: 'CL1-D,CL4-D'
connectionType: fc
csi.storage.k8s.io/fstype: ext4
csi.storage.k8s.io/provisioner-secret-namespace: test
csi.storage.k8s.io/provisioner-secret-name: secret-vsp-113
csi.storage.k8s.io/node-stage-secret-name: secret-vsp-113
csi.storage.k8s.io/controller-expand-secret-name: secret-vsp-113
Create the StorageClass by applying the manifest file with the following command, or use the OCP console as described in the following procedure:
oc create -f <storage-class-manifest-file>
Procedure
1. Log in to the OCP console, and then select Storage > StorageClasses.
2. Click Create StorageClass. Then, click Edit YAML. Then, either copy/paste the content
of the StorageClass manifest file or enter each of the corresponding settings.
3. Click Create.
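After the StorageClass is created, persistent volumes can be requested by referencing it from a PVC. The following is a minimal sketch of such a PVC manifest; the PVC name and size are illustrative only:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-vsp-example      # illustrative name
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: sc-vsp-113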
Configure Multipathing
For worker nodes connected to Hitachi VSP storage via FC or iSCSI, it is recommended that you enable multipathing. The requirement is to create the /etc/multipath.conf file, ensure that the user_friendly_names option is set to yes, and enable the multipathd service.
This can be done by applying a MachineConfig through the Machine Config Operator (MCO) to the OCP cluster after it has been deployed. Note that applying a MachineConfig will restart the worker nodes one at a time.
Consider the following before applying the multipath configuration:
■ For Fibre Channel, ensure that FC switches are configured with proper zoning for the
compute worker nodes and Hitachi VSP storage systems are accessible to each other.
■ For iSCSI, ensure the Hitachi VSP storage is properly configured for iSCSI and the
compute worker nodes can access the iSCSI targets. Also, for iSCSI, check the Hitachi
Storage Plug-in for Containers Release Notes for additional considerations regarding IQN
configurations.
■ RedHat CoreOS (RHCOS) already includes the device-mapper-multipath package
which is required to support multipathing. For solutions with iSCSI, RHCOS already has
the iSCSI initiator tools installed by default. There is no need to install any additional packages; apply the configurations as indicated in this section.
Configure multipathing for OCP worker nodes using MachineConfig:
To enable multipath for Hitachi HSPC, apply the MachineConfig below to the cluster. This will
enable multipathd (for FC and iSCSI) needed by the Hitachi VSP Storage and HSPC
integration on each worker node. It targets the worker nodes by using the label
machineconfiguration.openshift.io/role: worker.
The following YAML file can be used for both Fibre Channel and iSCSI configurations with
multipathing. To support iSCSI, uncomment the last three lines in the file.
A MachineConfig can be created directly from the command line using the following
command oc create -f <MachineConfigFile.yaml> or using the OpenShift console.
Procedure
1. To apply a MachineConfig using the OCP console, login to the OCP web console and
navigate to Compute > Machine Configs. Click Create Machine Config, then copy
and paste the content of the YAML file and click Create.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: workers-enable-multipath-conf
labels:
machineconfiguration.openshift.io/role: worker
spec:
config:
ignition:
version: 3.2.0
storage:
files:
- path: /etc/multipath.conf
mode: 400
filesystem: root
contents:
source: data:text/plain;charset=utf-8;base64,
ZGVmYXVsdHMgewogICAgICAgIHVzZXJfZnJpZW5kbHlfbmFtZXMgeWVzCiAgICAgICAgZmluZF9tdWx0a
XBhdGhzIHllcwp9CgpibGFja2xpc3Qgewp9Cg==
verification: {}
systemd:
units:
- name: multipathd.service
enabled: true
state: started
# Uncomment the following 3 lines if this MachineConfig will be used
with iSCSI
#- name: iscsid.service
# enabled: true
# state: started
osImageURL: ""
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
name: workers-enable-iscsi
labels:
machineconfiguration.openshift.io/role: worker
spec:
config:
ignition:
version: 3.2.0
systemd:
units:
- name: iscsid.service
enabled: true
state: started
osImageURL: ""
Note: The source data string located after the line source: data:text/plain;charset=utf-8;base64, for multipath.conf is base64 encoded. If you need to adapt the multipath.conf contents to suit your environment, run echo -n “<string>” | base64 -d to decode the contents of the config file, make your changes, and then re-encode the file using base64.
3. After the MachineConfig is created, every worker node is rebooted one at a time as the configuration is applied; it can take from 20 to 30 minutes to apply the configuration to all worker nodes. To verify that the machine config has been applied, use the oc get mcp command to confirm that the machine config pool for workers is updated. In addition, ssh to the worker nodes to confirm that the /etc/multipath.conf file has been created and the multipathd service is running; if iSCSI is used, also verify that the iscsid service is running. Here is an example:
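A sketch of these verification commands follows; node names and output will vary by environment:
oc get mcp worker
# ssh to a worker node (RHCOS uses the core user) and confirm the file and services
ssh core@<worker-node>
sudo cat /etc/multipath.conf
systemctl is-active multipathd
systemctl is-active iscsid    # only if the iSCSI MachineConfig was applied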
Here is an example where one virtual node is labeled vm, and one physical node is labeled
hspc-fc:
Procedure
1. From the OCP console, select Compute > Nodes.
2. From the nodes list click the ellipsis icon on a worker node (physical or virtual) and
select Edit Labels.
3. Assign the label using a key/value pair, for example: nodeType=hspc-fc, and then
click Save.
4. Use the following command to verify the labels:
oc get nodes --show-labels
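Alternatively, the same label can be applied from the command line, for example:
oc label node <worker-node-name> nodeType=hspc-fc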
Note: Storage Plug-in for Containers will overwrite host mode options even if
existing host groups have other host mode options.
It is recommended that you back up to a remote S3 target separate from your source
infrastructure. This ensures maximum protection of your data in case of a site or
infrastructure-specific failure.
Procedure
1. Log in to the OCP console, select OperatorHub under Operators, and then select the
OADP Operator and click Install.
2. Click Install to install the Operator in the openshift-adp project.
Once the operator has been installed, it looks like this when it is ready to use:
3. The next step is to configure the Data Protection Application instance which deploys
Velero and Restic pods.
Procedure
1. Log in to the OCP console, click Operators > Installed Operators and select the OADP
Operator.
2. Under Provided APIs, click Create instance in the DataProtectionApplication box.
3. Click YAML View and update the parameters of the DataProtectionApplication manifest
with the following:
■ HCP CS information:
● S3URL - Set this to the S3 endpoint URL of your HCP CS system.
● Region - You can choose any region for this variable because HCP for cloud
scale will accept any region name that is provided.
● Bucket - This variable should be set to the S3 bucket name you configured in
HCP for cloud scale.
● Credential - set this to the name of the secret with the access keys for HCP CS
(for example cloud-credentials).
● Specify a prefix for Velero backups.
● The snapshot location must be in the same region as the PVs.
4. Click Create.
5. Verify the installation of the OADP resources.
When the DataProtectionApplication instance is created, you should have Velero, Restic
pods, and services running within the openshift-adp namespace.
6. Run the oc get all -n openshift-adp command and wait until all the pods are
running successfully. The following figure shows both Velero and Restic pods running
within the openshift-adp namespace.
apiVersion: oadp.openshift.io/v1alpha1
kind: DataProtectionApplication
metadata:
name: velero-ocp-hcpcs
namespace: openshift-adp
spec:
backupLocations:
- velero:
config:
profile: default
region: us-west-1
insecureSkipTLSVerify: "true"
s3Url: "https://tryhcpforcloudscale.hitachivantara.com/"
s3ForcePathStyle: "true"
credential:
key: cloud
name: cloud-credentials
default: true
objectStorage:
bucket: ocp-eng-velero-target
prefix: velero
provider: aws
configuration:
restic:
enable: true
velero:
defaultPlugins:
- openshift
- aws
- csi
- kubevirt
featureFlags:
- EnableCSI
snapshotLocations:
- velero:
config:
profile: default
region: us-west-1
provider: aws
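The cloud-credentials secret referenced by the DataProtectionApplication must exist in the openshift-adp namespace before the instance is created. A minimal sketch, assuming the HCP for cloud scale access keys are stored in a local file named credentials-velero:
# credentials-velero
[default]
aws_access_key_id=<HCP CS access key>
aws_secret_access_key=<HCP CS secret key>

oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero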
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
name: volume-snapshot-class-csi-hspc
labels:
velero.io/csi-volumesnapshot-class: "true"
driver: hspc.csi.hitachi.com
deletionPolicy: Delete
parameters:
poolID: "1"
csi.storage.k8s.io/snapshotter-secret-name: "secret-vsp-113"
csi.storage.k8s.io/snapshotter-secret-namespace: "test"
Procedure
1. Create the VolumeSnapshotClass with the following command:
oc apply -f VolumeSnapshotClass_CSI_HSPC.yaml
oc get VolumeSnapshotClass
Persistent volume and data protection using Velero and Hitachi HCP
CS S3 – Scenario 1
The following layout shows a high-level configuration of the setup to perform the backup and
restore operation in the same OCP cluster using Velero and HSPC CSI snapshots.
The following figure shows the solution architecture with the solution to be validated.
Helm allows you to install complex container-based applications easily with the ability to
customize the deployment to your needs. On your Linux workstation, install the Helm binary
by following the Helm documentation for your distribution.
Add the Bitnami repository to your Helm configuration by running the following command:
helm repo add bitnami https://charts.bitnami.com/bitnami
Search for the MySQL Helm chart by running the following command:
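For example:
helm search repo bitnami/mysql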
The following figure shows example output, showing that the Helm binary is installed properly
and the bitnami repository has been added with a MySQL Helm chart available for use.
Verify StorageClasses
The OCP cluster used for this test is a hybrid, containing some virtual worker nodes hosted
on VMware ESXi and bare metal worker nodes. The bare metal worker nodes use FC HBAs
and are connected to Hitachi VSP storage.
On this OCP cluster there is a StorageClass using the vSphere CSI provisioner, and other StorageClasses using the Hitachi HSPC CSI provisioner. To verify the defined StorageClasses, enter the oc get sc command.
For this example, we used the following StorageClass:
■ One StorageClass sc-vsp-113 for the primary MySQL DB instance.
The following figure shows an example listing of StorageClasses available on the OCP
cluster.
oc new-project demoapps
The following is the command and the parameters used to deploy the MySQL Helm chart; a sketch of this command is provided after the list below.
You must modify these values to match your environment and the StorageClasses that are available in your OCP cluster. The values and their corresponding impact on the MySQL deployment are as follows:
■ nodeSelector - Sets the node type where the pod must be deployed.
■ database - Sets the name of the database to be created.
■ rootPassword - Sets the password for the root user in the MySQL DB.
■ global.storageClass - Sets the StorageClass to be used for the MySQL pod.
■ primary.persistence.size - Sets the size of the persistent volume to be assigned to the
MySQL DB.
■ Set the SecurityContext parameters according to your environment.
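A sketch of this Helm command follows, assuming the release name mysql-demo used elsewhere in this guide; the parameter names follow the Bitnami MySQL chart current at the time of writing, and the values shown (password, size, node label) are illustrative:
helm install mysql-demo bitnami/mysql \
  --namespace demoapps \
  --set primary.nodeSelector.nodeType=hspc-fc \
  --set auth.database=hur_database \
  --set auth.rootPassword=<root-password> \
  --set global.storageClass=sc-vsp-113 \
  --set primary.persistence.size=10Gi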
Execute the command and Helm will begin to deploy your MySQL deployment to your OCP
cluster.
The following figure shows output from Helm at the beginning of the deployment after the
install command has been issued.
You can monitor the MySQL deployment by viewing the resources in your demoapps
namespace. Run the oc get all -n demoapps command to display the readiness of the
pods, deployment, statefulsets, and services of the MySQL deployment.
The following figure shows an example of the output of this command for a fully running,
healthy MySQL deployment.
Procedure
1. Insert test data into the MySQL database.
Once the MySQL pod is ready, a new table called replication_cr_status is created in the hur_database database. Then a few records of test data are inserted
into this new table.
The following shows the commands used to connect to the MySQL DB to create a table
and insert test data.
2. Verify the data that has been inserted into the MySQL database; this data will be used to test and verify backup and restore of persistent volumes:
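A sketch of such a session, assuming the primary pod name is derived from the release name (mysql-demo-0); the exact pod name can be confirmed with oc get pods -n demoapps:
# open a MySQL shell inside the primary pod
oc exec -it mysql-demo-0 -n demoapps -- mysql -u root -p
mysql> USE hur_database;
mysql> SELECT * FROM replication_cr_status;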
Procedure
1. To list the PVCs created during the MySQL Helm chart deployment, run the following
command:
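oc get pvc -n demoapps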
The following figure shows the output of this command, including the details of the
persistent volume claim and the access mode that was specified during Helm chart
deployment.
3. Note the volume identifier and copy it for the next step.
4. Now that you have viewed the details of the PVC for the MySQL DB, explore the
associated PV to the claim by running the following command, entering your Volume
identifier from the previous step as the PV ID:
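oc describe pv <volume-identifier>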
The following figure shows the output of this command, including the details of the PV
created for the PVC. On the VolumeAttributes note the volume nickname for the next
step.
5. Open the Storage Navigator and connect to the VSP storage system.
6. In the left pane, expand Pools, and then click on Pool #1 which is the one specified in
the sc-vsp-113 StorageClass.
7. Click Virtual Volumes. You will see container volumes provisioned to your cluster in the
right pane.
8. Find the volume that matches the volume nickname from the previous step.
In this way, we can trace the full data path from the OCP cluster to the Hitachi VSP
storage system.
Back up the MySQL application using Red Hat OADP/Velero and HSPC CSI
snapshots
Follow this procedure to create and verify the status of the backup:
Procedure
1. From your Linux workstation, log in to your OCP cluster.
2. Create a backup custom resource (CR). Here is an example of the backup CR used to back up the MySQL DB:
cat demoapps-backup-csi.yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
namespace: openshift-adp
name: demoapps-backup-csi
labels:
velero.io/storage-location: default
spec:
includedNamespaces:
- demoapps
Apply the backup CR to start the backup:
oc apply -f demoapps-backup-csi.yaml
5. When the Velero backup is initiated, you can run the oc describe backup demoapps-backup-csi -n openshift-adp command to show the progress of the Velero backup.
Procedure
1. Run the oc get volumesnapshots -n demoapps command to list the
volumesnapshots created during the backup process. In the output below we can see
the MySQL PVC, the name of the volume snapshot, the volume snapshot class, and the
associated volume snapshot content name.
2. The following command shows additional details of the volume snapshot. We can see
the label corresponding to the name of the backup demoapps-backup-csi and the
MySQL PVC called data-mysql-demo-0.
4. If we use the describe command, we can see more details for the volume snapshot
content and trace back to the Hitachi VSP LDEV created.
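A sketch of these commands; substitute the snapshot and content names reported by the previous command:
oc describe volumesnapshot <volumesnapshot-name> -n demoapps
oc describe volumesnapshotcontent <volumesnapshotcontent-name>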
Explore the Hitachi Content Platform for cloud scale S3 object store
Follow this procedure to explore the data on the HCP CS S3 bucket.
Procedure
1. Open a web browser, navigate to your Hitachi Content Platform for cloud scale web
interface, and log in.
2. Browse the contents of your bucket, locate the ocp-eng-velero-target folder, and open it.
You will see the contents of all the Kubernetes objects that were backed up during the
Velero backup, with the exception of the PVs and their data.
Procedure
1. Log in to your OCP cluster from your Linux workstation.
2. Run the helm delete mysql-demo -n demoapps command to remove the MySQL
application.
3. When the application is removed, delete PVCs and PVs from the cluster.
Procedure
1. Run the velero backup get command to list the backups in the Velero database.
You can see the backups of the MySQL application that you took previously.
cat demoapps-restore-csi.yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
namespace: openshift-adp
name: demoapps-restore-csi
spec:
backupName: demoapps-backup-csi
includedNamespaces:
- demoapps
3. Run the following command to begin the restore of the MySQL application and its
associated PVs and data:
oc apply -f demoapps-restore-csi.yaml
5. After the Velero restore is submitted, run the oc describe restore demoapps-restore-csi -n openshift-adp command to view the progress of the restore. Note that the restore will
not show as completed until the application and all of its PVs and associated data are
restored.
6. Run the following commands to view all of the MySQL components that were restored.
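For example:
oc get all,pvc -n demoapps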
Persistent volume and data protection using Velero and Hitachi HCP
CS S3 – Scenario 2
The following layout shows a high-level configuration of the setup to perform the backup and
restore operations between primary and secondary OCP clusters using Velero and Restic.
The following figure shows the solution architecture with the solution to be validated.
Helm allows you to install complex container-based applications easily with the ability to
customize the deployment to your needs. On your Linux workstation, install the Helm binary
by following the Helm documentation for your distribution.
Add the Bitnami repository to your Helm configuration by running the following command:
helm repo add bitnami https://charts.bitnami.com/bitnami
Search for the MySQL Helm chart by running the following command:
The following figure shows example output, showing that the Helm binary is installed properly
and the bitnami repository has been added with a MySQL Helm chart available for use.
Verify StorageClasses
The OCP cluster used for this test is a hybrid, containing some virtual worker nodes hosted
on VMware ESXi and bare metal worker nodes. The virtual worker nodes are running on a
VMware vSphere cluster. The VMware cluster has been configured with different SPBM
policies that allow placement of the Persistent Volumes (PVs) into different types of storage
(vSAN, vVols, VMFS).
On this OCP cluster there is a StorageClass using the vSphere CSI provisioner, and other StorageClasses using the Hitachi HSPC CSI provisioner. To verify the defined StorageClasses, enter the oc get sc command.
For this example, we used the following three StorageClasses:
■ One for the frontend Wordpress pod
■ One for the primary MariaDB instance
■ One for the secondary MariaDB instance
The following figure shows an example listing of StorageClasses available on the OCP
cluster.
oc new-project demovelero
The following is the command and the parameters used to deploy the Wordpress Helm chart; a sketch of this command is provided after the list below.
You must modify these values to match your environment and the StorageClasses that are available in your OCP cluster. The values and their corresponding impact on the Wordpress deployment are as follows:
■ wordpressUsername - Sets the admin username for the Wordpress application.
■ wordpressPassword - Sets the password for the admin user in the Wordpress application.
■ replicaCount - Configures the number of frontend Wordpress pods.
■ persistence.storageClass - Sets the StorageClass to be used for the frontend Wordpress
pods.
■ persistence.size - Sets the size of the persistent volume to be assigned to the frontend
Wordpress pods.
■ mariadb.architecture - Indicates whether Helm should deploy a single backend database
(standalone, single pod) or a high-availability backend database (replication, two pods).
■ mariadb.primary.persistence.storageClass - Sets the StorageClass to be used for the
primary MariaDB instance.
■ mariadb.primary.persistence.size - Sets the size of the persistent volume to be assigned
to the primary MariaDB instance.
■ mariadb.secondary.persistence.storageClass - Sets the StorageClass to be used for the
secondary MariaDB instance.
Set the SecurityContext in your OCP cluster according to your environment and security
requirements.
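A sketch of this Helm command follows, assuming the release name wordpress used later in this guide; the parameter names follow the Bitnami WordPress chart current at the time of writing, and the StorageClass placeholders, sizes, and credentials are illustrative:
helm install wordpress bitnami/wordpress \
  --namespace demovelero \
  --set wordpressUsername=admin \
  --set wordpressPassword=<password> \
  --set replicaCount=1 \
  --set persistence.storageClass=<frontend-storageclass> \
  --set persistence.size=10Gi \
  --set mariadb.architecture=replication \
  --set mariadb.primary.persistence.storageClass=<primary-db-storageclass> \
  --set mariadb.primary.persistence.size=8Gi \
  --set mariadb.secondary.persistence.storageClass=<secondary-db-storageclass>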
Execute the command and Helm will begin to deploy your Wordpress deployment to your
OCP cluster.
The following figure shows output from Helm at the beginning of the deployment after the
install command has been issued.
You can monitor the Wordpress deployment by viewing the resources in your demovelero namespace. Run the oc get all -n demovelero command to display the readiness of the
pods, deployment, statefulsets, and services of the Wordpress deployment.
The following figure shows an example of the output of this command for a fully running,
healthy Wordpress deployment.
Procedure
1. To identify the host/port of the exposed Wordpress service, use the following command:
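For example:
oc get svc -n demovelero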
2. Open a browser and enter the address of the Wordpress service (http), and then the
Wordpress interface should display. The following shows an example of the default
Wordpress application user interface. We can identify the Wordpress application has
been deployed in OCP cluster #2 (jpc2).
3. Create some content, for this purpose browse to the admin interface of Wordpress (/
admin), and log in using the username and password you set in the Helm installation
script.
4. Click the Create for your first post link.
5. Enter information to create a blog post, and then publish the post.
6. Navigate back to the default URL of the Wordpress application to verify that your post
was committed to the database. Here is an example of the new post:
Procedure
1. To list the PVCs created during the Wordpress Helm chart deployment, run the following
command:
2. To observe details about the volume within vCenter, open a browser and then open a
vSphere web client session to the vCenter hosting the OCP cluster jpc2.
3. Highlight the vSphere cluster hosting the OCP cluster VMs and navigate to its Monitor
tab.
4. In the left pane expand Cloud Native Storage, and then click Container Volumes.
You will see container volumes provisioned to your cluster in the right pane.
In the next step, we are going to display more details for one of the container volumes,
the one assigned to the primary DB.
5. Find the volume that matches your PVC ID from the previous step, and then click on the
Details icon.
This displays the details about the volume that are surfaced from OpenShift, including
the persistent volume ID, namespace, labels, and pod allocation from within the OCP
cluster.
Back up the Wordpress application using Red Hat OADP/Velero and HSPC CSI
snapshots
Follow this procedure to create and verify the status of the backup:
Procedure
1. From your Linux workstation, log in to your OCP cluster.
2. Create a backup custom resource (CR). Make sure the defaultVolumesToRestic parameter is set to true. The following is an example of the backup CR used to back up the Wordpress application:
cat demowordpress-backup-restic.yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
namespace: openshift-adp
name: demowordpress-backup-restic
labels:
velero.io/storage-location: default
spec:
defaultVolumesToRestic: true
includedNamespaces:
- demovelero
oc apply -f demowordpress-backup-restic.yaml
5. When the Velero backup is initiated, you can run the oc describe backup demowordpress-backup-restic -n openshift-adp command to show the progress of the Velero backup.
Explore the Hitachi Content Platform for cloud scale S3 object store
Follow this procedure to explore the data on the HCP CS S3 bucket.
Procedure
1. Open a web browser, navigate to your Hitachi Content Platform for cloud scale web
interface, and log in.
2. Browse the contents of your bucket; for this test we are using a new bucket called onprem-vcf-velero-target.
3. Locate the onprem-vcf-velero-target folder and open it.
You will see the contents of all of the Kubernetes objects that were backed up during the Velero backup, with the exception of the PVs and their data.
4. Navigate back to the root of your bucket and select the plugins folder.
5. Navigate to restic/demovelero/data, where demovelero is the name of the namespace
for the Wordpress application.
6. Under data you will find multiple folders with the data.
These folders correspond to the PVs that were backed up from the vSphere container
volumes taken by the Red Hat OADP/Velero and Restic.
Procedure
1. Log in to your OCP cluster from your Linux workstation.
2. Run the helm delete wordpress -n demovelero command to remove the
Wordpress application.
3. When the application is removed, run the oc delete project demovelero
command to remove all of the resources including PVCs and PVs from the cluster.
Procedure
1. Verify the bare metal worker node has been marked as unschedulable.
2. Because we are restoring the application in a new cluster, make sure that the
StorageClasses have the same name as the ones used to deploy the Wordpress
application in the primary cluster.
The following example shows StorageClasses with the same name as those from the
primary OCP cluster:
3. The OADP/Velero in the secondary cluster has been installed/configured with the same
S3/bucket. Verify that the secondary cluster can see the backup made from the primary
cluster. Run the velero backup get command to list the backups in the Velero
database.
You can see the backup of the Wordpress application that you took previously.
6. Run the following command to begin the restore of the Wordpress application and its associated PVs and data:
oc apply -f demowordpress-restore-restic.yaml
Note that the restore will not show as complete until the application and all of its PVs
and associated data are restored.
8. Run the following commands to view all of the Wordpress components that were
restored in the demovelero namespace in the secondary cluster.
9. To observe details about the restored container volumes within vCenter, open a browser,
and then open a vSphere web client session to the vCenter hosting the secondary OCP
cluster jpc3.
10. Highlight the vSphere cluster hosting the secondary OCP cluster VMs and navigate to
its Monitor tab.
11. In the left pane expand Cloud Native Storage, and then click Container Volumes.
You will see the restored container volumes provisioned to your cluster in the right pane.
The following is an example of additional details of one of the restored volumes for the
primary DB.
Hitachi Replication Plug-in for Containers (HRPC) supports any Kubernetes cluster
configured with Hitachi Storage Plug-in for Containers; this guide covers the installation on a
Red Hat OpenShift Container Platform configured with Hitachi VSP storage. The
infrastructure for this demo is based on the Hitachi Unified Compute Platform.
For configuration details, see the Hitachi Replication Plug-in for Containers Configuration
Guide.
Requirements
Before installation, complete the following requirements:
■ Install two Kubernetes clusters, one in the primary and the other in the secondary site. A
single Kubernetes cluster is not supported.
■ Configure Hitachi Universal Replicator (HUR). For more details, see Universal Replicator
Overview.
■ Install Hitachi Storage Plug-in for Containers in both clusters, either Kubernetes or Red
Hat OpenShift Container.
■ For inter-site connectivity:
● Hitachi Replication Plug-in for Containers in the primary site must communicate with the Kubernetes cluster in the secondary site and vice versa.
● Hitachi Replication Plug-in for Containers in the primary site must communicate with
the storage system in the secondary site and vice versa.
● Connection is required between the primary and secondary storage system REST APIs.
● Fibre Channel or iSCSI connection is needed between primary and secondary storage
systems for data copy.
An example of the StorageClass (for both sites) is provided in Creating a manifest file
for Replication CR (on page 80).
■ Create a namespace in both primary and secondary sites. The namespace must have the
same name in both sites.
The following figure shows the remote connection configured between the two Hitachi VSP
storage systems; the first VSP is connected to the primary Kubernetes cluster, and the
second VSP is connected to the secondary Kubernetes cluster.
Procedure
1. Download and extract the installation media for HRPC into the management
workstation.
unzip hrpc_<version>.zip
2. Get the kubeconfig file from both the primary and secondary sites.
KUBECONFIG_P=/path/to/primary-kubeconfig
KUBECONFIG_S=/path/to/secondary-kubeconfig
SECRET_KUBECONFIG_P=/path/to/primary-kubeconfig-secret.yaml
SECRET_KUBECONFIG_S=/path/to/secondary-kubeconfig-secret.yaml
3. Configure an environment variable for the secret file of the storage system.
SECRET_STORAGE=/path/to/storage-secret.yaml
4. Copy the namespace manifest file to the management machine. This file is provided in
the media kit (hspc-replication-operator-namespace.yaml). Do not edit it.
5. Create a Secret manifest file with the secondary kubeconfig information to access
the secondary Kubernetes cluster from Hitachi Replication Plug-in for Containers
running in the primary Kubernetes cluster. For reference, see the remote-
kubeconfig-sample.yaml file. Here is an example:
# base64 encoding
cat ${KUBECONFIG_S} | base64 -w 0
vi secondary-kubeconfig-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: hspc-replication-operator-remote-kubeconfig
namespace: hspc-replication-operator-system
type: Opaque
data:
remote-kubeconfig: <base64 encoded secondary kubeconfig>
6. Create a Secret manifest file with the primary kubeconfig information to access
the primary Kubernetes cluster from Hitachi Replication Plug-in for Containers running in
the secondary Kubernetes cluster. For reference, see the remote-kubeconfig-
sample.yaml file.
Here is an example:
# base64 encoding
cat ${KUBECONFIG_P} | base64 -w 0
vi primary-kubeconfig-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: hspc-replication-operator-remote-kubeconfig
namespace: hspc-replication-operator-system
type: Opaque
data:
remote-kubeconfig: <base64 encoded primary kubeconfig>
7. Create a Secret manifest file containing storage system information that enables access
by Hitachi Replication Plug-in for Containers. For reference, see the storage-
secrets-sample.yaml file. This manifest file includes information for both the
primary and secondary storage systems.
Here is an example:
vi ${SECRET_STORAGE}
apiVersion: v1
kind: Secret
metadata:
name: hspc-replication-operator-storage-secrets
namespace: hspc-replication-operator-system
type: Opaque
stringData:
storage-secrets.yaml: |-
storages:
- serial: 40016 #Serial number, primary storage system
url: https://172.25.47.x #URL for the REST API server
user: UserPrimary #User, primary storage system
password: PasswordPrimary #Password for user
journal: 1 #Journal ID HUR primary storage system
- serial: 30595 #Serial number, secondary storage system
url: https://172.25.47.y #URL for the REST API server
user: UserSecondary #User, secondary storage system
password: PasswordSecondary #Password for user
journal: 1 #Journal ID HUR secondary storage system
8. Modify the Hitachi Replication Plug-in for Containers manifest file (hspc-
replication-operator.yaml) provided in the media kit based on your requirement
to use your private repository.
Part II: Install the Hitachi Replication Plug-in for Containers Operator
Procedure
1. From the management workstation, log in to both the primary and secondary clusters:
2. Create Namespaces in the primary and secondary sites. Use the same manifest file in
primary and secondary sites.
3. Create Secrets containing kubeconfig information in the primary and secondary sites. Use different manifest files for the primary and secondary sites.
4. Create Secrets containing storage system information in primary and secondary sites.
Use the same manifest file in primary and secondary sites.
5. Load the container image hrpc_<version>.tar (for example, using docker load or podman load for OpenShift) and push the loaded image to your private repository.
6. Create Hitachi Replication Plug-in for Containers in primary and secondary sites. Use
the same manifest file for both the primary and secondary sites.
7. Confirm that Hitachi Replication Plug-in for Containers is running in the primary and secondary sites.
Check the HRPC operator in the primary site:
At this point, the Hitachi Replication Plug-in for Containers operator is ready, and the next step is to install and test with a stateful app, or just create a PVC and a Pod that consumes the PVC.
Procedure
1. First, we need to add the Bitnami repository by running the following command:
helm repo add bitnami https://charts.bitnami.com/bitnami
Search for the MySQL Helm chart by running the following command:
2. Next, create a namespace for the stateful app for the demo. Use the following command
to create a namespace (project):
We can use the following commands to check the status of the MySQL pod and its
corresponding Persistent Volume.
The following data has been inserted into the MySQL database in order to test and verify the replicated PVC on the secondary site:
Note: A StorageClass with the same name must exist on the secondary site
(examples below). Also, a namespace with the same name must be created on
the secondary site before creating the Replication CR.
cat hspc_v1_msqldb_replication.yaml
apiVersion: hspc.hitachi.com/v1
kind: Replication
metadata:
name: replication-mysqldb1
spec:
persistentVolumeClaimName: data-mysql-hrpc-example-0
storageClassName: vsp-hrpc-sc
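Create the Replication CR by applying the manifest in the namespace where the PVC exists; for example:
oc apply -f hspc_v1_msqldb_replication.yaml -n <namespace>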
Here is an example of the StorageClass CR for the VSP storage on the Primary site:
cat vsp-hrpc-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: vsp-hrpc-sc
annotations:
kubernetes.io/description: Hitachi Storage Plug-in for Containers
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
serialNumber: "40016"
poolID: "1"
portID : CL1-D,CL4-D
connectionType: fc
csi.storage.k8s.io/fstype: ext4
csi.storage.k8s.io/node-publish-secret-name: "secret-vsp-113"
csi.storage.k8s.io/node-publish-secret-namespace: "test"
csi.storage.k8s.io/provisioner-secret-name: "secret-vsp-113"
csi.storage.k8s.io/provisioner-secret-namespace: "test"
csi.storage.k8s.io/controller-publish-secret-name: "secret-vsp-113"
csi.storage.k8s.io/controller-publish-secret-namespace: "test"
csi.storage.k8s.io/node-stage-secret-name: "secret-vsp-113"
csi.storage.k8s.io/node-stage-secret-namespace: "test"
csi.storage.k8s.io/controller-expand-secret-name: "secret-vsp-113"
csi.storage.k8s.io/controller-expand-secret-namespace: "test"
Here is an example of the StorageClass CR for the VSP storage on the Secondary site:
cat vsp-hrpc-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: vsp-hrpc-sc
annotations:
kubernetes.io/description: Hitachi Storage Plug-in for Containers
provisioner: hspc.csi.hitachi.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
serialNumber: "30595"
poolID: "12"
portID : CL1-B,CL4-B
connectionType: fc
csi.storage.k8s.io/fstype: ext4
csi.storage.k8s.io/node-publish-secret-name: "secret-vsp-112"
csi.storage.k8s.io/node-publish-secret-namespace: "test"
csi.storage.k8s.io/provisioner-secret-name: "secret-vsp-112"
csi.storage.k8s.io/provisioner-secret-namespace: "test"
csi.storage.k8s.io/controller-publish-secret-name: "secret-vsp-112"
csi.storage.k8s.io/controller-publish-secret-namespace: "test"
csi.storage.k8s.io/node-stage-secret-name: "secret-vsp-112"
csi.storage.k8s.io/node-stage-secret-namespace: "test"
csi.storage.k8s.io/controller-expand-secret-name: "secret-vsp-112"
csi.storage.k8s.io/controller-expand-secret-namespace: "test"
We can see that both StorageClasses have the same name, and each points to the respective VSP storage in its site. As these examples show, the StorageClass CR is similar to the one used for a non-replicated environment.
On the VSP Storage systems, we can verify that UR pairs have been automatically created
as well:
UR pairs on primary storage system:
The following command provides more details from the Replication CR, such as the storage serial number and the LDEV name for both the primary and secondary sites, which can easily be correlated with the LDEVs seen in the UR pairs.
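A sketch of this command, assuming the Replication CR name used above and the hspc.hitachi.com resource group used by the plug-in CRDs:
oc describe replications.hspc.hitachi.com replication-mysqldb1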
First confirm the status of the Replication CR is Ready and Operation value is none.
Then use the command below to edit the Replication CR and change the
spec.desiredPairState to split.
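For example (a sketch, assuming the Replication CR name used above):
oc edit replications.hspc.hitachi.com replication-mysqldb1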
After the edit, make sure the Replication CR status is split and the operation value is none.
To consume the replicated PVC on the secondary site, redeploy the MySQL Helm chart and point it at the existing claim; a command similar to the following can be used (the release name and namespace are assumptions and must match your environment):
helm install mysql-hrpc-example -n <namespace> \
--set primary.persistence.existingClaim=data-mysql-hrpc-example-0 \
bitnami/mysql
We can use the following commands to check the status of the MySQL pod and its
corresponding Persistent Volume on the Secondary site.
The next step is to connect to the MySQL database and verify the same data that was
created from the primary site.
The following query confirms that the PVC/ MySQL database contains the same data that
was inserted on the primary site.
Now that we have confirmed the same data exists on the PVC on the secondary site, we can uninstall the MySQL Helm chart; the PVC will remain because it is controlled by the Replication CR.
After the edit, make sure the Replication CR status is Ready and the Operation value is none.
Requirements
■ Install the Kubernetes or Red Hat OpenShift Container Platform.
■ Download the Storage Plug-in for Prometheus installation media kit from the Hitachi
Support Connect Portal: https://support.hitachivantara.com/en/user/home.html. A Hitachi
login credential is required.
■ Install Hitachi Storage Plug-in for Containers in Kubernetes or Red Hat OpenShift
Container Platform.
■ Configure StorageClass for Hitachi Storage Plug-in for Containers in Kubernetes or Red
Hat OpenShift Container Platform.
Verify that the cluster nodes are in the Ready state:
oc get nodes
Procedure
1. Download and extract the installation media.
2. Load the Storage Plug-in for Prometheus image into the repository.
3. Update the exporter.yaml with the corresponding registry hostname and port for the
cluster.
4. Update the secret-sample.yaml using the information from the VSP storage: serial number, storage system API URL, user, and password.
apiVersion: v1
kind: Secret
metadata:
  name: storage-exporter-secret
  namespace: hspc-monitoring-system
type: Opaque
stringData:
  storage-exporter.yaml: |-
    storages:
    - serial: 40016
      url: https://172.25.47.x
      user: MaintenanceUser
      password: PasswordForUser
oc apply -f yaml/namespace.yaml
oc apply -f yaml/scc-for-openshift.yaml
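The secret and exporter updated in the previous steps can then be applied; the file names below assume the sample names from the installation media:
oc apply -f yaml/secret-sample.yaml
oc apply -f yaml/exporter.yaml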
oc get pods
Procedure
1. In the grafana-prometheus-sample.yaml file, replace the StorageClass with your own StorageClass.
2. (Optional) Modify the Grafana service.
The grafana-prometheus-sample.yaml file exposes Grafana as a NodePort with a
random nodeport. If you want to expose Grafana in a different way, modify the
grafana-prometheus-sample.yaml file.
3. Deploy Grafana and Prometheus.
oc apply -f yaml/grafana-prometheus-sample.yaml
oc get pods
4. Access Grafana.
If you use NodePort, access Grafana with <Your Node IP Address>:<Grafana Port>.
You can identify <Grafana Port> by using the following command.
oc get svc
If you expose Grafana in a different way, obtain the endpoint yourself; for example, the following commands expose Grafana as an OpenShift route. The default Grafana user/password is admin/secret.
oc expose svc/grafana
oc get routes
The HSPC Volumes Dashboard shows metrics for Persistent Volumes such as Capacity,
Response Time, IOPS, Read/Write Transfer Rate, and Cache Hit Rate.
These metrics can be presented by Namespace, Persistent Volume Claim (PVC), Storage Class, Storage Serial Number, or Storage Pool ID.
When doing performance testing, you can view specific metrics as shown.
Deploying a private container registry using Red Hat Quay and Hitachi
HCP CS S3
Some environments do not allow Internet access to public image registries. In other cases, running a private registry is a security practice that some customers adopt.
Red Hat Quay is a distributed and highly available container image registry platform that, when integrated with Hitachi HCP CS S3, provides secure storage, distribution, and governance of container images on any infrastructure. It is available as a standalone component or can run on top of an OCP cluster.
Requirements
Before starting with the deployment of Red Hat Quay Operator on the OCP cluster, consider
the following:
■ The OCP cluster is using OpenShift 4.5 or later.
■ Ensure the OCP cluster has sufficient compute resources for Quay deployment, see Red
Hat Quay documentation for specific requirements.
■ Ensure that object storage is available. For this demo, we are using Hitachi HCP CS S3 storage.
Procedure
1. Log in to the OCP console, select OperatorHub under Operators, and then select the
Red Hat Quay Operator.
2. Select Install; the operator installation page appears. Select Install one more time.
After a short time, the installed operator is ready for use and can also be seen on the Installed Operators page.
Procedure
1. Open a web browser, navigate to your Hitachi Content Platform for cloud scale web
interface, and log in.
2. Browse the contents and create a bucket for the Quay repository.
The following example shows an onprem-ocp-quay-repo bucket that has been created for this purpose.
cat config_hcpcs.yaml
DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: tryhcpforcloudscale.hitachivantara.com
      s3_access_key: Hitachi_HCP_CS_access_key_here
      s3_secret_key: Hitachi_HCP_CS_secret_key_here
      s3_bucket: onprem-ocp-quay-repo
      storage_path: /
DISTRIBUTED_STORAGE_DEFAULT_LOCATIONS: []
DISTRIBUTED_STORAGE_PREFERENCE:
  - s3Storage
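The configuration file is then packaged into the config bundle secret referenced by the QuayRegistry CR below; a minimal sketch, assuming the openshift-operators namespace and the config.yaml key expected by the Quay Operator:
oc create secret generic config-bundle-secret \
--from-file=config.yaml=config_hcpcs.yaml \
-n openshift-operators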
cat quayregistry_hcpcs.yaml
apiVersion: quay.redhat.com/v1
kind: QuayRegistry
metadata:
  name: example-registry
  namespace: openshift-operators
spec:
  configBundleSecret: config-bundle-secret
  components:
    - kind: objectstorage
      managed: false
    - kind: clair
      managed: false
    - kind: horizontalpodautoscaler
      managed: false
    - kind: mirror
      managed: false
    - kind: monitoring
      managed: false
oc apply -f quayregistry_hcpcs.yaml
After a short time, the following pods for the Quay Registry are deployed. Note that these are the components for a minimal deployment; the number of pods might vary depending on the type of deployment.
We can also see that a PVC was automatically created for the Quay database:
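For example, assuming the openshift-operators namespace used in the QuayRegistry CR:
oc get pods -n openshift-operators
oc get pvc -n openshift-operators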
Procedure
1. In the OCP console, navigate to Operators > Installed Operators and select the appropriate namespace/project.
2. Click the newly installed Quay Registry to view its details:
3. Navigate to the URL of the Registry Endpoint, for example in this demo:
https://example-registry-quay-openshift-operators.apps.jpc0.ocp.hvlab.local/
4. Select Create Account in the Quay registry UI to create a user account.
For this test, we use the image for Hitachi Storage Plug-in for Prometheus, which has been downloaded to a local folder, and Podman to push the image to the Quay registry. First, we load the image, assign a tag, and then push the image to the registry:
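A sketch of this workflow with Podman; the image archive name, repository path, and tag are illustrative, and the registry hostname matches the endpoint shown above:
podman load -i storage-plugin-for-prometheus.tar
podman tag <loaded-image> example-registry-quay-openshift-operators.apps.jpc0.ocp.hvlab.local/<account>/storage-plugin-for-prometheus:latest
podman login example-registry-quay-openshift-operators.apps.jpc0.ocp.hvlab.local
podman push example-registry-quay-openshift-operators.apps.jpc0.ocp.hvlab.local/<account>/storage-plugin-for-prometheus:latest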
Conclusion
Hitachi Unified Compute Platform, Hitachi Virtual Storage Platform, Hitachi Content Platform
for cloud scale, Hitachi Storage Plug-ins for Containers, VMware CSI and VASA plugins, and
Red Hat OpenShift Container Platform combine to create a powerful and flexible Kubernetes
ecosystem.
This reference architecture highlights recommended approaches for using Red Hat OpenShift
in a Hitachi infrastructure environment (UCP and/or VSP) while taking advantage of various
Hitachi storage platforms and data storage integrations to achieve a highly resilient and
protected platform to deliver Kubernetes clusters and containers at scale.
Product descriptions
This section provides information about the hardware and software components used in this
solution for OpenShift on Hitachi Unified Compute Platform.
Hardware components
These are the hardware components available for Hitachi Unified Compute Platform.
The enterprise-class flash array Hitachi Virtual Storage Platform 5000 series (VSP) has an innovative, scale-out design optimized for NVMe and storage class memory. It achieves the following:
■ Agility using NVMe: Speed, massive scaling with no performance slowdowns, intelligent
tiering, and efficiency.
■ Resilience: Superior application availability and flash resilience. Your data is always
available, mitigating business risk.
■ Storage simplified: Do more with less, integrate AI (artificial intelligence) and ML
(machine learning), simplify management, and save money and time with consolidation.
Hitachi Virtual Storage Platform G series offers support for containers to accelerate cloud-
native application development. Provision storage in seconds, and provide persistent data
availability, all the while being orchestrated by industry leading container platforms. Move
these workloads into an enterprise production environment seamlessly, saving money while
reducing support and management costs.
Software components
These are the software components used in this reference architecture.
Red Hat Enterprise Linux High Availability Add-On allows a service to fail over from one node to another with no apparent interruption to cluster clients, evicting faulty nodes during transfer to prevent data corruption. This Add-On can be configured for most applications (both off-the-shelf and custom) and virtual guests, and supports up to 16 nodes. The High Availability Add-On features a cluster manager, lock management, fencing, command-line cluster configuration, and the Conga administration tool.
VMware vSphere
VMware vSphere is a virtualization platform that provides a datacenter infrastructure. It helps
you get the best performance, availability, and efficiency from your infrastructure and
applications. Virtualize applications with confidence using consistent management.