Single Node OpenShift
Summary
Single Node OpenShift (SNO) is a configuration of a standard OpenShift cluster that consists
of a single control plane node that is configured to run workloads on it. This configuration
offers both control and worker node functionality, allowing users to deploy a smaller
OpenShift footprint and have minimal to no dependence on a centralized management
cluster. SNO can run autonomously when needed, making it a useful solution for resource-constrained environments, demos, proofs of concept, or even on-premises deployments.
By deploying SNO, users can experience the benefits of OpenShift in a more compact
environment that requires fewer resources. It can also provide a simple and efficient way to
test new features or applications in a controlled environment. However, it's important to
keep in mind that SNO lacks high availability, so it may not be suitable for mission-critical
workloads that require constant uptime.
Overall, SNO offers a flexible and convenient way to deploy OpenShift for a variety of use
cases.
Highlights
Supports all valid combinations of industry solutions and add-ons based on the
compatibility matrix.
Supports up to 70 concurrent users.
An entitlement is needed for official support.
Supported on bare metal, vSphere, Red Hat OpenStack, and Red Hat Virtualization
platforms.
If you want to use Persistent Volumes, you’ll need an additional disk, preferably an SSD, and configure the ODF LVM Operator to use it.
When to use Single Node OpenShift?
For edge sites or scenarios where OpenShift clusters are required, but high availability is
not critical, Single Node OpenShift can be an appropriate solution.
For developers who want to experience a "real" cluster environment, Single Node
OpenShift is a good option. It enables them to develop and deploy applications in a
cluster environment, providing a "small" OpenShift experience.
It's important to note that Single Node OpenShift lacks high availability, which is a
tradeoff that should be considered.
I have Single Node OpenShift running in a bare metal environment with 16 cores, 64 GB RAM, and 2 SSDs, with MAS 8.10 and Manage 8.6. The first SSD holds the OS, and the second disk is configured to be used by the LVM Operator.
Use Cases
Small MAS and Manage-only implementations with up to 70 concurrent users
Satellite / disconnected deployments, possibly connected to a larger central MAS; data can be synced to a central data center for Maximo EAM
Upgrading small Maximo customers to MAS
Demo & PoC
Requirements
OpenShift: 4.10+
vCPU: 16 cores
RAM: 64 GB
IBM entitlement key: Log in to the IBM Container Library with a user ID that has software download rights for your company’s Passport Advantage entitlement to get the entitlement key.
OpenShift pull secret file (pull-secret): it can be downloaded from [here](https://access.redhat.com/management). You need a valid Red Hat account to download it.
MAS license file (license.dat): Access the IBM License Key Center, go to the Get Keys menu, and select IBM AppPoint Suites. Select IBM MAXIMO APPLICATION SUITE AppPOINT LIC. More details can be found here.
Docker/Podman
AWS
Valid AWS access key ID
Secret access key: If you don't have one, ask your AWS account admin to create one in the IAM service
Domain or subdomain: If you don't have one, ask your AWS account admin to register one through AWS Route 53
Bare metal/vSphere:
Requirements link
Note
If you are installing ODF LVM, you’ll need an additional disk, preferably an SSD, and configure the ODF LVM Operator to use it.
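For reference, a minimal LVMCluster resource for the ODF LVM Operator might look like the sketch below. The device class name vg1 matches the odf-lvm-vg1 storage class used later in this guide; the resource name, thin pool name, and sizing values are assumptions to adjust for your disk.
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      # Device class "vg1" backs the odf-lvm-vg1 storage class
      - name: vg1
        thinPoolConfig:
          name: thin-pool-1      # assumed pool name
          sizePercent: 90        # assumed: use 90% of the volume group
          overprovisionRatio: 10 # assumed overprovisioning factor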
OpenShift Installation
mkdir ~/sno
cd ~/sno
docker pull quay.io/ibmmas/cli
docker run -dit --name sno quay.io/ibmmas/cli:latest bash
Log in to the Docker container, create a folder for the MAS configuration, then exit the container.
Copy the pull-secret and MAS license file into the Docker container.
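For example (a sketch; the /mnt/home/masconfig path and the local file names are assumptions, adjust them to your setup):
docker exec -it sno bash                            # log in to the container
mkdir -p /mnt/home/masconfig                        # create a folder for the MAS configuration
exit
docker cp pull-secret.txt sno:/mnt/home/masconfig/  # copy the OpenShift pull secret
docker cp license.dat sno:/mnt/home/masconfig/      # copy the MAS license file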
AWS
Available commands:
- mas install to launch a MAS install pipeline
- mas provision-fyre to provision an OCP cluster on IBM DevIT Fyre (internal)
- mas provision-roks to provision an OCP cluster on IBMCloud Red Hat OpenShift Service (ROKS)
- mas provision-aws to provision an OCP cluster on AWS
- mas provision-rosa to provision an OCP cluster on AWS Red Hat OpenShift Service (ROSA)
- mas setup-registry to setup a private container registry on an OCP cluster
- mas mirror-images to mirror container images required by mas to a private registry
- mas configure-ocp-for-mirror to configure a cluster to use a private registry as a mirror
Run the command to provision the SNO AWS cluster. It will automatically detect the single node.
Enter your AWS credentials:
AWS API Key ID
AWS Secret Access Key
Cluster Name
AWS Region
AWS Base Domain
mas provision-aws
OCP Version:
1. 4.10 EUS
Select Version > 1
Bare Metal/vSphere
Installation
Storage Class
Note
You’ll need an additional disk, preferably an SSD, and configure the ODF LVM Operator to use it.
In the OpenShift Console UI, go to Storage -> StorageClasses using the left menu. You should see odf-lvm-vg1.
Click on it, and in the next screen click on the YAML tab.
Add storageclass.kubernetes.io/is-default-class: "true" under the annotations.
The YAML should look like this:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: odf-lvm-vg1
  uid: 55909d9c-882c-4cbb-962d-e7dbed289946
  resourceVersion: '7200873'
  creationTimestamp: '2023-03-26T02:15:25Z'
  annotations:
    description: Provides RWO and RWOP Filesystem & Block volumes
    storageclass.kubernetes.io/is-default-class: 'true'
provisioner: topolvm.cybozu.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  topolvm.cybozu.com/device-class: vg1
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
You can also use a CLI command to set the storage class as the default.
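For example, the following oc patch applies the same annotation (a sketch, assuming the class is named odf-lvm-vg1):
$ oc patch storageclass odf-lvm-vg1 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'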
You need to enable the image registry for building and pushing images. Link: configuring the registry for bare metal
In the OpenShift Console UI, go to Home -> Search and search for config.
Click cluster. Go to the YAML tab. Click on the top-right Actions drop-down and select Edit Config.
Update the cluster YAML:
managementState: Removed
to
managementState: Managed
rolloutStrategy: RollingUpdate
to
rolloutStrategy: Recreate
Set storage:
storage: {}
to
storage:
  pvc:
    claim: ''
You can also use oc edit to update the cluster YAML from the command line:
$ oc edit configs.imageregistry/cluster
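Alternatively, a single oc patch can apply all three changes at once (a sketch of the same edits):
$ oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec": {"managementState": "Managed", "rolloutStrategy": "Recreate", "storage": {"pvc": {"claim": ""}}}}'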
Check if the image-registry-storage PVC is bound. If it is in Pending status, please follow the steps in the "Troubleshooting" section before installing MAS and Manage.
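You can check its status from the CLI:
$ oc get pvc image-registry-storage -n openshift-image-registry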
MAS and Manage Installation
Log in to your OpenShift cluster: use the OpenShift Console top-right pull-down menu to get the login command.
Click on Copy login command, then click on the Display Token link on the page that opens, and copy the login command shown under Log in with this token:
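The command will look something like this (the token and server values below are placeholders):
$ oc login --token=sha256~<your-token> --server=https://<your-cluster-api>:6443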
You have access to 76 projects, the list has been suppressed. You can list all projects with 'oc projects'
mas install
Current Limitations
1. Support for airgap installation is limited to MAS 8.8 (core only) at present
3. Configure Installation
MAS Instance ID > sno
Use online catalog? [y/N] y
MAS Version:
1. 8.10
2. 8.9
Select Subscription Channel > 1
6. Application Selection
Install IoT [y/N]
Install Manage [y/N] y
Custom Subscription Channel > 8.6.x-dev
+ Create demo data [Y/n]
+ Configure JMS [y/N]
+ Customize database settings [y/N] y
Schema > maximo
Tablespace > maximo
Indexspace > maximo
Install Optimizer [y/N]
Install Visual Inspection [y/N]
Install Predict [y/N]
Install Health & Predict - Utilities [y/N]
Install Assist [y/N]
7. Configure Db2
The installer can set up one or more IBM Db2 instances in your OpenShift cluster for the use of applications.
System Database configuration for IoT is not required because the application is not being installed.
8. Additional Configuration
Additional resource definitions can be applied to the OpenShift cluster during the MAS configuration step.
The primary purpose of this is to apply configuration for Maximo Application Suite itself, but you can use it to apply any additional resources you need.
Select the ReadWriteOnce storage classes to use from the list below:
- odf-lvm-vg1
11. Review Settings
View progress:
https://console-openshift-console.apps.sno.buyermas4aws.com/pipelines/ns/mas-sno-pipelines
Tekton Pipeline
You can see the installation progress and logs from the OpenShift Console in the mas-sno-pipelines namespace. Select the Pipelines menu in the left navigation bar, click on the PipelineRuns tab, and select the pipeline run. You can click on any task and view its logs.
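You can also follow the run from the CLI (a sketch, assuming the Tekton CLI tkn is installed and the namespace matches the instance ID chosen above):
$ oc get pipelineruns -n mas-sno-pipelines              # list the pipeline runs
$ tkn pipelinerun logs --last -f -n mas-sno-pipelines   # follow logs of the most recent run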
Troubleshooting
Bare Metal/vSphere
To enable building and pushing of images, the image-registry-storage PVC should be in Bound status. If the image-registry-storage PVC is in Pending status, you need to follow the steps below to update it:
In the OpenShift Console UI, go to Storage -> PersistentVolumeClaims. Select the image-registry-storage PVC.
Go to the YAML tab and download it using the Download button on the bottom right.
Update the downloaded file.
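The updated PVC might look like the sketch below (the storage class matches the default set earlier; the 100Gi size and ReadWriteOnce access mode are assumptions to adjust for your environment). Delete the Pending PVC and recreate it from this file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteOnce        # odf-lvm-vg1 provides RWO volumes
  resources:
    requests:
      storage: 100Gi       # assumed size, adjust as needed
  storageClassName: odf-lvm-vg1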