Mop ZTS

This document outlines the steps to deploy the ZTS application on a Kubernetes (NCS) cluster: verifying node labels and taints, checking networking and storage configuration, downloading and extracting the ZTS package, creating the tenant and users in NCS, onboarding images and Helm charts, generating certificates, preparing the Helm values files, and deploying the Helm charts. Once deployed, check that the Helm charts are installed correctly and all pods are running.

Uploaded by Manoj Kumar

ZTS DEPLOYMENT MOP

1. First, verify taints and labels on the NCS nodes

kubectl get nodes --show-labels


Expected labels:
is_edge=true on edge nodes
is_worker=true on worker nodes
is_control=true on control nodes

[root@stl-ncs-lab-4thjuly-control-01 cloud-user]# kubectl get nodes --show-labels

NAME STATUS ROLES AGE VERSION LABELS

stl-ncs-lab-4thjuly-control-01 Ready <none> 6d21h v1.21.9 bcmt_storage_node=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=c8_r16_d600-Controller,beta.kubernetes.io/os=linux,cpu_pooler_active=false,dynamic_local_storage_node=false,failure-domain.beta.kubernetes.io/region=regionOne,failure-domain.beta.kubernetes.io/zone=zone-ovs,is_control=true,is_edge=false,is_storage=true,is_worker=false,kubernetes.io/arch=amd64,kubernetes.io/hostname=stl-ncs-lab-4thjuly-control-01,kubernetes.io/os=linux,local_storage_node=false,ncs.nokia.com/group=group_01,ncs.nokia.com/multus_node=true,node.kubernetes.io/instance-type=c8_r16_d600-Controller,rook_storage2=false,rook_storage=false,topology.cinder.csi.openstack.org/zone=zone-ovs,topology.kubernetes.io/region=regionOne,topology.kubernetes.io/zone=zone-ovs

stl-ncs-lab-4thjuly-control-02 Ready <none> 6d21h v1.21.9 bcmt_storage_node=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=c8_r16_d600-Controller,beta.kubernetes.io/os=linux,cpu_pooler_active=false,dynamic_local_storage_node=false,failure-domain.beta.kubernetes.io/region=regionOne,failure-domain.beta.kubernetes.io/zone=zone-ovs,is_control=true,is_edge=false,is_storage=true,is_worker=false,kubernetes.io/arch=amd64,kubernetes.io/hostname=stl-ncs-lab-4thjuly-control-02,kubernetes.io/os=linux,local_storage_node=false,ncs.nokia.com/group=group_01,ncs.nokia.com/multus_node=true,node.kubernetes.io/instance-type=c8_r16_d600-Controller,rook_storage2=false,rook_storage=false,topology.cinder.csi.openstack.org/zone=zone-ovs,topology.kubernetes.io/region=regionOne,topology.kubernetes.io/zone=zone-ovs

stl-ncs-lab-4thjuly-control-03 Ready <none> 6d21h v1.21.9 bcmt_storage_node=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=c8_r16_d600-Controller,beta.kubernetes.io/os=linux,cpu_pooler_active=false,dynamic_local_storage_node=false,failure-domain.beta.kubernetes.io/region=regionOne,failure-domain.beta.kubernetes.io/zone=zone-ovs,is_control=true,is_edge=false,is_storage=true,is_worker=false,kubernetes.io/arch=amd64,kubernetes.io/hostname=stl-ncs-lab-4thjuly-control-03,kubernetes.io/os=linux,local_storage_node=false,ncs.nokia.com/group=group_01,ncs.nokia.com/multus_node=true,node.kubernetes.io/instance-type=c8_r16_d600-Controller,rook_storage2=false,rook_storage=false,topology.cinder.csi.openstack.org/zone=zone-ovs,topology.kubernetes.io/region=regionOne,topology.kubernetes.io/zone=zone-ovs
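The label strings above are dense; to check only the role labels on one node, the comma-separated label string can be split with standard text tools. A minimal sketch (the label string below is abbreviated from the control-01 output above):

```shell
# Split a node's comma-separated label string and keep only the is_* role labels.
labels="is_control=true,is_edge=false,is_storage=true,is_worker=false,kubernetes.io/os=linux"
echo "$labels" | tr ',' '\n' | grep '^is_'
```

Alternatively, `kubectl get nodes -L is_control,is_edge,is_worker` prints these labels as separate columns, which is easier to scan across all nodes.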

2. Check the application VLANs/networks on the edge nodes


kubectl get nodes
ssh -i bcmt-cluster.pem cloud-user@stl-ncs-lab-4thjuly-edge-01 ip a

Verify the host device and subnet details on the edge nodes (e.g. eth2, eth3, vlan111, …)
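The interface check can be scripted by filtering `ip -br a`-style output for the expected interfaces. A sketch below; the sample listing and addresses are made up for illustration, and in practice the listing comes from the ssh command above:

```shell
# Keep only the expected application interfaces from a brief interface listing.
cat <<'EOF' | awk '$1 ~ /^(eth2|eth3|vlan111)$/ {print $1, $2, $3}'
lo      UNKNOWN  127.0.0.1/8
eth0    UP       10.60.10.21/24
eth2    UP       192.168.111.5/24
vlan111 UP       10.111.0.5/24
EOF
```

This prints only the eth2 and vlan111 rows with their state and subnet, so a missing interface or wrong subnet stands out immediately.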
3. Check the storage class as per the deployment model (CN-A or CN-B)

In this case we are working with the CN-A model, so the requirement is GlusterFS.

4. Download the ZTS package from NOLS --- ZTS_1.16.119.tar.gz


5. Upload this package to the control node/deploy server via FTP
6. Untar the package

tar -xvf ZTS_1.16.119.tar.gz

7. Create the tenant and user for ZTS


ncs config set --endpoint=https://10.60.10.9:8082/ncm/api/v1 --- NCS GUI endpoint
ncs user login --username=cnf_user --password=Cnf#76543 --- the NCS user must have admin rights
ncs tenant create --config ntas-tenant.json
JSON content:
cat zts-tenant.json

[root@stl-ncs-lab-4thjuly-control-01 TAS]# ncs tenant create --config ntas-tenant.json

{"task-status": "running"}

ncs user login --username zcfx-admin --password Nokia@123


....

Run the output of the above command to reset the password.


{"task-status":"done"}

After the reset, log in again with the updated password:

ncs user login --username=tas-admin --password=Nokia@111


The tenant is now created successfully.

8. Onboard images and charts

tar -xvf ZTS_1.12.144.tar.gz


cd ZTS_1.12.144
cd INSTALL_MEDIA/
ls -lrth
cd MISC/
chmod 744 ncm_onboarding_script.sh
sh ncm_onboarding_script.sh zts <tenant name> --- takes approx. 90-100 min

Sample output: verify the images were uploaded

# ncs tenant-app-resource image list --- run as zcfx-admin


ncs tenant-app-resource image list --tenant_name <tenant name>

CHART UPLOAD

sh ncm_onboarding_script.sh harbor_chart <tenant name>

9. Generate the ca.pem file for deployment

kubectl get secret citm-cert-tls -n ncms -o yaml | grep ca.crt

echo "<ca.crt content>" | base64 -d --- put this output into a ca.pem file in the CHARTS folder

kubectl get secret citm-cert-tls -n ncms -o jsonpath="{.data['ca\.crt']}" | base64 -d
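The two steps above can be combined into one line that writes the decoded CA straight into the CHARTS folder; the redirect target is an assumption based on step 10. The decode step itself is shown below with a dummy payload so it can be tried without cluster access:

```shell
# On the cluster, the one-liner is (secret name and namespace from the steps above):
#   kubectl get secret citm-cert-tls -n ncms -o jsonpath="{.data['ca\.crt']}" | base64 -d > CHARTS/ca.pem
# The decode step in isolation, with a dummy payload for illustration:
payload=$(printf -- '-----BEGIN CERTIFICATE-----\ndummy\n-----END CERTIFICATE-----\n' | base64)
echo "$payload" | base64 -d > ca.pem
head -1 ca.pem   # -----BEGIN CERTIFICATE-----
```

A quick `head -1 ca.pem` sanity check confirms the file starts with a PEM header rather than raw base64.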

10. Prepare zts.yaml (acord file) and zts_ref.yaml (plato-conf file) and copy both files into the CHARTS folder
11. Now go to the CHARTS folder and untar the Helm charts

tar -xvf helmchart.tar.gz

DEPLOY HELM CHARTS

helm3 install multus ./multuschart --version 1.15.128 (chart version) -f configztsok.yaml (acord file) -f ztsaccordok.yaml (plato file) --timeout 15m --namespace zts-admin-ns (zts namespace) --ca-file ca.pem (pem file)

helm3 install multus ./multuschart --version 1.16.63 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install ztc-ancillary ./ztc-ancillary --version 1.16.21 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install cmdb ./cmdb --version 1.16.12 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install caserver ./masterca --version 1.16.161 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install lcm ./lcmservice --version 1.16.157 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install um-sd ./um-sd --version 1.16.119 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install ztsl ./ztsl --version 1.16.119 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install clustermonitorservice ./clustermonitorservice --version 1.16.133 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install cliserver ./cliserver --version 1.16.75 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install zts-sa ./zts-sa --version 1.16.166 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install lms ./lms --version 1.16.84 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem

helm3 install efsclient ./efsclient --version 1.15.90 -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m --namespace ztas-admin-ns --ca-file ca.pem
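The installs above differ only in release name, chart directory, and version, so they can be driven from a small table. A sketch, with `echo` as a dry run (drop the `echo` to execute; release/chart/version values are copied from the commands above):

```shell
# Drive the repeated helm3 installs from a release / chart / version table.
while read -r release chart version; do
  echo helm3 install "$release" "./$chart" --version "$version" \
    -f ztsconfig23.yaml -f ztsacord23.yaml --timeout 15m \
    --namespace ztas-admin-ns --ca-file ca.pem
done <<'EOF'
multus multuschart 1.16.63
ztc-ancillary ztc-ancillary 1.16.21
cmdb cmdb 1.16.12
caserver masterca 1.16.161
lcm lcmservice 1.16.157
um-sd um-sd 1.16.119
ztsl ztsl 1.16.119
clustermonitorservice clustermonitorservice 1.16.133
cliserver cliserver 1.16.75
zts-sa zts-sa 1.16.166
lms lms 1.16.84
efsclient efsclient 1.15.90
EOF
```

The table form also makes version bumps for a new ZTS release a one-line change per chart.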

12. Check that all Helm charts are deployed and all pods are in the Running state.
13. Log in to the Keycloak GUI:

https://<envoy-lb-ip>:9090

helm3 ls -a -n ztas-admin-ns

ROLLBACK / CLEANUP (only if the deployment must be removed):

helm3 delete caserver cliserver clustermonitorservice cmdb lcm lms multus um-sd ztc-ancillary zts-sa ztsl -n ztas-admin-ns --no-hooks

kubectl get pods -n ztas-admin-ns

kubectl delete pvc --all -n ztas-admin-ns

kubectl get pods -n ztas-admin-ns


kubectl delete service --all -n ztas-admin-ns

kubectl delete jobs --all -n ztas-admin-ns

kubectl delete cm --all -n ztas-admin-ns

kubectl delete rolebinding --all -n ztas-admin-ns

kubectl delete role --all -n ztas-admin-ns

kubectl delete sa --all -n ztas-admin-ns
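The per-resource deletes above can be collapsed into one loop. A sketch with `echo` as a dry run (remove the `echo` to actually delete; the resource kinds match the commands above):

```shell
# Delete all leftover namespaced resources of each kind in the ZTS namespace.
for kind in pvc service jobs cm rolebinding role sa; do
  echo kubectl delete "$kind" --all -n ztas-admin-ns
done
```

Keeping the kinds in one list makes it easy to confirm nothing was skipped before a redeployment.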
