Smart Edge Open Implementation Guide

This document provides an overview of Viettel's implementation plan for Smart Edge Open. It outlines a 4-cluster lab deployment with variations for on-prem edge, access edge, regional edge, and central DC scenarios. The implementation tasks are split into 3 parts: prerequisites and cluster infrastructure preparation, Smart Edge Open cluster setup, and application onboarding. An example lab setup is provided with IP addresses and hostnames for the controller and 2 edge nodes. OS requirements and installation are defined for all nodes.


Author: Clarence Anslem
Change Authority: Intel NCS

History
Version No.  Issue Date         Status  Reason for Change
V0.1         22 September 2021  Draft   1st Draft

Review
Reviewer's Details  Version No.  Date
Ranjan Mishra       0.1

Document Conventions

Note: Alerts readers to take note. Notes contain helpful suggestions or references to material not covered in the document.

Caution: Alerts readers to be careful. In this situation, you might do something that could result in equipment damage or loss of data.

Timesaver: Alerts the reader that they can save time by performing the action described in the paragraph affixed to this icon.

Tip: Alerts the reader that the information affixed to this icon will help them solve a problem. The information might not be troubleshooting or even an action, but it could be useful information similar to a Timesaver.

1 Introduction

1.1 Preface
The Smart Edge Open solution is built on top of Kubernetes, which is a production-grade container orchestration environment. A typical Smart Edge Open-based deployment consists of a Smart Edge Open Kubernetes Control Plane and a Smart Edge Open Edge Node.

➢ Smart Edge Open Kubernetes Control Plane: This node consists of microservices and
Kubernetes extensions, enhancements, and optimizations that provide the functionality to
configure one or more Smart Edge Open Edge Nodes and the application services that run on
those nodes (Application Pod Placement, Configuration of Core Network, etc.).

➢ Smart Edge Open Edge Node: This node consists of microservices and Kubernetes
extensions, enhancements, and optimizations that are needed for edge application and
network function deployments. It also consists of APIs that are often used for the discovery of
application services.

➢ Another key ingredient is the 4G/5G core network functions that enable a private or public
edge. Smart Edge Open uses reference network functions to validate this end-to-end edge
deployment. This is key to understanding and measuring edge Key Performance Indicators
(KPIs).

1.2 Architecture

The Smart Edge Open Kubernetes Control Plane consists of vanilla Kubernetes Control Plane components along with Smart Edge Open microservices that interact with the Kubernetes Control Plane using Kubernetes-defined APIs.

The following are the high-level features of the Smart Edge Open Kubernetes Control Plane building
blocks:

• Configuration of the hardware platform that hosts applications and network functions
• Configuration of network functions (4G, 5G, and WiFi*)
• Detection of various hardware and software capabilities of the edge cluster and their use for
scheduling applications and network functions
• Setup of network and DNS policies for applications and network functions
• Enable collection of hardware infrastructure, software, and application monitoring
• Expose edge cluster capabilities northbound to a controller

The Smart Edge Open Edge Node consists of vanilla Kubernetes Node components along with Smart Edge Open building blocks that interact with the Kubernetes node using Kubernetes-defined APIs.

The following are the high-level features of the Smart Edge Open Kubernetes node building blocks:

• Container runtime (Docker*) and virtualization infrastructure (libvirt*, Open vSwitch (OVS)*,
etc.) support
• Platform pods consisting of services that enable the configuration of a node for a particular
deployment, device plugins enabling hardware resource allocation to an application pod, and
detection of interfaces and reporting to the Control Plane.
• System pods consisting of services that enable reporting the hardware and software features
of each node to the Control Plane, resource isolation service for pods, and providing a DNS
service to the cluster
• Telemetry consisting of services that enable hardware, operating system, infrastructure, and
application-level telemetry for the edge node
• Support for a real-time kernel for low-latency applications and network functions (such as 4G
and 5G base stations), as well as a non-real-time kernel

The Smart Edge Open Network functions are the key 4G and 5G functions that enable edge cloud
deployment. Smart Edge Open provides these key reference network functions and the configuration
agent in the Intel Distribution of Smart Edge Open.

1.3 Scope
This document is intended for Viettel teams starting to use the Smart Edge Open platform to onboard applications in their cloud-native environments. It gives an overview of the implementation and architecture of the platform. Viettel will need to provide the requirements and a solution overview for the workloads / use cases they intend to onboard or implement.

1.4 Assumptions
The following assumptions are made:

The user of this document has the necessary understanding of Linux administration and some knowledge of the technologies described in the document.
The user should also have access to equipment in a lab / sandbox environment to follow this guide.

1.5 Related Documents

Doc #  Title                   Link / Version
1.     List of Docs & Links    https://github.com/open-ness/ido-specs
2.     Architecture            https://github.com/open-ness/ido-specs/blob/master/doc/architecture.md
3.     Cluster Setup           https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-cluster-setup.md#exchanging-ssh-keys-between-hosts
4.     Application Onboarding  https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md
5.     Edge Software Hub

2 Lab Setup Overview

2.1 Viettel Lab deployment guidance


The diagram below is a guide for the connectivity of the Viettel lab. As there are 2 switches in the Viettel lab environment, both devices can be used to provide connectivity for the Smart Edge clusters.

Below we see 4 clusters with 3 different variations. These clusters can be considered as simulating different edge locations.

1) Cluster 1 – On Prem Edge for Private Wireless and IoT Workloads
   - 1 x Controller Node
   - 2 x Edge Nodes

2) Cluster 2 – Single Node Cluster (Access Edge for vRAN / O-RAN Workloads)
   - Combined Controller & Edge Node in a single node

3) Cluster 3 – Regional Edge for CU or distributed UPF instance
   - 1 x Controller Node
   - 2 x Edge Nodes

4) Cluster 4 – Central DC for 5G Core central UPF / CPF
   - 1 x Controller Node
   - 2 x Edge Nodes

Deployment flow

There are 3 main sections to setting up the Smart Edge Open cluster:

1) Node preparation (HW, connectivity, OS, utilities)
2) Smart Edge Open cluster installation
3) Application onboarding & integration

3 Implementation Plan

3.1 Implementation Tasks Overview


Implementation of Smart Edge Open can be split into 3 main parts:

Step No.  Task
1         Prerequisites and cluster infra preparation
2         Smart Edge Open cluster setup
3         Application onboarding

Intel lab setup for this installation:

                    Controller (Host3)   Edge Node 1 (Host1)   Edge Node 2 (Host2)
IP Address (MGMT)   192.168.1.132        192.168.1.131         192.168.1.133
Hostname            host3                host1                 host2

3.2 Prerequisites and Cluster Infra preparation

OS Requirements & Installation
CentOS* 7.9.2009 must be installed on all the nodes (the controller and edge nodes) where the product is deployed. It is highly recommended to install the operating system from a minimal ISO image on all nodes that will take part in the deployment (as listed in the inventory file). Also, do not make customizations after a fresh manual install, because they might interfere with the Ansible scripts and give unpredictable results during deployment.

--------------------@Host3 Controller
[root@host3 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

--------------------@Host 1 Edge-Node

[root@host1 ~]# cat /etc/os-release


NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

Hostname Specification
Hosts for the Edge Controller (Kubernetes control plane) and Edge Nodes (Kubernetes nodes) must
have proper and unique hostnames (i.e., not localhost). This hostname must be specified in /etc/hosts
(refer to Setup static hostname).

--------------------@Host3 Controller

[root@host3 ~]# hostnamectl set-hostname host3


[root@host3 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
host3
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
host3
192.168.1.131 Host1

! Also add local entries for the other hosts (host1, host2, ..., hostn) so that
! management will be easier; a consolidated sketch follows at the end of this section
! Reload for changes to take effect

--------------------@Host 1 Edge-Node

[root@host1 ~]# hostnamectl set-hostname host1


[root@host1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
host1
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
host1
192.168.1.132 Host3

! Reload for changes to take effect
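
For reference, here is a minimal sketch of the consolidated /etc/hosts entries covering all three lab hosts; the IP addresses and hostnames are those from the lab table in section 3.1, and the same entries can be added on every node:

--------------------@Any host

192.168.1.132 host3    # Controller
192.168.1.131 host1    # Edge Node 1
192.168.1.133 host2    # Edge Node 2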

SSH Keygen & Exchange
Exchanging SSH keys between hosts permits a password-less SSH connection from the host running
Ansible to the hosts being set up.

In the first step, the host running Ansible (usually the Edge Controller host) must have a generated
SSH key. The SSH key can be generated by executing ssh-keygen and obtaining the key from the
output of the command.

The following is an example of a key generation, in which the key is placed in the default directory
(/root/.ssh/id_rsa), and an empty passphrase is used.
--------------------@Host3 Controller

# ssh-keygen

Generating public/private rsa key pair.


Enter file in which to save the key (/root/.ssh/id_rsa): <ENTER>
Enter passphrase (empty for no passphrase): <ENTER>
Enter same passphrase again: <ENTER>
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:vlcKVU8Tj8nxdDXTW6AHdAgqaM/35s2doon76uYpNA0 root@host
The key's randomart image is:
+---[RSA 2048]----+
| .oo.==*|
| . . o=oB*|
| o . . ..o=.=|
| . oE. . ... |
| ooS. |
| ooo. . |
| . ...oo |
| . .*o+.. . |
| =O==.o.o |
+----[SHA256]-----+

In the second step, the generated key must be copied to every host from the inventory, including the host on which the key was generated (i.e., the Controller) if it appears in the inventory. This is done by running ssh-copy-id; a loop covering all hosts is sketched after the example below.

--------------------@Host3 Controller

# ssh-copy-id root@<IP>

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed:


"/root/.ssh/id_rsa.pub"
The authenticity of host '<IP> (<IP>)' can't be established.
ECDSA key fingerprint is SHA256:c7EroVdl44CaLH/IOCBu0K0/MHl8ME5ROMV0AGzs8mY.
ECDSA key fingerprint is MD5:38:c8:03:d6:5a:8e:f7:7d:bd:37:a0:f1:08:15:28:bb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out
any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted
now it is to install the new keys
root@host's password:

Number of key(s) added: 1

Now, try logging into the machine, with: "ssh 'root@host'"


and check to make sure that only the key(s) you wanted were added.
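
To repeat the exchange for every host in the inventory, a minimal loop such as the one below can be used; it assumes root access and the lab hostnames used in this guide, and will prompt once for each host's password:

--------------------@Host3 Controller

# for h in host1 host2 host3; do ssh-copy-id root@$h; done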

Proxy Setup

A proxy was not used in our test lab. If there is a proxy requirement for your environment,
please refer to the following link:
https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-cluster-setup.md#setting-proxy
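
As an illustration only (the exact variable names and file locations are defined in the setting-proxy guide linked above), proxy settings for the experience kits are provided through the group vars, along these lines; the proxy URL is a placeholder:

--------------------@Host3 Controller

!! Illustrative sketch; confirm exact keys in the setting-proxy guide
proxy_env:
  http_proxy: "http://proxy.example.com:3128"     !! placeholder proxy URL
  https_proxy: "http://proxy.example.com:3128"
  no_proxy: "localhost,127.0.0.1,192.168.1.0/24"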

Configuring Time
To allow for correct certificate verification, Smart Edge Open requires system time to be synchronized
among all nodes and controllers in a system.

--------------------@Host3 Controller

[root@host3 ~]# timedatectl set-timezone 'Asia/Singapore'


[root@host3 ~]# timedatectl
Local time: Fri 2021-09-17 13:07:42 +08
Universal time: Fri 2021-09-17 05:07:42 UTC
RTC time: Fri 2021-09-17 05:07:42
Time zone: Asia/Singapore (+08, +0800)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a

--------------------@Host 1 Edge-Node

[root@host1 ~]# timedatectl set-timezone 'Asia/Singapore'


[root@host1 ~]# timedatectl
Local time: Fri 2021-09-17 13:07:11 +08
Universal time: Fri 2021-09-17 05:07:11 UTC
RTC time: Fri 2021-09-17 05:07:11
Time zone: Asia/Singapore (+08, +0800)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: n/a

Smart Edge Open also provides the possibility to synchronize a machine's time with an NTP server.

In this lab setup we used local time. If NTP is required, changes need to be made in the .yml file; a sketch follows below, and full instructions are at the link below:
https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-cluster-setup.md#configuring-time

Note that the GitHub repository needs to be cloned first to obtain the required .yml file; instructions are in the next section (3.3).
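
As an illustration only (confirm the exact variable names in the configuring-time guide linked above), enabling NTP in the group vars generally looks like the following sketch; the pool address is a placeholder:

--------------------@Host3 Controller

!! Illustrative sketch; confirm exact keys in the configuring-time guide
ntp_enable: true
ntp_servers: ['0.pool.ntp.org']    !! placeholder NTP server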

3.3 Cluster setup
Installation is done by running helper deployment Ansible scripts from the installation host (i.e., the Controller or another separate host/jumphost). In our case we are using the controller as the installation host (Host3).

All installation files are located on GitHub. Start by cloning the Converged Edge Experience Kits repository.

Install Git & Clone Converged Edge Experience Kit Repo


For lab purposes, we are using root for all setup and installation. Modifications may be needed if you plan to install as a different user in your environment.

--------------------@Host3 Controller

[root@host3 ~]# sudo yum install git


[root@host3 ~]# git clone --recursive https://github.com/open-ness/ido-converged-edge-experience-kits.git

Run Helper Precheck Script


Once the repo is cloned, you have the required files to run the Smart Edge Open installation.
--------------------@Host3 Controller
[root@host3 ~]# sudo sh /root/ido-converged-edge-experience-kits/scripts/ansible-precheck.sh

! Let the script complete without errors


!! If the script fails due to missing Ansible or Python3 packages, run the following:

[root@host3 ~]# yum install ansible
[root@host3 ~]# python3 -m pip install sh
[root@host3 ~]# python3 -m pip install PyYAML

Prepare Ansible playbook


Playbooks can be executed by running helper deployment scripts from the Ansible host. These scripts
require that the Edge Controller and Edge Nodes be configured on different hosts. This is done by
configuring the Ansible playbook inventory, as described later in this document.

The deployment scripts automate deployment of the cluster with different flavours.
These flavours determine which (open-source) components are deployed on the cluster.
In the example below we are deploying with the "minimal" flavour.
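
To see which flavours are available in your checkout, you can list the flavors directory of the cloned repo; the exact set varies by release, and "minimal" is the one used in this guide:

--------------------@Host3 Controller

[root@host3 ido-converged-edge-experience-kits]# ls flavors/

!! The listing shows the available flavours; "minimal" is the flavour used here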

--------------------@Host3 Controller
[root@host3 ido-converged-edge-experience-kits]# pwd
/root/ido-converged-edge-experience-kits
[root@host3 ido-converged-edge-experience-kits]# ls -lrt | grep inventory
lrwxrwxrwx. 1 root root 18 Sep 9 14:53 inventory.yml -> ceek/inventory.yml

! inventory.yml is the required file
! Edit with the required parameters as below:

all:
  vars:
    cluster_name: Viettel_Smartedge    !! Set cluster name as required
    flavor: minimal                    !! Set to a flavour from the flavors directory
    single_node_deployment: false      !! Set to false, as we have a 2-node cluster
    limit:
controller_group:
  hosts:
    host3:                             !! Set controller hostname
      ansible_host: 192.168.1.132      !! Controller IP address
      ansible_user: root               !! User which will exec the script (i.e. root)
edgenode_group:
  hosts:
    host1:                             !! Set edge-node hostname
      ansible_host: 192.168.1.131      !! Edge-node IP address
      ansible_user: root               !! User which will exec the script (i.e. root)
    node02.openness.org:               !! Remove references to nodes that are not
      ansible_host: 10.102.227.79      !! part of the deployment; the script will
      ansible_user: openness           !! try them and fail if they are not removed
edgenode_vca_group:
  hosts:
ptp_master:
  hosts:
ptp_slave_group:
  hosts:

Additionally, you can enable the Kubernetes Dashboard GUI.

[root@host3 all]# pwd


/root/ido-converged-edge-experience-kits/inventory/default/group_vars/all/

[root@host3 all]# ls -l
total 4
lrwxrwxrwx. 1 root root 64 Sep 9 14:53 10-open.yml ->
../../../../ceek/inventory/default/group_vars/all/10-default.yml
-rw-r--r--. 1 root root 2496 Sep 9 14:53 20-enhanced.yml

[root@host3 all]# vim 10-open.yml !! Edit the 10-open.yml file


k8s_device_plugins_enable: false
# Kubernetes Dashboard
kubernetes_dashboard_enable: true !! Change setting to true

Run Helper Script

--------------------@Host3 Controller

[root@host3 ido-converged-edge-experience-kits]# pwd


/root/ido-converged-edge-experience-kits
[root@host3 ido-converged-edge-experience-kits]# python3 deploy.py    !! Helper script
.  !! Note: during installation we may see some failures + retries; that is normal.
.  !! Wait for the script to finish
.
2021-09-18 00:23:04.670 INFO: Viettel_Smartedge network_edge.yml: succeed.
2021-09-18 00:23:05.673 INFO: ====================
2021-09-18 00:23:05.674 INFO: DEPLOYMENT RECAP:
2021-09-18 00:23:05.674 INFO: ====================
2021-09-18 00:23:05.674 INFO: DEPLOYMENT COUNT: 1
2021-09-18 00:23:05.677 INFO: SUCCESSFUL DEPLOYMENTS: 1

2021-09-18 00:23:05.677 INFO: FAILED DEPLOYMENTS: 0
2021-09-18 00:23:05.677 INFO: DEPLOYMENT "Viettel_Smartedge": SUCCESSFUL

!! At this point the Kubernetes cluster has been set up and you can check which
components have been installed

[root@host3 ido-converged-edge-experience-kits]# kubectl get pods -A

NAMESPACE     NAME                                                         READY  STATUS     RESTARTS  AGE
harbor        harbor-app-harbor-chartmuseum-7d7dc7fd6-rw7kb                1/1    Running    0         42m
harbor        harbor-app-harbor-clair-779df4555b-dwpwf                     2/2    Running    4         42m
harbor        harbor-app-harbor-core-9f9c98d7c-rjrn9                       1/1    Running    0         42m
harbor        harbor-app-harbor-database-0                                 1/1    Running    0         42m
harbor        harbor-app-harbor-jobservice-8b8466df6-49s9j                 1/1    Running    0         42m
harbor        harbor-app-harbor-nginx-5cc6f67785-v2nnx                     1/1    Running    0         42m
harbor        harbor-app-harbor-notary-server-77b647468-lgqjt              1/1    Running    0         42m
harbor        harbor-app-harbor-notary-signer-7d57d98c9f-kssz2             1/1    Running    0         42m
harbor        harbor-app-harbor-portal-fd5ff4bc9-5qjf9                     1/1    Running    0         42m
harbor        harbor-app-harbor-redis-0                                    1/1    Running    0         42m
harbor        harbor-app-harbor-registry-c4c7d67bb-wkn59                   2/2    Running    0         42m
harbor        harbor-app-harbor-trivy-0                                    1/1    Running    0         42m
kafka         cluster-entity-operator-55894648cb-58v44                     3/3    Running    0         24m
kafka         cluster-kafka-0                                              2/2    Running    0         24m
kafka         cluster-zookeeper-0                                          1/1    Running    0         27m
kafka         strimzi-cluster-operator-68b6d59f74-525qv                    1/1    Running    2         29m
kube-system   calico-kube-controllers-66548c7664-jml2s                     1/1    Running    0         42m
kube-system   calico-node-kprrd                                            1/1    Running    0         29m
kube-system   calico-node-vnhjw                                            1/1    Running    0         42m
kube-system   coredns-74ff55c5b-4l47q                                      1/1    Running    0         42m
kube-system   coredns-74ff55c5b-xj9vl                                      1/1    Running    0         42m
kube-system   descheduler-cronjob-1631896560-lmx4z                         0/1    Completed  0         4m24s
kube-system   descheduler-cronjob-1631896680-8f8jl                         0/1    Completed  0         2m24s
kube-system   descheduler-cronjob-1631896800-j9flb                         0/1    Completed  0         23s
kube-system   etcd-host3                                                   1/1    Running    0         42m
kube-system   kube-apiserver-host3                                         1/1    Running    0         42m
kube-system   kube-controller-manager-host3                                1/1    Running    0         42m
kube-system   kube-proxy-5mjbf                                             1/1    Running    0         42m
kube-system   kube-proxy-npxzb                                             1/1    Running    0         29m
kube-system   kube-scheduler-host3                                         1/1    Running    2         33m
kube-system   kubernetes-dashboard-55bb65dd99-h5bg9                        1/1    Running    0         36m
openness      certsigner-6cb79468b5-stj5z                                  1/1    Running    0         21m
openness      eaa-69c7bb7b5d-6nblw                                         1/1    Running    0         21m
openness      edgedns-sc7k6                                                1/1    Running    0         21m
openness      nfd-release-node-feature-discovery-master-689fd7d7c4-b6kwq   1/1    Running    0         35m
openness      nfd-release-node-feature-discovery-worker-fvg5w              1/1    Running    1         29m
telemetry     cadvisor-6dhgb                                               2/2    Running    0         29m
telemetry     collectd-fz226                                               2/2    Running    0         29m
telemetry     custom-metrics-apiserver-5677d4ff98-vt88j                    1/1    Running    0         34m
telemetry     grafana-6647c6c4d5-rd7lf                                     2/2    Running    0         32m
telemetry     otel-collector-649fb7bf69-6g6rm                              2/2    Running    0         34m
telemetry     prometheus-node-exporter-psqd2                               1/1    Running    3         29m
telemetry     prometheus-server-548ddd7959-n5rmh                           3/3    Running    0         34m
telemetry     telemetry-aware-scheduling-64cccd8f74-kh24v                  2/2    Running    0         33m
telemetry     telemetry-collector-certs-q89tw                              0/1    Completed  0         34m
telemetry     telemetry-node-certs-5gnhf                                   1/1    Running    0
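
You can also confirm that the controller and edge node have joined the cluster and are Ready; a quick check (output will vary with your environment):

--------------------@Host3 Controller

[root@host3 ido-converged-edge-experience-kits]# kubectl get nodes -o wide

!! host3 (control plane) and host1 (edge node) should both report STATUS "Ready"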

!! Note: in case a rollback is required, use the deploy script with the "-clean" option.
!! This will also trigger a reload of the Controller + Edge Node

Below is a view of the components installed as part of the helper script with certain flavours. Note that different flavours will deploy different components; the app provider (ISV + Viettel) needs to identify which components are required. The required components depend on the workloads; the CERAs provide guidance on this.

If the dashboard was enabled, it can be accessed over HTTPS on the controller node,
port 30553. A token needs to be retrieved to gain access to the dashboard.

--------------------@Host3 Controller

[root@host3 all]# kubectl describe secret -n kube-system $(kubectl get secret -n kube-system | grep 'kubernetes-dashboard-token' | awk '{print $1}') | grep 'token:' | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Im1ucTYwSGRPMGZWTzVEMXRWQWRCa25JNTA5SUJRUC05X0t2b1VSUDR
nZ1UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWN
lYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bn
Qvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1mcTU0MiIsImt1YmVybmV0ZXMua
W8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIs
Imt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwNTVmMTQ5LTl
jOTYtNGEyNC05ZTVmLWUyMTM0ZmI2M2U1OSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLX
N5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.lNIQbBW9RgujFImXpYHx_9nLVBdYLIVpn9VDwMoTVF1V
c37iMh-TVRantAPGbjWKFbL8H-
snoGRoIiASaXR05QzwusV2HkzkG_dQJ6aDFVR68QGBxDBVOXtOh9J4YkHa8lXKUQtQMlIEpBVFAF2Y1XiM6
AYwh-rYDFEo8RitiRDoBSrVwP01PP_zZHHUXzTJ8Z234IhU6sMBz3TJCaL-
D7d3YafcEVHfNQwtg0Ng56wyUTwKe1kNuDL-
tHhVoMUZfOwxNXFqZiHANGAdFYkoDslpoayQRdCzQnGoJ8deoOoDT1KQ7s4KJHzfP_jEjjxrwzwGX2cSKh-
Y4uEIeAn2Lg

!! Above is the token required to access the Dashboard via HTTPS
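
With the token in hand, the dashboard is reached from any machine with access to the controller's MGMT network; a minimal sketch, assuming the default self-signed certificate and the lab addressing above:

--------------------@Browser on MGMT network

https://192.168.1.132:30553

!! Accept the certificate warning, select the "Token" sign-in option, and paste
!! the token retrieved above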

3.4 Application Onboarding
