Smart Edge Open Implementation Guide
History
Version No.   Issue Date          Status   Reason for Change
V0.1          22 September 2021   Draft    1st Draft

Review

Reviewer's Details   Version No.   Date
Ranjan Mishra        0.1
Document Conventions
Note: Alerts readers to take note. Notes contain helpful suggestions or references to material not covered in the document.

Caution: Alerts readers to be careful. In this situation, you might do something that could result in equipment damage or loss of data.

Timesaver: Alerts the reader that they can save time by performing the action described in the paragraph affixed to this icon.

Tip: Alerts the reader that the information affixed to this icon will help them solve a problem. The information might not be troubleshooting or even an action, but it could be useful information similar to a Timesaver.
1 Introduction
1.1 Preface
The Smart Edge Open solution is built on top of Kubernetes, which is a production-grade container
orchestration environment. A typical Smart Edge Open-based deployment consists of a Smart Edge
Open Kubernetes Control Plane and a Smart Edge Open Edge Node.
➢ Smart Edge Open Kubernetes Control Plane: This node consists of microservices and
Kubernetes extensions, enhancements, and optimizations that provide the functionality to
configure one or more Smart Edge Open Edge Nodes and the application services that run on
those nodes (Application Pod Placement, Configuration of Core Network, etc.).
➢ Smart Edge Open Edge Node: This node consists of microservices and Kubernetes
extensions, enhancements, and optimizations that are needed for edge application and
network function deployments. It also consists of APIs that are often used for the discovery of
application services.
➢ Another key ingredient is the 4G/5G core network functions that enable a private or public
edge. Smart Edge Open uses reference network functions to validate this end-to-end edge
deployment. This is key to understanding and measuring edge Key Performance Indicators
(KPIs).
1.2 Architecture
The Smart Edge Open Kubernetes Control Plane consists of Vanilla Kubernetes Control Plane
components along with Smart Edge Open microservices that interact with the Kubernetes Control
Plane using Kubernetes defined APIs.
The following are the high-level features of the Smart Edge Open Kubernetes Control Plane building
blocks:
• Configuration of the hardware platform that hosts applications and network functions
• Configuration of network functions (4G, 5G, and WiFi*)
• Detection of various hardware and software capabilities of the edge cluster and their use for
scheduling applications and network functions
• Setup of network and DNS policies for applications and network functions
• Enable collection of hardware infrastructure, software, and application monitoring
• Expose edge cluster capabilities northbound to a controller
The Smart Edge Open Edge Node consists of Vanilla Kubernetes Node components along with Smart
Edge Open Building Blocks that interact with the Kubernetes node using Kubernetes defined APIs.
The following are the high-level features of the Smart Edge Open Kubernetes node building blocks:
• Container runtime (Docker*) and virtualization infrastructure (libvirt*, Open vSwitch (OVS)*,
etc.) support
• Platform pods consisting of services that enable the configuration of a node for a particular
deployment, device plugins enabling hardware resource allocation to an application pod, and
detection of interfaces and reporting to the Control Plane.
• System pods consisting of services that enable reporting the hardware and software features
of each node to the Control Plane, resource isolation service for pods, and providing a DNS
service to the cluster
• Telemetry consisting of services that enable hardware, operating system, infrastructure, and
application-level telemetry for the edge node
• Support for a real-time kernel for low-latency applications and network functions such as 4G and
5G base stations, as well as a non-real-time kernel
The Smart Edge Open Network functions are the key 4G and 5G functions that enable edge cloud
deployment. Smart Edge Open provides these key reference network functions and the configuration
agent in the Intel Distribution of Smart Edge Open.
1.3 Scope
This document is intended for use by Viettel teams starting to use the Smart Edge Open platform to
onboard applications in their cloud-native environments. It gives an overview of the
implementation and architecture of the platform. Viettel will need to provide the requirements and a
solution overview for the workloads / use cases they intend to onboard or implement.
1.4 Assumptions
The following assumptions are made:
The user of this document has the necessary understanding of Linux administration and some
knowledge of the technologies described in the document.
The user should also have access to equipment in a lab / sandbox environment to follow this guide.
4. Application Onboarding: https://github.com/open-ness/specs/blob/master/doc/applications-onboard/network-edge-applications-onboarding.md
2 Lab Setup Overview
Below we see 4 clusters with 3 different variations. These clusters can be considered as simulating
different edge locations.
1) Cluster 1 (On Prem Edge for Private Wireless and IoT Workloads)
- 1 X Controller Node
- 2 X Edge Node
2) Cluster 2 - Single Node Cluster (Access Edge for vRAN / O-RAN Workloads)
- Combined Controller & Edge node in a single node
Deployment flow
There are 3 main sections to setting up the Smart Edge Open Cluster.
3 Implementation Plan
Step No.   Task
1          Prerequisites and Cluster Infra preparation
2          Smart Edge Open Cluster Setup
3          Application Onboarding
--------------------@Host3 Controller
[root@host3 ~]# cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
--------------------@Host 1 Edge-Node
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Hostname Specification
Hosts for the Edge Controller (Kubernetes control plane) and Edge Nodes (Kubernetes nodes) must
have proper and unique hostnames (i.e., not localhost). This hostname must be specified in /etc/hosts
(refer to Setup static hostname).
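The hostname steps above can be sketched as follows. The hostnames and IP addresses (host3/192.168.1.132 for the controller, host1/192.168.1.131 for the edge node) are taken from this lab's inventory; the snippet writes to a scratch file by default so it is safe to dry-run, and you would point HOSTS_FILE at /etc/hosts (as root) on a real node:

```shell
# On each host, first set a proper, unique static hostname, e.g.:
#   hostnamectl set-hostname host3    # on the controller
#   hostnamectl set-hostname host1    # on the edge node
# Then append entries for all hosts so management is easier.
# HOSTS_FILE defaults to a scratch file here; use HOSTS_FILE=/etc/hosts
# (as root) on a real node.
HOSTS_FILE="${HOSTS_FILE:-./hosts.example}"
cat <<'EOF' >> "$HOSTS_FILE"
192.168.1.132 host3
192.168.1.131 host1
EOF
cat "$HOSTS_FILE"
```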
--------------------@Host3 Controller
! Also add local entries for the other hosts (host1, host2, ..., hostn) so management will be easier
! Reload for the changes to take effect
--------------------@Host 1 Edge-Node
SSH Keygen & Exchange
Exchanging SSH keys between hosts permits a password-less SSH connection from the host running
Ansible to the hosts being set up.
In the first step, the host running Ansible (usually the Edge Controller host) must have a generated
SSH key. The SSH key can be generated by executing ssh-keygen and obtaining the key from the
output of the command.
The following is an example of a key generation, in which the key is placed in the default directory
(/root/.ssh/id_rsa), and an empty passphrase is used.
--------------------@Host3 Controller
# ssh-keygen
In the second step, the generated key must be copied to every host in the inventory, including the
host on which the key was generated (i.e., the Controller) if it appears in the inventory. This is done
by running ssh-copy-id.
--------------------@Host3 Controller
# ssh-copy-id [email protected]
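The two steps can be combined into one small script. The `-N ""` flag produces the empty passphrase described above, and the host list (host3, host1) is this lab's inventory; adjust it to your own:

```shell
# Generate the controller's key pair non-interactively (empty passphrase),
# skipping generation if a key already exists at KEY_FILE.
KEY_FILE="${KEY_FILE:-$HOME/.ssh/id_rsa}"
mkdir -p "$(dirname "$KEY_FILE")"
[ -f "$KEY_FILE" ] || ssh-keygen -t rsa -N "" -f "$KEY_FILE" -q

# Copy the public key to every host in the inventory, including the
# controller itself (uncomment once the hosts are reachable):
# for h in host3 host1; do ssh-copy-id -i "${KEY_FILE}.pub" "root@${h}"; done
```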
Proxy Setup
Proxy was not used in our test lab. If there is a proxy requirement in your environment,
please refer to the following link:
https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-cluster-setup.md#setting-proxy
Configuring Time
To allow for correct certificate verification, Smart Edge Open requires system time to be synchronized
among all nodes and controllers in a system.
Smart Edge Open provides the possibility to synchronize a machine's time with an NTP server.
In this lab setup we used local time. If NTP is required, changes need to be made in the
.yml file. Instructions are at the link below:
https://github.com/open-ness/specs/blob/master/doc/getting-started/openness-cluster-setup.md#configuring-time
Note that the GitHub repository needs to be cloned first for the required .yml file; see the
instructions in section 3.3.1.
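Whichever time source is used, a quick consistency check on each node is worthwhile before deployment. The commands below are generic CentOS 7 tooling (chronyd is the default time daemon there) and are guarded so they simply skip where a tool is absent:

```shell
# Show system clock / NTP sync status, if the tools are present.
command -v timedatectl >/dev/null 2>&1 && timedatectl || true
command -v chronyc >/dev/null 2>&1 && chronyc tracking || true
# Compare this across all nodes; the outputs should match to within a second.
date -u
```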
3.3 Cluster setup
Installation is done by running helper Ansible deployment scripts from the installation host (i.e., the
Controller or another separate host/jumphost). In our case we are using the controller as the
installation host (Host3).
All installation files are located on GitHub. Start by cloning the Converged Edge Experience Kits.
--------------------@Host3 Controller
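A sketch of the clone step is below. The repository URL is an assumption inferred from the directory name /root/ido-converged-edge-experience-kits shown later (the IDO kits may be access-controlled); substitute the repository location provided with your distribution:

```shell
# Clone the Converged Edge Experience Kits onto the installation host (Host3).
# NOTE: the URL is illustrative, not confirmed by this document.
cd /root
git clone https://github.com/smart-edge-open/ido-converged-edge-experience-kits.git
cd ido-converged-edge-experience-kits
```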
The deployment scripts automate deployment of the cluster with different flavours.
These flavours determine which (open-source) components are deployed on the cluster.
In the example below we are deploying with the "minimal" flavour.
--------------------@Host3 Controller
[root@host3 ido-converged-edge-experience-kits]# pwd
/root/ido-converged-edge-experience-kits
[root@host3 ido-converged-edge-experience-kits]# ls -lrt | grep inventory
lrwxrwxrwx. 1 root root 18 Sep 9 14:53 inventory.yml -> ceek/inventory.yml
all:
  vars:
    cluster_name: Viettel_Smartedge    # Set cluster edge name as required
    flavor: minimal                    # Set to a flavor from the flavors directory
    single_node_deployment: false      # Set to false, as we have a 2-node cluster
    limit:
controller_group:
  hosts:
    host3:                             # Controller hostname
      ansible_host: 192.168.1.132      # Controller IP address
      ansible_user: root               # User which will execute the script (i.e., root)
edgenode_group:
  hosts:
    host1:                             # Edge-node hostname
      ansible_host: 192.168.1.131      # Edge-node IP address
      ansible_user: root               # User which will execute the script (i.e., root)
[root@host3 all]# ls -l
total 4
lrwxrwxrwx. 1 root root 64 Sep 9 14:53 10-open.yml ->
../../../../ceek/inventory/default/group_vars/all/10-default.yml
-rw-r--r--. 1 root root 2496 Sep 9 14:53 20-enhanced.yml
--------------------@Host3 Controller
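The deployment itself is started from the kit's root directory on the installation host. The helper script name below (deploy.py) is an assumption based on common experience-kit conventions; the document only refers to "the deploy script", so check the repository README for the exact entry point:

```shell
# Run the Ansible-based deployment from the installation host (Host3).
# Script name is assumed; see the kit's README.
cd /root/ido-converged-edge-experience-kits
python3 deploy.py
```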
2021-09-18 00:23:05.677 INFO: FAILED DEPLOYMENTS: 0
2021-09-18 00:23:05.677 INFO: DEPLOYMENT "Viettel_Smartedge": SUCCESSFUL
!! At this point the Kubernetes cluster has been set up and you can check which
components have been installed
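For example, a pod listing like the one below can be produced on the controller with:

```shell
# List all pods in every namespace, with readiness, status, restarts, and age.
kubectl get pods --all-namespaces
```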
NAMESPACE         NAME                                                         READY   STATUS      RESTARTS   AGE
kube-system       kube-scheduler-host3                                         1/1     Running     2          33m
kube-system       kubernetes-dashboard-55bb65dd99-h5bg9                        1/1     Running     0          36m
Smart Edge Open   certsigner-6cb79468b5-stj5z                                  1/1     Running     0          21m
Smart Edge Open   eaa-69c7bb7b5d-6nblw                                         1/1     Running     0          21m
Smart Edge Open   edgedns-sc7k6                                                1/1     Running     0          21m
Smart Edge Open   nfd-release-node-feature-discovery-master-689fd7d7c4-b6kwq   1/1     Running     0          35m
Smart Edge Open   nfd-release-node-feature-discovery-worker-fvg5w              1/1     Running     1          29m
telemetry         cadvisor-6dhgb                                               2/2     Running     0          29m
telemetry         collectd-fz226                                               2/2     Running     0          29m
telemetry         custom-metrics-apiserver-5677d4ff98-vt88j                    1/1     Running     0          34m
telemetry         grafana-6647c6c4d5-rd7lf                                     2/2     Running     0          32m
telemetry         otel-collector-649fb7bf69-6g6rm                              2/2     Running     0          34m
telemetry         prometheus-node-exporter-psqd2                               1/1     Running     3          29m
telemetry         prometheus-server-548ddd7959-n5rmh                           3/3     Running     0          34m
telemetry         telemetry-aware-scheduling-64cccd8f74-kh24v                  2/2     Running     0          33m
telemetry         telemetry-collector-certs-q89tw                              0/1     Completed   0          34m
telemetry         telemetry-node-certs-5gnhf                                   1/1     Running     0
!! Note: in case a rollback is required, use the deploy script with the "-clean" option.
!! This will also trigger a reload of the Controller + Edge node
Below is a view of the components installed as part of the helper script with certain flavours.
Note that different flavours will deploy different components; the required components need to be
identified by the App provider (ISV + Viettel).
The components depend on what is required by the workloads. The CERAs provide guidance on this.
If the dashboard was enabled, it can be accessed over HTTPS on the controller node at
port 30553. A token needs to be generated to gain access to the dashboard.
--------------------@Host3 Controller
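One common way to obtain a login token is to read it from a service-account secret. The secret name pattern below (an "admin" service account in kube-system) is an assumption; it depends on how the dashboard was configured in your flavour:

```shell
# Print the bearer token of the dashboard service account (name assumed).
kubectl -n kube-system describe secret \
  "$(kubectl -n kube-system get secret | grep admin | awk '{print $1}')"
```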
3.4 Application Onboarding