Inception Server Deployment Guide
Gurpreet S, TME MIG
Introduction
The Inception Server is used to deploy the SMI cluster. It packages Ansible playbooks, which makes the deployment of SMI with all add-ons a one-click operation. The server also runs ConfD, which offers user- and machine-friendly interfaces for SMI deployment; the main interfaces offered are NETCONF, RESTCONF, and CLI. In this tutorial we will deploy the Inception Server in a VMware ESXi environment. VMware vSphere Hypervisor (ESXi) >= 6.5 has been fully tested and meets performance benchmarks.
Summary Steps for Installation of Inception Server
The installation consists of two steps: deploying the Inception VM from the base ISO image (Step 1) and installing the SMI Cluster Deployer from the tarball (Step 2). You will need the following files:
Customized Base Ubuntu ISO image file (provided by SMI), e.g. smi-install-disk.iso
Cluster Deployer tarball, e.g. cluster-deployer-2024.02.1.14.tar
Inception VM Dimensioning
The following are the minimum hardware requirements for deploying Inception VM using the SMI Base Image ISO
on VMware:
CPU: 8 vCPU
Memory: 24 G
Storage: 200 GB
NIC interfaces: The number of NIC interfaces required is based on the K8s network and VMware host network
reachability. Refer to cnBNG Concepts Part-1 document for more details on networking.
The hardware requirements for the Inception VM can be reduced for non-production deployments:
CPU: 4 vCPU
Memory: 16 G
Storage: 100 GB
NIC interfaces: The number of NIC interfaces required is based on the K8s network and VMware host network
reachability. Refer to cnBNG Concepts Part-1 document for more details on networking.
Step 1: Inception VM Deployment using Base ISO Image provided by SMI
Download the SMI Base ISO image. This is the image that will be used for the Ubuntu VM deployment; it is a customized Ubuntu build for SMI. Now follow the steps below to deploy the Ubuntu VM using the downloaded ISO file:
1. Download the SMI Base ISO file and copy the file to the VM Datastore.
2. In vCenter, select Create a New Virtual Machine.
3. Specify the name of the VM and select the Datacenter.
4. Next, select the host for the VM.
5. Select the datastore.
6. Select compatibility (ESXi 6.7 or later).
7. Select the guest OS: Guest Family: Linux, Guest OS Version: Ubuntu Linux (64-bit).
8. Customize Hardware: vCPU: 8, Memory: 16 GB, New Hard disk: 100 GB (this will depend on the dimensioning selected). Under Network, select the management network ("VM Network" in most cases).
9. Click New CD/DVD Drive and do the following: select the Datastore ISO File option to boot from the SMI Base .iso file, browsing to the location of the .iso file on the datastore set in step 1. In the Device Status field, select the Connect at power on checkbox.
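For reference, the same VM can also be created from the command line. This is a minimal sketch using the open-source govc CLI, which is not part of the original guide; the network, ISO path, and VM name are placeholders to adapt to your environment:

# Hedged sketch: create the Inception VM with govc, assuming
# GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD point at your vCenter.
# 8 vCPU, 24 GB RAM, 200 GB disk matches the production sizing above.
govc vm.create \
  -c=8 \
  -m=24576 \
  -g=ubuntu64Guest \
  -disk=200GB \
  -net="VM Network" \
  -iso="smi-install-disk.iso" \
  inception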
After the VM boots up, log in to the VM (user: cloud-user, password: Cisco_123). You will be prompted to change the password immediately; the new password must be 14 characters long.
Now set up networking by editing the /etc/netplan/50-cloud-init.yaml file. Here is a sample file config:
network:
  ethernets:
    eno1:
      dhcp4: true
    enp0s3:
      dhcp4: true
    ens160:
      dhcp4: false
      addresses:
        - {inception-VM-IP}/{your-subnet}
      gateway4: {your-gateway-IP}
      nameservers:
        search: [{your-domain}]
        addresses: [{your-dns-server}]
    ens3:
      dhcp4: true
    eth0:
      dhcp4: true
  version: 2
Note 1: Sometimes the interface is not shown as ens160; in that case it is a good idea to search for the interface using the ifconfig -a command. Generally the lower ens number is the first NIC attached to the VM, and the higher number is the last.
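After editing the file, the configuration has to be applied. A minimal sketch of the standard Ubuntu netplan workflow (this step is implied rather than spelled out above):

sudo netplan apply     # apply the edited 50-cloud-init.yaml
ip addr show ens160    # verify the static address is now on the NIC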
Set the desired hostname for the VM, then log out of the VM and log in again to see that the hostname changes are reflected.
Make the hostname persistent across reloads by adding "preserve_hostname: true" to the /etc/cloud/cloud.cfg file if it is not there already, or change the setting from false to true if already present.
Replace the default hostname for the VM with the one you set (in the step above) inside the /etc/hosts file. This avoids hostname resolution errors.
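A minimal sketch of these hostname steps, assuming the hostname "inception":

sudo hostnamectl set-hostname inception
# keep cloud-init from resetting the name on reboot
sudo sed -i 's/preserve_hostname: false/preserve_hostname: true/' /etc/cloud/cloud.cfg
# replace the default hostname entry so local resolution keeps working
sudo sed -i 's/^127\.0\.1\.1.*/127.0.1.1 inception/' /etc/hosts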
Step 2: Installation of SMI Cluster Deployer using Tarball
Make a temporary folder and copy the offline SMI cluster deployer products tarball into it:
cloud-user@inception:/var/tmp/offline-cm$ ls -altr
total 2984112
drwxrwxrwt 6 root root 4096 Apr 12 20:57 ..
-rw-rw-r-- 1 cloud-user cloud-user 3055718400 Apr 12 21:08 cluster-deployer-2020-04-12.tar
drwxrwxr-x 2 cloud-user cloud-user 4096 Apr 12 21:50 .
Note: Sometimes there could be a disk space problem if files are kept in the /var/tmp folder. In that case the temporary folder for the install files can be created in the user directory itself, i.e. /home/cloud-user/.
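The tarball is then extracted in place with tar xvf (see the reader comment at the end of this page about adding -p to preserve file permissions); for the file listed above:

cloud-user@inception:/var/tmp/offline-cm$ tar xvf cluster-deployer-2020-04-12.tar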
After tar xvf finishes, the cluster deployer files will have been extracted into a "data" folder in the same temporary folder where the tarball was copied:
cloud-user@inception:/var/tmp/offline-cm/data$ ls -altr
total 20
drwxr-xr-x 3 cloud-user cloud-user 4096 Apr 12 21:04 docker
drwxrwxr-x 2 cloud-user cloud-user 4096 Apr 12 21:07 charts
drwxrwxr-x 3 cloud-user cloud-user 4096 Apr 12 21:07 deployer-inception
drwxr-xr-x 5 cloud-user cloud-user 4096 Apr 12 21:07 .
drwxrwxr-x 3 cloud-user cloud-user 4096 Apr 12 22:00 ..
The deploy utility, which installs the SMI Cluster Deployer, is located inside the data/deployer-inception folder. We will be using this utility to deploy the SMI Cluster Deployer:
cloud-user@inception:/var/tmp/offline-cm/data$ cd deployer-inception/
cloud-user@inception:/var/tmp/offline-cm/data/deployer-inception$ ls -altr
total 36
-rwxrwxr-x 1 cloud-user cloud-user 102 Apr 12 21:07 stop
-rwxrwxr-x 1 cloud-user cloud-user 5636 Apr 12 21:07 start
-rwxrwxr-x 1 cloud-user cloud-user 6297 Apr 12 21:07 deploy
drwxrwxr-x 3 cloud-user cloud-user 4096 Apr 12 21:07 compose-offline
-rw-rw-r-- 1 cloud-user cloud-user 60 Apr 12 21:07 README.md
drwxr-xr-x 5 cloud-user cloud-user 4096 Apr 12 21:07 ..
drwxrwxr-x 3 cloud-user cloud-user 4096 Apr 12 21:07 .
Run the deploy command as shown below. The external-ip is the IP address from the K8s network configured on the Inception VM; it will be used to host your ISO and your offline file tars to be downloaded to the remote hosts.
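The exact flags vary by release, so treat the following as a hedged sketch rather than the definitive syntax; --external-ip and --first-boot-password are assumed here based on SMI documentation, and the utility's README.md should be consulted for the syntax of your release:

cloud-user@inception:/var/tmp/offline-cm/data/deployer-inception$ sudo ./deploy --external-ip <K8s-network-IP> --first-boot-password <admin-password>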
Note: Rerunning the deploy utility does not work or fix issues with a previous deployment. It is always a good idea to clean up the containers manually before attempting to rerun ./deploy.
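A hedged sketch of that manual cleanup; note that the second command force-removes every container on the VM, so it is only appropriate on a dedicated Inception VM:

docker ps -a                   # inspect what the previous ./deploy run left behind
docker rm -f $(docker ps -aq)  # force-remove all leftover containers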
Verifications
We can log in to the SMI Cluster Deployer CLI using "ssh admin@localhost -p 2022" on the Inception VM:
cloud-user@inception:~$ ssh admin@localhost -p 2022
admin@localhost's password:
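After authentication, the deployer presents its ConfD-based CLI. As an illustration only (the banner and prompt text are assumptions and vary by release), a session might look like:

admin connected from 127.0.0.1 using ssh on inception
[inception] SMI Cluster Deployer# show running-config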
We can also connect to the NETCONF interface of the SMI Deployer through SSH at the default port 830:
cloud-user@inception:~$ ssh admin@localhost -p 830 -s netconf
admin@localhost's password:
<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>urn:ietf:params:netconf:base:1.0</capability>
<capability>urn:ietf:params:netconf:base:1.1</capability>
<capability>urn:ietf:params:netconf:capability:confirmed-commit:1.1</capability>
<capability>urn:ietf:params:netconf:capability:confirmed-commit:1.0</capability>
<capability>urn:ietf:params:netconf:capability:candidate:1.0</capability>
<capability>urn:ietf:params:netconf:capability:rollback-on-error:1.0</capability>
<capability>urn:ietf:params:netconf:capability:url:1.0?scheme=ftp,sftp,file</capability>
<capability>urn:ietf:params:netconf:capability:validate:1.0</capability>
<capability>urn:ietf:params:netconf:capability:validate:1.1</capability>
<capability>urn:ietf:params:netconf:capability:xpath:1.0</capability>
<capability>urn:ietf:params:netconf:capability:notification:1.0</capability>
<capability>urn:ietf:params:netconf:capability:interleave:1.0</capability>
<capability>urn:ietf:params:netconf:capability:partial-lock:1.0</capability>
<capability>urn:ietf:params:netconf:capability:with-defaults:1.0?basic-mode=explicit&also-supported=repo
<capability>urn:ietf:params:netconf:capability:yang-library:1.0?revision=2019-01-04&module-set-id=4677c2
<capability>urn:ietf:params:netconf:capability:yang-library:1.1?revision=2019-01-04&content-id=4677c22a6
<capability>http://tail-f.com/ns/netconf/actions/1.0</capability>
<capability>http://tail-f.com/ns/aaa/1.1?module=tailf-aaa&revision=2018-09-12</capability>
<capability>http://tail-f.com/ns/common/query?module=tailf-common-query&revision=2017-12-15</capability>
<capability>http://tail-f.com/ns/confd-progress?module=tailf-confd-progress&revision=2020-06-29</capabil
<capability>http://tail-f.com/ns/kicker?module=tailf-kicker&revision=2020-11-26</capability>
<capability>http://tail-f.com/ns/netconf/query?module=tailf-netconf-query&revision=2017-01-06</capabilit
<capability>http://tail-f.com/ns/webui?module=tailf-webui&revision=2013-03-07</capability>
<capability>http://tail-f.com/yang/acm?module=tailf-acm&revision=2013-03-07</capability>
<capability>http://tail-f.com/yang/common?module=tailf-common&revision=2020-11-26</capability>
<capability>http://tail-f.com/yang/common-monitoring?module=tailf-common-monitoring&revision=2019-04-09<
<capability>http://tail-f.com/yang/confd-monitoring?module=tailf-confd-monitoring&revision=2019-10-30</c
<capability>http://tail-f.com/yang/netconf-monitoring?module=tailf-netconf-monitoring&revision=2019-03-2
<capability>http://tail-f.com/yang/xsd-types?module=tailf-xsd-types&revision=2017-11-20</capability>
<capability>urn:ietf:params:xml:ns:netconf:base:1.0?module=ietf-netconf&revision=2011-06-01&features
<capability>urn:ietf:params:xml:ns:netconf:partial-lock:1.0?module=ietf-netconf-partial-lock&revision=20
<capability>urn:ietf:params:xml:ns:yang:iana-crypt-hash?module=iana-crypt-hash&revision=2014-08-06&f
<capability>urn:ietf:params:xml:ns:yang:ietf-inet-types?module=ietf-inet-types&revision=2013-07-15</capa
<capability>urn:ietf:params:xml:ns:yang:ietf-netconf-acm?module=ietf-netconf-acm&revision=2018-02-14</ca
<capability>urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring?module=ietf-netconf-monitoring&revision=
<capability>urn:ietf:params:xml:ns:yang:ietf-netconf-notifications?module=ietf-netconf-notifications&rev
<capability>urn:ietf:params:xml:ns:yang:ietf-netconf-with-defaults?module=ietf-netconf-with-defaults&rev
<capability>urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring?module=ietf-restconf-monitoring&revisio
<capability>urn:ietf:params:xml:ns:yang:ietf-x509-cert-to-name?module=ietf-x509-cert-to-name&revision=20
<capability>urn:ietf:params:xml:ns:yang:ietf-yang-metadata?module=ietf-yang-metadata&revision=2016-08-05
<capability>urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&revision=2013-07-15</capa
</capabilities>
<session-id>278399</session-id></hello>]]>]]>
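Per the NETCONF protocol (RFC 6241), the server now waits for the client's own hello before accepting RPCs. A minimal client hello to paste into the session, terminated by the ]]>]]> framing delimiter:

<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
  </capabilities>
</hello>
]]>]]>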
Comments
Alex Y (2 years ago):
Hi Gurpreet,
First of all, great guide!
For a manual/OpenStack deployment, when the AIO VM is manually instantiated, I had to add a few more steps to make everything work correctly.
Before running the "deploy" command, the "docker" group has to be created and the user should be added to this group.
tar xvf "tarball.tgz" extracts files with the default permissions, which with a minimalistic cloud-init appeared to be 700. I ended up with multiple synchronisation errors and had to add the -p flag to maintain file permissions. The command would look like this:
tar xvfp <cluster-deployer-blah-blah.tar>
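For completeness, a minimal sketch of the docker-group fix the comment describes (these are the standard Docker post-install steps, not part of the original guide):

sudo groupadd docker                  # create the group if it does not exist
sudo usermod -aG docker cloud-user    # add the install user to it
# log out and back in for the group membership to take effect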