VPC-SI System Architecture
Introduction to VPC-SI
This chapter introduces Cisco Virtualized Packet Core - Single Instance (VPC-SI). VPC-SI addresses the
need for virtualized cloud architectures that enable the accelerated deployment of new applications and
services in the mobile market.
Product Description
VPC-SI consolidates the operations of physical Cisco ASR 5x00 chassis running StarOS into a single Virtual
Machine (VM) able to run on commercial off-the-shelf (COTS) servers. Each VPC-SI VM operates as an
independent StarOS instance, incorporating the management and session processing capabilities of a physical
chassis.
This chapter describes the StarOS VPC-SI architecture and interaction with external devices.
• Mobile Control Plane - PCRF (Policy and Charging Rules Function), application gateway, analytics,
services orchestration, abstraction and control functions
• Small cell gateways
◦ HNBGW - Home NodeB Gateway
◦ HeNBGW - evolved Home NodeB Gateway
◦ SAMOG - S2a Mobility over GTP; combines CGW (Converged Access Gateway) and Trusted WLAN
AAA Proxy (TWAP) functions on a single service node
Mobile Cloud Network (MCN) is a network infrastructure that includes Infrastructure as a Service (IaaS), the
orchestration mechanisms, analytics mechanisms etc., upon which the VPC-SI as well as other services are
deployed.
VM Interconnect Architecture
The figure below shows the basic L2/L3 interconnection supported by VPC-SI.
In the figure above, a virtual switch is embedded within the hypervisor to support SDN L2 capabilities across
the data center. The virtual switch is interconnected to other virtual switches using 802.1Q trunks (VLANs).
Typically, the virtual switch is a dynamically loaded kernel module.
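As an illustration only, when KVM is the hypervisor and Open vSwitch provides the embedded virtual switch,
the trunking described above can be sketched as follows. Bridge, port, and VLAN names are hypothetical; on
VMware the equivalent configuration is applied to the vSwitch and its port groups.
# Minimal sketch, assuming Open vSwitch on the KVM host; names and VLAN IDs are examples only.
ovs-vsctl add-br br-vpc                        # virtual switch embedded in the hypervisor
ovs-vsctl add-port br-vpc eth1                 # physical uplink toward the other virtual switches
ovs-vsctl set port eth1 trunks=100,200,300     # 802.1Q VLANs carried on the trunk
ovs-vsctl set port vnet0 tag=100               # place a VM's service port in VLAN 100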
Standalone Instance
VPC-SI is essentially StarOS running within a Virtual Machine (VM) on a COTS platform. It can be used as
a stand-alone single VM within an enterprise, remote site, or customer data center. Alternatively, VPC-SI can
be integrated as a part of a larger service provider orchestration solution.
The Single Instance architecture is best suited for low capacity scenarios. Scaling the VPC-SI Virtual Network
Function (VNF) requires significant network-level configuration for certain VNF types (such as P-GW,
S-GW, MME, PCRF, Firewall and NAT). For example, if a new VPC-SI P-GW is added or removed, various
Diameter peers must be configured with this information, and DNS must be provisioned or de-provisioned
accordingly.
VPC-SI interacts only with the supported hypervisors, KVM (Kernel-based Virtual Machine) and VMware ESXi.
It has little or no knowledge of physical devices.
Typically, VPC-SI should be deployed in Interchassis Session Recovery (ICSR) pairs to provide physical
redundancy in case of hardware or hypervisor failure.
Each VPC-SI VM takes on the roles of an entire StarOS system. The only interfaces exposed outside the VM
are those for external management and service traffic. Each VM is managed independently.
Each VPC-SI VM performs the following StarOS functions:
• Controller tasks
• Out-of-band management for CLI and Logging (vSerial and vKVM)
• Local context vpnmgr
• Local context management (vNICs)
• System boot image and configuration storage on vHDD
• Record storage on vHDD
• NPU simulation via fastpath and slowpath
• Non-local context (vNICs, 1 to 12)
• Demux and vpnmgr for session processing
• Crypto processing
Feature Set
• It is critical to confirm that the interfaces listed in the supported hypervisors line up with the KVM BR
group or VMware vSwitch in the order in which you want them to match the VPC-SI interfaces.
Note There is no guarantee that the order of the vNICs as listed in the hypervisor CLI/GUI is the same as
how the hypervisor offers them to VPC-SI. On initial setup you must use the show hardware CLI command
to walk through the MAC addresses shown in the hypervisor's vNIC configuration and match them up
with the MAC addresses learned by VPC-SI. This confirms that the VPC-SI interfaces are connected
to the intended BR group/vSwitch.
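Under KVM, for example, the hypervisor-side MAC addresses can be listed with virsh and then compared
against the VPC-SI output; the domain name below is hypothetical.
virsh domiflist qvpc-si        # lists each vNIC with its source bridge/network, model, and MAC address
# On the VPC-SI console, compare those MAC addresses with the interfaces reported by:
#   [local]mySystem# show hardware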
Encryption
VMs within a VPC-SI instance perform software-based encryption and tunneling of packets (as opposed to
the higher-throughput hardware-based services). Call models that make heavy use of encryption for bearer
packets or have significant PKI (Public Key Infrastructure) key generation rates may require significant
compute resources.
Security
Security of external traffic including tunneling, encryption, Access Control Lists (ACLs), context separation,
and user authentication function as on existing StarOS platforms. User ports and interfaces on the CFs and
SFs are protected through StarOS CLI configuration.
The virtual system adds additional security concerns for the customer because network communication travels
over the DI network on data center equipment.
The DI network must be isolated from other hosts within the datacenter by limiting membership in the system
network's VLAN to VMs within that specific VPC-SI instance. Unauthorized access to the DI network through
other hosts being inadvertently added to that network or the compromise of a router, switch or hypervisor
could disrupt or circumvent the security measures of StarOS. Such disruptions can result in failures, loss of
service, and/or exposure of control and bearer packets. Properly securing access to the DI network is beyond
the control of StarOS.
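As an illustration only, on a KVM host using Open vSwitch the DI-network port of a VM can be restricted to
a dedicated VLAN as sketched below; the port name and VLAN ID are hypothetical, and equivalent isolation
must also be enforced on the physical switches and routers.
ovs-vsctl set port vnet1 tag=2110          # DI-network vNIC of a VPC VM, placed in access VLAN 2110
ovs-vsctl --columns=name,tag list port     # review which ports are members of that VLAN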
Communication between DI network component VMs (for example, CF and SF) is possible only via authentication
over externally supplied SSH keys. In addition, the system enforces public/private key-based SSH authentication
for logins within the DI network. No passwords, keys, or LI information are stored or sent in clear text.
If an operator requires physical separation of networks, such as management versus bearer versus LI (Lawful
Intercept), then physical separation of the DI network should also be done since it carries sensitive data. In a
virtualized environment, the physical separation of networks may not be possible or practical. Operators that
have these requirements may need to qualify their hypervisor and infrastructure to confirm that it will provide
sufficient protection for their needs.
Platform Requirements
The virtual system relies on the underlying hardware and hypervisor for overall system redundancy and
availability.
The hardware and hypervisor should provide:
ವ Redundant hardware components where practical (such as power supplies, disks)
ವ Redundant network paths (dual fabric/NICs, with automatic failover)
ವ Redundant network uplinks (switches, routers, etc.)
High availability can only be achieved if the underlying infrastructure (hosts, hypervisor, and network) can
provide availability and reliability that exceeds expected values. The system is only as reliable as the
environment on which it runs.
Interchassis Session Recovery (ICSR) is also recommended to improve availability and recovery time in the
case of a non-redundant hardware failure (such as CPU, memory, motherboard, hypervisor software). ICSR
provides redundancy at the session level for gateways only.
ICSR Support
VPC-SI supports ICSR between two instances for services that support ICSR in the StarOS software release.
When more than one service type is in use, only those services that support ICSR will be able to use ICSR.
ICSR supports redundancy for site/row/rack/host outages, and major software faults. To do so, the two instances
should be run on non-overlapping hosts and network interconnects. ICSR is supported only between
like-configured instances. ICSR between a VPC-SI instance and another type of platform (such as an ASR
5x00) is not supported.
L3 ICSR is supported.
For additional information, refer to the Interchassis Session Recovery chapter in this guide.
Hypervisor Requirements
VPC-SI has been qualified to run under the following hypervisors:
• Kernel-based Virtual Machine (KVM) - QEMU emulator 2.0: The VPC-SI StarOS installation build
includes a libvirt XML template and ssi_install.sh for VM creation under Ubuntu Server 14.04.
• KVM - Red Hat Enterprise Linux 7.2: The VPC-SI StarOS installation build includes an install script
called qvpc-si_install.sh.
• VMware ESXi 6.0: The VPC-SI StarOS installation build includes OVF (Open Virtualization Format)
and OVA (Open Virtual Application) templates for VM creation via the ESXi GUI.
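The sketch below shows one way to bring up the VM once the template or OVA has been prepared. File,
domain, host, and datastore names are hypothetical, and the shipped install scripts remain the recommended
path for KVM.
# KVM/libvirt: define and start the VM from an XML file derived from the shipped template.
virsh define qvpc-si.xml
virsh start qvpc-si
# VMware ESXi: deploy the shipped OVA with ovftool (or import it through the ESXi GUI).
ovftool --name=qvpc-si --datastore=datastore1 qvpc-si.ova vi://root@esxi-host/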
VM Configuration
VPC-SI requires that the VM be configured with:
• X vCPUs (see the vCPU and vRAM requirements for your release)
• Y vRAM (see the vCPU and vRAM requirements for your release)
• First vNIC is the management port (see vNIC Options, on page 8)
• Second and subsequent vNICs are service ports; one vNIC is required and up to 12 are supported by the
VPC, but this number may be limited by the hypervisor
• First vHDD is for boot image and configuration storage (4 GB recommended)
• Second vHDD is for record storage [optional] (16 GB minimum)
Note The number of vCPUs per VM should never exceed the maximum number of vCPUs supported by the
platform CPU.
To maximize performance, it may be desirable to adjust the number of vCPUs or vRAM to align with the
underlying hardware. SF supports varied vCPU and vRAM combinations; however, all SFs must share the
same combination within an instance.
Software will determine the optimal number of SESSMGR tasks per SF on startup of the SF based on the
number of vCPUs and amount of vRAM on that SF.
Note Dynamic resizing of vCPU count, vRAM size or vNIC type/count (via hotplug, ballooning, etc.) is not
supported. If these values need to be changed after provisioning, all VMs must be shut down and
reconfigured. Reconfiguration can be performed only on all VMs at once. VMs cannot be reconfigured
one at a time since the CPUs and RAM would not match the other instances.
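The fragment below illustrates, for KVM/libvirt, how these fixed resources appear in a domain definition. It
is a hypothetical sketch only; the libvirt XML template shipped with the StarOS build is the authoritative
reference, and the vCPU count, vRAM size, bridge names, and file names shown here are examples.
<domain type='kvm'>
  <name>qvpc-si</name>
  <vcpu placement='static'>4</vcpu>            <!-- vCPU count is fixed at provisioning -->
  <memory unit='GiB'>16</memory>               <!-- vRAM is fixed at provisioning -->
  <devices>
    <disk type='file' device='disk'>           <!-- first vHDD: boot image and configuration -->
      <source file='/var/lib/libvirt/images/qvpc-si-boot.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='disk'>           <!-- optional second vHDD: record storage -->
      <source file='/var/lib/libvirt/images/qvpc-si-records.qcow2'/>
      <target dev='vdb' bus='virtio'/>
    </disk>
    <interface type='bridge'>                  <!-- first vNIC: management port -->
      <source bridge='br-mgmt'/>
      <model type='virtio'/>
    </interface>
    <interface type='bridge'>                  <!-- second vNIC: first service port -->
      <source bridge='br-service1'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>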
vNIC Options
In this release the supported vNIC options include:
• VMXNET3 - Paravirtual NIC for VMware
• VIRTIO - Paravirtual NIC for KVM
• ixgbe - Intel 10 Gigabit NIC virtual function
• enic - Cisco UCS NIC
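A VIRTIO bridge interface is shown in the domain fragment above. For the ixgbe option, an Intel 10 Gigabit
SR-IOV virtual function is typically passed to the VM as a host device, as in this hypothetical libvirt snippet
(the PCI address is a placeholder for the virtual function on your host):
<interface type='hostdev' managed='yes'>
  <source>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</interface>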
vHDD
Under VMware, the first vHDD is presented as SCSI 0:0:0:0 and the second vHDD as SCSI 0:0:1:0; the raw
disks hd-local1 and hd-remote1 use RAID1.
For record storage (CDRs and UDRs) the CF VM should be provisioned with a second vHDD sized to meet
anticipated record requirements (minimum 16GB). Records will be written to /records on the second vHDD.
DPDK Internal Forwarder
The DPDK Internal Forwarder (IFTASK) is a software component that is responsible for packet input and
output operations and provides a fast path for packet processing in the user space by bypassing the Linux
kernel. During the VPC-SI boot process, a proportion of the vCPUs are allocated to IFTASK and the remainder
are allocated to application processing.
To determine which vCPUs are used by IFTASK and view their utilization, use the show npu utilization
table command as shown here:
[local]mySystem# show npu utilization table
          900-Sec Avg: lcore00|lcore01|lcore02|lcore03|lcore04|lcore05|lcore06|lcore07|
    QUEUE_THREAD_VNPU:        |       |       |       |       |       |       |       |
      QUEUE_CRYPTO_RX:        |       |       |       |       |       |       |       |
      QUEUE_CRYPTO_TX:        |       |       |       |       |       |       |       |
     QUEUE_THREAD_IPC:        |       |       |       |       |       |       |       |
          MCDMA_FLUSH:        |       |       |    68%|    69%|    67%|       |       |
QUEUE_THREAD_TYPE_MAX:        |       |       |       |       |       |       |       |
To view CPU utilization for the VM without the IFTASK vCPUs, use the show cpu info command. For more
detailed information use the verbose keyword.
[local]mySystem# show cpu info
Card 1, CPU 0:
Status : Active, Kernel Running, Tasks Running
Load Average : 8.99, 9.50, 8.20 (11.89 max)
Total Memory : 16384M
Kernel Uptime : 0D 0H 49M
Last Reading:
CPU Usage : 16.6% user, 10.5% sys, 0.0% io, 4.6% irq, 68.3% idle
Poll CPUs : 5 (1, 2, 3, 4, 5)
Processes / Tasks : 234 processes / 54 tasks
Network : 353.452 kpps rx, 3612.279 mbps rx, 282.869 kpps tx, 2632.760 mbps tx
File Usage : 2336 open files, 1631523 available
Memory Usage : 4280M 26.1% used, 42M 0.3% reclaimable
Maximum/Minimum:
CPU Usage : 23.2% user, 11.2% sys, 0.1% io, 5.5% irq, 61.5% idle
Poll CPUs : 5 (1, 2, 3, 4, 5)
Processes / Tasks : 244 processes / 54 tasks
Network : 453.449 kpps rx, 4635.918 mbps rx, 368.252 kpps tx, 3483.816 mbps tx
File Usage : 3104 open files, 1630755 available
Memory Usage : 4318M 26.4% used, 46M 0.3% reclaimable
Note The <version> variable appears in the filename beginning with release 16.1 and subsequent releases. For
additional information, see the StarOS Version Numbering appendix.