Hardware Architecture Guide for HPE Virtualized NonStop on VMware (a00064673enw)
CONTENTS
Introduction
Scope
HPE NonStop X architecture
HPE Virtualized NonStop
HPE Virtualized NonStop on VMware
HPE Virtualized NonStop architecture guide
Sharing server hardware between multiple HPE vNS systems
Hardware implementation guide
Connectivity
Rack
VMware requirements
HPE Virtualized NonStop and support for hardware product-lines
Appendix A: HPE vNS hardware support matrix
Section 1: Server models
Section 2: Fabric NICs
Section 3: NICs supporting SR-IOV for network interface in IP and Telco vCLIMs
Section 4: NICs supporting PCI passthrough for network interface in IP and Telco vCLIMs
Section 5: Storage products usable with HPE vNS
Section 6: Ethernet switches
Appendix B: Storage considerations
Multipath access between CLIM and storage volumes
Appendix C: System requirements
Appendix D: HPE Virtualized NonStop—System configurations
Appendix E: Building hardware Bill of Materials (BoM)—A real-world example
Step 1: Determine the number and type of physical servers needed
Step 2: Build the storage
Step 3: Networking the servers and storage
Step 4: Build the rack
Step 5: Software licenses
Connections
Fabric network
Storage Network
Maintenance Network
Management LAN
Enterprise LAN
Backup LAN
Rack Layout
Bill of materials
Fabric Connectivity for the HPE vNS system
Storage Networking Connectivity for the HPE vNS system
Small hardware footprint to host two HPE vNS systems
References
INTRODUCTION
The HPE Virtualized NonStop (vNS) system introduces a whole new way of implementing HPE NonStop solutions in today’s enterprise IT.
It allows an HPE NonStop system to be deployed as a guest in a virtualized IT infrastructure or in a private cloud environment. This
opens up the implementation choices for HPE NonStop solutions to a wide variety of hardware products available in the market.
To support HPE NonStop fundamentals of high availability, scalability, and security, Hewlett Packard Enterprise requires the virtualized
hardware environment to meet a set of rules so that the HPE vNS system offers the same features and benefits as available in the
HPE NonStop converged system (HPE NonStop X).
This document describes the requirements and rules for deploying an HPE vNS system in a virtualized environment. The document is
intended to help customers prepare the underlying environment and deploy HPE vNS systems in compliance with these rules and guidelines.
Scope
The scope of the document is to:
1. Specify the hardware components (such as server, storage, networking, and connectivity) of the infrastructure eligible to host an
HPE vNS system in a generic fashion.
2. State the rules governing the distribution and configuration of HPE vNS virtual machines (VMs) on the virtualized hosts.
3. Provide information about hardware configurations on which HPE vNS has been implemented. This serves as a reference implementation
to help readers design systems for their specific requirements.
4. Cover only the VMware® based virtualization environment.
This is a live document and will be updated periodically. The latest version of this document is available for download at
hpe.com/info/nonstop-ldocs.
A key feature of the HPE NonStop X architecture is redundancy against a single point of failure. The system interconnect consists of two
independent physical fabrics. Each storage volume is provisioned in two mirrored drives, each of which is connected to two CLIM storage
controllers to protect against failure of a single drive or a single CLIM storage controller. Network interfaces can be configured as failover
pairs to provide continuous connectivity against failure of one of the interfaces. The HPE NonStop software too is highly fault tolerant.
Two processes can be run on two separate logical processors in a primary-backup mode (called “process-pair”) with the primary process
sending regular status updates to the backup process (a method called “check-pointing”). Such an architecture is the cornerstone of the
near-continuous availability of HPE NonStop systems.
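To make the process-pair idea concrete, the following minimal Python sketch (illustrative only; the class and method names are not NonStop APIs) shows a primary process checkpointing its state to a backup that can take over from the last checkpoint:

# Conceptual sketch only: illustrates the primary/backup "process-pair" idea
# described above. Class and method names are illustrative, not NonStop APIs.

class BackupProcess:
    def __init__(self):
        self.state = {}               # last checkpointed state received from the primary

    def receive_checkpoint(self, state):
        self.state = dict(state)      # keep a copy of the primary's state

    def take_over(self):
        # On primary failure, the backup resumes from the last checkpoint.
        return dict(self.state)

class PrimaryProcess:
    def __init__(self, backup):
        self.backup = backup
        self.state = {"txn_count": 0}

    def process_transaction(self):
        self.state["txn_count"] += 1
        # "Check-pointing": send the updated state to the backup after each step.
        self.backup.receive_checkpoint(self.state)

backup = BackupProcess()
primary = PrimaryProcess(backup)
for _ in range(3):
    primary.process_transaction()
print(backup.take_over())   # {'txn_count': 3} -- the backup can continue where the primary left off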
This cluster of VMs and its associated resources (but not the hardware they are hosted on) is brought under the management of one or a
pair of HPE NonStop System consoles. The role of an orchestrator is critical for a clean HPE vNS system creation and eventual shutdown in
an intuitive and user-friendly manner. An orchestrator is a tool available in cloud environments, which helps administrators to automate the
tasks of VM definition, configuration, provisioning, sequencing, instantiation, connectivity, and such others through simple workflows aided
by a powerful graphical interface.
The objective of this architecture guide (AG) is to ensure that the HPE vNS system implementation is able to offer the same benefits to the customer as an
HPE NonStop system shipped from the factory.
While the HPE Pointnext Services organization offers professional services to do these tasks, it’s important that the rules governing the
implementation of HPE Virtualized NonStop systems are clearly understood by customers.
The HPE vNS AG specifies the requirements and guidelines that its implementation should meet for it to be supported by HPE. An HPE vNS
system built in compliance with this AG should provide the same availability and scalability advantages as an HPE NonStop X system.
The AG covers each component of an HPE vNS system, such as HPE vNS CPUs (interchangeably referred to as CPUs in this document),
storage vCLIMs, network, system interconnect fabric, and HPE vNSCs.
The following sections explain the requirements for each of these HPE vNS elements.
Compute servers
As represented in Figure 2, in an HPE vNS system, the HPE vNS CPUs and vCLIMs are VMs created in physical servers virtualized by
hypervisors—ESXi in the VMware environment. The HPE vNS CPUs, vCLIMs, and HPE vNSCs run as guest operating systems inside these
VMs and are distributed over several physical servers. The VMs for HPE vNS CPUs and vCLIMs are connected together by a system
interconnect using Ethernet switches and adapters that support the RoCE v2 protocol for low-latency data transfer. Using Ethernet for the
HPE vNS fabric enables HPE vNS to be deployed in standard IT environments where Ethernet is pervasive.
The AG requirements for servers used to deploy an HPE vNS system are:
1. The servers require one or more Intel® Xeon® x86 processors from one of the following processor families:
a. E5 v3 or E7 v3 (Haswell)
b. E5 v4 or E7 v4 (Broadwell)
NOTE
Throughout this document, when the term “core” is used in the context of a VM, a full processor core is implied (and not a hyper-thread).
4. HPE vNS VMs cannot share the processor cores with other VMs.
These VMs are highly sensitive to latency and require that the cores be dedicated to the HPE vNS VMs and not shared with others.
5. HPE vNS VMs require physical memory to be dedicated to the VMs.
For the same reasons as mentioned in (4) and for security, HPE vNS VMs require physical memory to be dedicated and not shared with
other VMs.
See Appendix A Section 1: Server models for the list of Intel processor models and HPE servers that have been used for HPE vNS
qualification by HPE.
Implementation notes—fabric
1. One fabric NIC for each physical processor in a server is recommended for production systems. Such a configuration provides an
HPE vNS CPU or vCLIM hosted on any processor in a server with direct access to the system interconnect fabric instead of indirect
access through other processors in the server. Indirect fabric access through another processor in the same server can lead to a
performance penalty versus a direct fabric access.
2. To implement X and Y fabrics, use two separate NIC ports. On hosts having a single fabric NIC, connect one port to the X fabric and the
other port to the Y fabric. On hosts having two fabric NICs, connect both the ports of one of the NICs to X fabric and those of the other
NIC to Y fabric. This also enhances availability of the fabric paths.
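As an illustration of the cabling rule above, here is a minimal Python sketch (host and NIC names are hypothetical) that assigns fabric ports to the X and Y fabrics for single-NIC and dual-NIC hosts:

# Minimal sketch of the X/Y fabric cabling rule described above.
# Host and NIC names are hypothetical examples.

def assign_fabric_ports(fabric_nics):
    """Return a {(nic, port): fabric} map following the rule:
    one NIC  -> port 1 to X, port 2 to Y;
    two NICs -> both ports of NIC 1 to X, both ports of NIC 2 to Y."""
    assignment = {}
    if len(fabric_nics) == 1:
        nic = fabric_nics[0]
        assignment[(nic, 1)] = "X"
        assignment[(nic, 2)] = "Y"
    elif len(fabric_nics) == 2:
        for port in (1, 2):
            assignment[(fabric_nics[0], port)] = "X"
            assignment[(fabric_nics[1], port)] = "Y"
    else:
        raise ValueError("expected one fabric NIC per physical processor (1 or 2 per server)")
    return assignment

print(assign_fabric_ports(["nic0"]))            # single-NIC host
print(assign_fabric_ports(["nic0", "nic1"]))    # dual-NIC host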
Figure 3 illustrates the logical view of a sample HPE vNS system deployed on virtualized server hardware. In this example, one HPE vNS
system (with a unique system number) consisting of four HPE vNS CPUs, four IP vCLIMs, four storage vCLIMs, and two HPE vNSCs is
deployed on four physical servers based on Intel Xeon processors.
2. An HPE vNS CPU requires the physical processor cores and memory to be dedicated to itself and not shared with any other VMs. Due to
its sensitivity to latency and performance, an HPE vNS CPU VM should have dedicated cores.
3. See the section “Selecting processor SKUs for servers” later in this document for guidance on processor selection for servers hosting
HPE vNS CPUs.
IP or Telco vCLIMs
An IP or Telco CLIM offloads network I/O processing from the CPUs in an HPE NonStop system. It terminates TCP/IP sessions between
external entities and an HPE NonStop system. The IP or Telco CLIM function is provided by the respective vCLIMs in an HPE vNS system.
Similar to high-end HPE NonStop X systems, a high-end HPE vNS system supports between 2 and 54 IP or Telco vCLIMs. Similarly,
an entry-class HPE NonStop X system and an entry-class HPE vNS system support between 2 and 4 IP/Telco vCLIMs.
Physical IP and Telco CLIMs in an HPE NonStop X system provide failover features to handle the failure of hardware ports (intra-CLIM
failover) and failure of the entire CLIM (CLIM-to-CLIM failover).
The AG requirements for IP and Telco vCLIMs are:
1. An IP or Telco vCLIM requires the physical processor cores and memory to be dedicated to itself and not shared with any other VMs.
Due to its sensitivity to latency and performance, an IP or Telco vCLIM should have dedicated cores and memory and the underlying
processor should have hyper-threading enabled.
2. IP and Telco vCLIM VMs require 4 or 8 dedicated cores. All IP or Telco vCLIMs from the same HPE vNS system should have the same
number of dedicated cores.
The default configuration for IP and Telco vCLIMs has eight dedicated cores. If the HPE vNS system is not expected to have heavy
network traffic, four cores may be dedicated to IP or Telco vCLIMs instead of eight. This flexibility eases deployment of IP or Telco
vCLIMs in development and test systems as they require fewer cores from the underlying server.
3. IP and Telco vCLIMs belonging to the same failover pair should not be deployed in the same physical server. More than one IP or Telco
vCLIM may be deployed on the same physical server if those IP or Telco vCLIMs belong to different failover pairs or are from different
HPE vNS systems.
If two vCLIMs belonging to the same failover pair are deployed in a physical server, should that server fail, both the primary and backup
vCLIMs will fail simultaneously, leading to an outage. (A placement check for this rule is sketched at the end of this list.)
4. IP or Telco vCLIMs can be configured to provide one of the following three types of network interfaces:
a. VMXNET3—this is the VMware paravirtualized network interface, which allows any network I/O card supported by VMware to be used
by an IP or Telco vCLIM. For the list of I/O cards supported by VMware, refer to vmware.com/resources/compatibility/pdf/vi_io_guide.pdf.
Of the three network connection types, VMXNET3 provides the lowest network throughput for a given Ethernet wire speed due to
the virtualization overhead. Network interfaces in IP and Telco vCLIMs that use VMXNET3 do not support CLIM failover features.
b. SR-IOV—in this type of interface, a physical port in a NIC is directly accessed and shared by multiple network interfaces belonging to
one or more VMs. It requires a NIC with sets of virtual functions and registers to allow such an access. As with the VMXNET3
interface, network throughput in an SR-IOV connection is divided between the network interfaces sharing the NIC. The aggregate
throughput of the network interfaces sharing the NIC using SR-IOV is closer to the wire speed of the NIC due to their direct access to
the NIC port.
Network interfaces of IP vCLIMs using SR-IOV based NIC access support CLIM to CLIM IP-address failover but do not support
intra-CLIM failover.
IP or Telco vCLIM support for SR-IOV-based network interface has dependency on specific device drivers and hence is limited to
specific NIC models. See Section 3: NICs supporting SR-IOV in Appendix A for more information.
c. PCI passthrough—this provides an IP or Telco vCLIM with exclusive direct access to a physical port in a NIC which it uses to provide
one network interface. Such a network interface offers the highest throughput compared to VMXNET3 or SR-IOV based network
interface types because the entire NIC port is dedicated to that interface.
PCI passthrough supports both intra-CLIM and CLIM-to-CLIM failover. Of the three network connection types, PCI passthrough
provides the closest match to the feature-set available in physical CLIMs.
IP or Telco vCLIM support for PCI passthrough network interface is limited to specific NIC models. See Section 4: NICs supporting PCI
passthrough for network interface in IP and Telco vCLIMs in Appendix A for more information.
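The following minimal Python sketch (with hypothetical vCLIM and server names) illustrates the placement rule from item 3 above: vCLIMs of the same failover pair must not be deployed on the same physical server.

# Sketch of a placement check for rule 3 above: vCLIMs of the same failover
# pair must not land on the same physical server. The placement data below
# is a hypothetical example.

from collections import defaultdict

def check_failover_placement(placement):
    """placement: list of (vclim_name, failover_pair_id, server). Returns violations."""
    servers_by_pair = defaultdict(set)
    violations = []
    for vclim, pair_id, server in placement:
        if server in servers_by_pair[pair_id]:
            violations.append((pair_id, server))
        servers_by_pair[pair_id].add(server)
    return violations

placement = [
    ("ip-vclim0", "pair-A", "server1"),
    ("ip-vclim1", "pair-A", "server2"),   # OK: same pair, different servers
    ("ip-vclim2", "pair-B", "server3"),
    ("ip-vclim3", "pair-B", "server3"),   # violation: same pair on one server
]
print(check_failover_placement(placement))   # [('pair-B', 'server3')]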
Storage vCLIMs
A storage CLIM offloads low-level storage I/O processing from the CPUs in an HPE NonStop system. The storage CLIM function is provided
by the VMs (vCLIMs) in an HPE vNS system. Similar to a high-end HPE NonStop X system, a high-end HPE vNS system can have between 2
and 54 storage vCLIMs. Likewise, as in an entry-class HPE NonStop X system, an entry-class HPE vNS system can have between 2 and 4
storage vCLIMs. The number of storage vCLIMs in an HPE vNS system must be an even number.
Storage drives can be connected to physical servers hosting storage vCLIMs as internal drives in the server or as external drives in one or
more network storage systems.
In a virtualized environment, the hypervisor intermediates between a VM and physical hardware resources. For VM access to physical
storage drives, VMware provides the means to create virtual disks from the physical storage drives and presents the virtual disks to the VM.
VMware vSphere provides several virtual SCSI storage controllers for VMs to access the virtual disks. For the HPE vNS storage vCLIM, the
VMware Paravirtual SCSI (PVSCSI) controller provides the best storage performance and is the recommended controller for the vCLIM.
The AG requirements for storage vCLIMs are:
1. On the HPE Virtualized NonStop (vNS) systems, the storage vCLIMs should be assigned dedicated processor cores and memory.
For similar reasons as stated earlier for CPU VMs and IP vCLIMs, storage vCLIMs also require dedicated processor cores and memory
that are not shared with other VMs by the hypervisor.
2. A storage vCLIM can be provisioned with either 4 or 8 processor cores. All storage vCLIMs should have the same number of cores
assigned.
This flexibility helps with making efficient use of the available hardware resources. Use of eight processor cores is the default
configuration and is required if volume level encryption (VLE) is implemented. Use of four processor cores supports systems with lower
storage requirements.
3. Storage vCLIMs belonging to the same failover pair should not be deployed on the same physical server. As a corollary, each of the
storage vCLIMs deployed on a physical server must belong to separate failover pairs.
Similar to the explanation for the IP vCLIMs of a system, if two storage vCLIMs belonging to the same failover pair are hosted on a
physical server, an outage of that server will cause an outage of the system.
4. A pair of storage vCLIMs can connect between one and 50 mirrored storage devices (up to a total of 100 LUNs).
5. If VLE is used, storage vCLIMs require connectivity to an Enterprise Secure Key Manager (ESKM). This is an IP connection, which can be
provisioned over a VMXNET3 interface of the storage vCLIM.
6. Storage CLIMs require storage I/O cards supported by VMware as specified in vmware.com/resources/compatibility/pdf/vi_io_guide.pdf.
HPE vNS uses VMware PVSCSI controller to connect to storage volumes. Hence any storage I/O card supported by VMware for block
level storage access will work for HPE vNS.
7. For connecting to external SAN storage, a physical server is recommended to have one storage NIC for each storage vCLIM deployed
on it.
8. HPE vNS requires block storage devices that are supported by VMware. For external storage options, refer to
vmware.com/resources/compatibility/pdf/vi_san_guide.pdf.
9. HPE vNS systems may be configured with multiple paths to storage volumes in either 2 CLIM-2 Disk (2C2D) or 4 CLIM-2 Disk (4C2D)
configurations. See the section Multipath access between CLIM and storage volumes in Appendix B: Storage considerations for more
information.
If the storage volumes are on local storage, support for multiple paths to storage volumes requires VMware vSAN™. In the absence of
vSAN, only configuration option (1) described in Appendix B: Storage considerations is supported.
The RAID 1 mirroring feature of vSAN offers another possible configuration wherein the storage vCLIM failover pairs connect to the
same virtual disk, which in turn is configured as RAID 1. This provides availability equivalent to that of a 2C2D configuration on NonStop.
Note: Do not use Volume Level Encryption (VLE) with this configuration because key-rotation cannot be performed using the supported
procedure.
10. For external storage connectivity, you may use either iSCSI (Ethernet) or Fibre Channel (FC) networks.
Since HPE vNS uses VMware PVSCSI controller, the deployment of storage volumes could use any storage networking technology
supported by VMware. Historically, the use of FC was popularized as a faster alternative to Ethernet networks for storage access.
However, the advent of faster Ethernet technologies coupled with the ubiquitous nature of Ethernet networks in enterprise data centers
has led to the increasing adoption of Ethernet networks for storage when compared to FC. You may implement either of these storage
networking options for connecting your servers to external storage.
11. For backup requirements, HPE vNS only supports virtual tapes and requires HPE NonStop BackBox.
HPE vNS does not support physical tapes. For backup needs, HPE vNS supports virtual tapes and requires either a virtual BackBox or a
physical BackBox VTC. You can connect multiple HPE vNS or converged HPE NonStop systems to a virtual BackBox or to a physical
BackBox VTC.
These storage requirements are in addition to the storage requirement for VMware products such as vSphere, vRealize Orchestrator, and
vCenter which will be applicable on servers they’re deployed on. For information on storage requirements for VMware products go to:
docs.vmware.com.
4. For storage that mandates a RAID configuration to protect against drive failures, consider RAID 5 unless your storage vendor and/or
storage architect have a different recommendation. RAID 5 protects against a single drive failure, provides higher write performance than
RAID 6, and uses physical storage capacity more efficiently than RAID 1. (A usable-capacity comparison is sketched after this list.)
5. Network storage systems can have storage overhead for high availability beyond RAID, which reduces the usable storage capacity. The
network storage system storage sizing tool must be used to determine usable storage capacity (such as NinjaSTARS for HPE 3PAR and
HPE Primera storage systems).
6. A network storage system can distribute the logical storage volumes across the entire set of drives in the storage system by default. The
primary and mirror HPE vNS volumes can thus be provisioned on the same set of drives. To protect against two drive failures, the
primary and mirror HPE vNS volumes should be provisioned on mutually exclusive sets of drives (preferably inside separate enclosures)
and supported by separate controllers.
Refer to the storage system vendor documentation to understand the implications of points 4 to 6.
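As a rough illustration of point 4 above, the following Python sketch compares the usable capacity of a hypothetical group of eight 960 GB drives under RAID 1, RAID 5, and RAID 6; actual usable capacity depends on the storage product, so use the vendor's sizing tool for real configurations.

# Illustration of point 4 above: usable capacity of a drive group under
# RAID 1, RAID 5, and RAID 6. Drive count and size are hypothetical.

def usable_tb(raid_level, drives, drive_tb):
    if raid_level == "RAID1":
        return (drives // 2) * drive_tb          # mirrored pairs: half the raw capacity
    if raid_level == "RAID5":
        return (drives - 1) * drive_tb           # one drive's worth of parity
    if raid_level == "RAID6":
        return (drives - 2) * drive_tb           # two drives' worth of parity
    raise ValueError(raid_level)

for level in ("RAID1", "RAID5", "RAID6"):
    print(level, usable_tb(level, drives=8, drive_tb=0.96), "TB usable from 8 x 960 GB drives")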
See the section “Selecting processor SKUs for servers” later in this document for guidance on processor selection for storage vCLIMs.
HPE vNSC
An HPE Virtualized NonStop System Console (HPE vNSC) should be hosted on a Windows VM. The HPE vNSC is a set of applications and not an
appliance. Customers need to install it on a VM with a separately licensed copy of Windows Server 2012, 2016, or 2019.
The AG requirements for HPE vNSC are:
1. An HPE vNS system should be managed by an HPE vNSC or a pair of HPE vNSCs.
An HPE vNSC is required to perform installation, configuration, and management tasks for an HPE vNS system. In an HPE NonStop X
system, NSC provides the HPE NonStop Halted State Service (HSS) software image to network boot a CPU before the HPE NonStop OS
is loaded on it. Two NSCs provide high availability for the HSS network boot server function. In an HPE vNS system, the HSS and the
HPE NonStop OS images are hosted in an independent management plane and critical functions such as HPE vNS CPU VM reloads do
not require access to the HPE vNSC. Hence one HPE vNSC instance is sufficient to manage an HPE vNS system.
2. One instance of HPE vNSC can manage up to eight HPE vNS systems.
3. If a server is hosting only vCLIMs, such a server may use processor SKUs different from the ones used by the servers hosting HPE vNS
CPUs. Such servers may use lower (and less expensive) processor SKUs to save on cost.
4. The processor core frequency is the primary factor that influences HPE vNS CPU performance. Processors with higher core frequency
and faster memory bus provide higher HPE vNS CPU performance. For best system performance, typically required for production
systems, use processors with higher core frequency (>= 3.2 GHz) and faster memory bus in servers which host HPE vNS CPUs.
High core frequency processors have higher cost than lower core frequency processors. For servers hosting HPE vNS CPUs of systems not
having demanding performance requirements (such as development systems) or for servers hosting only vCLIMs (storage or IP), you may
use processors with lower core frequency. Even for such servers, it’s recommended to use processors with a core frequency >= 2 GHz.
The Intel Xeon Scalable processor family uses names of precious metals to indicate processor performance tiers. Accordingly, the names Platinum
and Gold are used along with the processor model numbers to identify faster processors and these are good candidates for servers
hosting HPE vNS CPUs for production systems. The next processor tier called Silver is a good candidate for servers hosting HPE vNS
CPUs for development systems or for servers hosting only vCLIMs (of a production or a development system). Bronze processors are
not recommended for the servers hosting any of the HPE NonStop VMs.
As an example, you may refer to the HPE ProLiant DL380 QuickSpecs to see the list of Intel Xeon SKUs orderable with HPE ProLiant
DL380 servers. Based on your target configuration, you may select the appropriate processor SKU by referring to this document.
5. In servers with two processors, installing all of the memory required by the HPE vNS CPU VM in the DIMM sockets of one of the two
processors provides higher performance than splitting the same memory between the two processors as in a usual balanced memory
configuration. The amount of memory installed in the DIMM sockets of the second processor could be reduced to lower the server
hardware cost. This unbalanced memory configuration provides higher performance by allowing the HPE vNS CPU VM to access all of its
memory without having to access the second processor. For example, in a server with two processors that hosts an HPE vNS CPU VM with
256 GB memory and a vCLIM with 16 GB memory, 256 GB + overhead could be installed in DIMM sockets of one processor while 16 GB
+ overhead could be installed in DIMM sockets of the second processor. See the section Server memory for additional considerations.
6. The total number of cores required in a server should be equal to or greater than the sum of the cores required by all the VMs hosted in
the server and the cores required by ESXi. See Appendix C: System requirements for information on the number of cores required by
various constituents of a server hosting an HPE vNS system.
7. Use the following guidelines to arrive at the number of cores required by ESXi:
a. Compute 21% of the total number of cores required by the VMs hosted in the server that require dedicated cores and round up to the
next higher whole number.
b. For example, since HPE vNS CPUs and vCLIMs require dedicated cores, if those VMs consume 20 cores in a server, take 21% of 20,
which is 4.2. Round up to 5 cores for ESXi and add them to the 20 cores dedicated to HPE vNS CPU and vCLIM VMs, for a total minimum
of 25 cores required in the server to deploy the HPE vNS CPU and vCLIM VMs. (A sizing sketch follows the Server memory section below.)
8. In general, processor SKUs with faster cores and higher core counts may have higher power ratings and non-linearly higher prices.
9. Add free cores to the number of cores required to support future expansion or if you plan to use NSDC to dynamically scale HPE vNS CPU
cores.
10. For optimum performance, all the cores used by an HPE vNS CPU or a vCLIM VM should be deployed in the same processor instead of
being split across two or more processors. Splitting the cores of an HPE vNS CPU or a vCLIM VM across two or more processors will incur
a performance penalty associated with the data transfer between the processors.
Server memory
As mentioned in earlier sections, HPE vNS VMs require dedicated memory which cannot be shared with other VMs. For such VMs, ESXi
reserves an additional 20% of memory. To arrive at the total memory required in a physical server, mark up the memory required by
HPE vNS VMs by an additional 20% and add up the memory required by other VMs deployed in the server. Server vendors provide DIMM
population rules for using the full available memory bandwidth of the processor for best performance. HPE servers provide documentation
on recommended DIMM configuration in the QuickSpecs. It is recommended to comply with these guidelines.
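The core and memory sizing rules above can be summarized in a short Python sketch; the VM mix used in the example calls below is hypothetical.

# Sketch of the sizing rules above: ESXi core overhead is ~21% of the
# dedicated HPE vNS cores (rounded up), and dedicated VM memory is marked
# up by 20% for the ESXi reservation. The VM mix below is a hypothetical example.

import math

def server_core_requirement(dedicated_vm_cores):
    esxi_cores = math.ceil(0.21 * dedicated_vm_cores)
    return dedicated_vm_cores + esxi_cores

def server_memory_requirement_gb(dedicated_vm_memory_gb, other_vm_memory_gb=0):
    return dedicated_vm_memory_gb * 1.20 + other_vm_memory_gb

# Example from guideline 7b: 20 dedicated cores -> ceil(20 * 0.21) = 5 for ESXi, 25 total.
print(server_core_requirement(20))                                     # 25
# Example: a vNS CPU (256 GB) and a vCLIM (16 GB), plus 8 GB for ESXi itself.
print(server_memory_requirement_gb(256 + 16, other_vm_memory_gb=8))    # 334.4 GB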
Laying out HPE vNS VMs in servers
The following is a sample layout of the VMs in an HPE vNS system with two HPE vNS CPUs (2 cores each), 2 IP vCLIMs, and 4 storage
vCLIMs. In this example, the vCLIMs have been assigned 8 cores each. Servers 1 and 2 could be loaded with a higher performance
Intel® Xeon® Gold processor SKU (for example, Intel Xeon Gold 6246R processor) since they host the HPE vNS CPUs. Servers 3 and 4 could be
loaded with a lower cost Intel® Xeon® Silver processor SKU (for example, Intel Xeon Silver 4214 processor) since they do not host an
HPE vNS CPU.
FIGURE 4. Sample core layout of VMs in physical processors of a server
NOTE
The layout is only a logical representation. The actual core assignment is determined by ESXi at the time of VM deployment. The HPE vNS
deployment tools have limited influence over it.
Connectivity
An HPE NonStop system contains several elements, spanning technologies such as servers, storage, and networking, which work in tandem
to present a “single system” to the outside world. Proper connectivity between these elements is critical for the correct
operation of the system. The following sections explain these connection setups and best practices.
Fabric connection between HPE vNS CPUs and vCLIMs
HPE vNS CPUs and vCLIMs connect over the high-speed RoCE v2 fabric. The architecture guide for HPE vNS fabric is explained in an earlier
section. Only specific NIC models are supported for the fabric connection as explained in Section 2: Fabric NICs. These NICs are used in the
servers hosting HPE vNS CPU VMs and vCLIM VMs. The fabric switches should support the fabric speed requirement (25GbE to 100GbE)
and support data center bridging (DCB) and 802.3x Pause frames for flow control.
Two separate switches are required to support X and Y fabric for redundancy. In other words, the X and Y fabric cannot share the same
physical switch. On servers having a single 2-port fabric NIC, connect one of the ports to X and the other to Y. On servers having two 2-port
fabric NICs, connect both ports of one of the NICs to X and those of the other NIC to Y. This provides better availability characteristics. The
fabric switches should have an adequate number of ports to support all the fabric NICs in the system.
Figure 15 illustrates the fabric connections for the sample BoM which may be used as a reference.
External Storage network
For external storage connectivity, you may use either iSCSI (Ethernet) or FC networks. You may use a dedicated storage network for your
HPE vNS system or connect to your enterprise SAN to meet the storage needs of your HPE vNS system. The latter offers cost benefits
through storage consolidation. These are standard storage connectivity options and no special considerations are necessary for HPE vNS.
For redundancy, the paths to storage devices hosting primary and mirror volumes should be separated to ensure availability in case of single
point failures. If you’re using external storage arrays, it’s recommended to:
a) Use separate storage arrays for hosting primary and mirror volumes
b) Have separate, redundant, connection paths between the storage vCLIMs and storage devices. See Figure 16 for a possible connection
diagram.
If you are using dedicated external storage and an iSCSI connection, you may share the fabric switch for storage networking as well. This is
achievable if the switch model chosen supports 10GbE (for iSCSI) and the fabric speed (25GbE to 100GbE). This saves the cost of
dedicated 10GbE switches for the storage network. However, in such a configuration, it is highly recommended to isolate the fabric and iSCSI
network using VLANs for security purposes.
Maintenance LAN
The HPE vNS maintenance LAN is used by HPE vNSC to connect to the HPE vNS CPUs, and the vCLIMs, to securely perform
commissioning and configuration tasks. This LAN is also used to perform CLIM administration tasks from the HPE NonStop host using
CLIMCMD from TACL. A dedicated 1GbE switch is used for this LAN.
The OSM tools in HPE vNSC only manage the software in HPE vNS systems.
Separate VLANs should be used to isolate these networks from one another. This not only logically isolates the traffic between the VMs,
it can also be used to implement QoS for throughput and latency-sensitive traffic such as storage access.
Rack
HPE vNS does not require a dedicated rack. Depending on the target environment considerations (security and space), you may host the
HPE vNS hardware in a dedicated rack or share it with other hardware. 2U high servers are recommended as they provide more PCIe slots
for I/O cards.
VMware requirements
HPE vNS requires the following three VMware products:
1. VMware vSphere 6.7 and above
2. VMware vCenter 6.7 and above
3. VMware vRealize Orchestrator 7.3 and above
ESXi is the virtualization software that should be run on each physical server. HPE vNS requires vSphere Enterprise Plus Edition, which
supports SR-IOV for VM I/Os. For deploying a system, vRealize Orchestrator is needed, which is an appliance available in the vSphere
Enterprise Plus bundle.
VMware vCenter is required for managing and administering the VMware environment. HPE vNS does not require a dedicated vCenter. You
may integrate the hardware running an HPE vNS system into an existing vCenter managed environment. For production use, vCenter High
Availability (HA) is required, which involves running three vCenter instances (active, standby, and witness) on separate physical servers. One
license of vCenter Standard allows you to run the vCenter HA configuration consisting of the three instances.
While arriving at the HPE vNS hardware configuration, consider the resource requirements for these three VMware products. They are
documented at docs.vmware.com/.
Section 1: Server models
3rd Gen Intel Xeon Scalable processors (Gold and Silver): HPE ProLiant DL380 Gen10 Plus
2nd Gen Intel Xeon Scalable processors (Gold and Silver): HPE ProLiant DL380 Gen10
Intel Xeon Scalable processors (Gold and Silver): HPE ProLiant DL380 Gen10
Intel Xeon Broadwell (E5-nnnn v4 and E7-nnnn v4): HPE ProLiant DL380 Gen9
Intel Xeon Haswell (E5-nnnn v3 and E7-nnnn v3): HPE ProLiant DL380 Gen9
Section 2: Fabric NICs
Mellanox ConnectX-6 VPI or ConnectX-6 EN (MCX65nnnnA-AAAA or MCX61nnnnA-AAAA, where "n" is a numeral and "A" is a letter): HPE IB HDR100/EN 100G 2p 940QSFP56 Adptr
Mellanox ConnectX-4 VPI 2p (MCX456A-ECAT): HPE 840QSFP28 IB EDR/Ethernet 100 Gb 2-port
Mellanox ConnectX-4Lx EN (MCX4121A-ACUT): HPE 640SFP28 25GbE 2p ConnectX-4Lx Adapter
Section 3: NICs supporting SR-IOV for network interface in IP and Telco vCLIMs
NIC processor Tested with NICs Tested on servers
Section 4: NICs supporting PCI passthrough for network interface in IP and Telco vCLIMs
NIC processor Tested with NIC Tested on servers
NOTE
This table is given as an example. You may use any other switch that supports the fabric speed (25GbE to 100GbE) and DCB (802.3x).
1. Hardware Requirements for the VMware vCenter Server® Appliance™ (docs.vmware.com)
2. Hardware Requirements for the VMware vRealize® Orchestrator Appliance™ (docs.vmware.com)
3. ESXi hardware requirements (docs.vmware.com)
We will build the hardware in four steps: 1) Server hardware 2) Storage hardware 3) Networking hardware 4) Rack and peripherals.
Since there are 4 CPUs in this configuration, we need at least 4 physical servers. Apart from the CPU, we can use these servers to host:
1. One Storage CLIM
2. One Network CLIM
In addition to the above, we need resources to host one HPE vNSC and one vBB. We will use two of the four servers to host these VMs.
Note that increasing the number of VMs that are deployed in a host server increases the fault zone of that server. If the server encounters a fault
and goes down, the fault will bring down all the VMs deployed on the server. Hence, even though more than one Storage or one Network CLIM
can be hosted on a server (as long as they don’t belong to the same failover pair), such a configuration should generally be avoided.
dedicated cores i.e., 18 * 0.21 = 4 (rounded up). Thus, a total of 24 cores (or 26 cores if CPUs are provisioned with 2 additional cores
as stated above) will be required in each server.
b. I/O slots: These servers will require at least 4 I/O slots. One for system interconnect fabric NIC (100GbE), one for Storage CLIM NIC
(10GbE), two for Network CLIM NICs (10GbE). The HPE vNSC and vBB require 1GbE ports. If the server chassis does not have an
embedded LOM with 1GbE ports, an additional 1GbE NIC would be required. Since both vBB and storage vCLIM are hosted on the
same physical server, a dedicated NIC for connectivity between vBB and the storage vCLIM is not required.
c. Memory: Each server requires 152 GB of memory dedicated for HPE vNS VMs (128 GB for CPUs, 8 GB for storage CLIM, and 16 GB
for network CLIM). We recommend 20% additional memory for effective ESXi operation. In addition, 8 GB each is needed by ESXi, vBB,
and HPE vNSC, adding up to a total of 152 * 1.20 + 24 = 207 GB.
2. Two servers hosting one CPU and a storage CLIM each. The resource requirements for these servers are as below:
a. Cores: These servers will require 10 (or 12 for future upgrade of HPE vNS CPUs) dedicated cores for HPE vNS VMs (2 or 4 for the
HPE vNS CPU and 8 for the vCLIM) and an additional 10*0.21=3 cores for ESXi i.e., a total of 13 or 15 cores.
b. I/O slots: These servers will require at least 2 I/O slots. One for system interconnect fabric NIC (100GbE) and one for Storage CLIM
NIC (10GbE).
c. Memory: Each server requires 136 GB of memory dedicated for HPE vNS VMs (128 GB for CPUs and 8 GB for storage CLIM). We
recommend a 20% additional buffer for dedicated memory. This and the 8 GB of memory needed by ESXi add up to a total of
136 * 1.2 + 8 = 172 GB. (A worked memory check is sketched after the note below.)
NOTE
Server vendors provide rules to populate DIMMs in the servers for optimum performance. Follow those rules in addition to the above to
determine the final configuration of the DIMMs.
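The following short Python sketch reproduces the memory arithmetic from Step 1 above for the two server types; it simply applies the 20% markup on dedicated HPE vNS memory and adds 8 GB each for ESXi, vBB, and HPE vNSC where they are hosted.

# Worked memory check for the two server types in Step 1 above, using the
# figures from the text (dedicated HPE vNS memory marked up 20%, plus 8 GB
# each for ESXi, vBB, and HPE vNSC where they are hosted).

import math

def server_memory_gb(dedicated_gb, other_gb):
    return math.ceil(dedicated_gb * 1.20 + other_gb)

# Servers 1-2: vNS CPU (128) + storage vCLIM (8) + IP vCLIM (16), plus ESXi/vBB/vNSC (8 GB each)
print(server_memory_gb(128 + 8 + 16, 8 + 8 + 8))   # 207 GB
# Servers 3-4: vNS CPU (128) + storage vCLIM (8), plus ESXi (8 GB)
print(server_memory_gb(128 + 8, 8))                # 172 GB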
iSCSI NIC
Each server will need to access external storage, either to access NSK volumes (for storage CLIMs) or to access OS disk of the CLIM VMs or
Halted State Service (HSS) software for the HPE vNS CPUs. Thus, each server will require a NIC dedicated for iSCSI storage traffic. A NIC
with hardware iSCSI offload does not require the server processor to handle the iSCSI protocol and can possibly improve the server
processor performance for the HPE vNS VMs. In this example, an HPE FlexFabric 10Gb 2P 534FLR-SFP+ converged network adapter
(CNA) which has hardware iSCSI offload is used.
Ethernet NIC
Network vCLIMs can support up to five Ethernet interfaces. In this example, four Ethernet interfaces will be provided for each Network
vCLIM. Network vCLIMs support a limited selection of Ethernet NICs due to the need for CLIM drivers for PCI Passthrough access to the NIC.
From the list in Appendix A, Section 4, HPE Ethernet 10Gb 2P 530T Adapter is selected for the example HPE vNS system. Each 530T NIC
supports two Ethernet Passthrough interfaces. The example HPE vNS system Network vCLIM requirement is for four PCI Passthrough
interfaces, so two of the 530T NICs are needed in each server that will host a Network vCLIM. The example HPE vNS system has two IP
vCLIMs which will be hosted in the two servers with two processors. Hence we shall have two 530T NICs in each of these two servers.
NOTE
If the interface bonding feature (used by the Ethernet interfaces in the Network vCLIMs to support NIC failover) is not required, then any
Ethernet NIC supported by VMware may be used.
FIGURE 8. Layout of a possible core assignment to HPE vNS VMs and ESXi (socket 2 of compute nodes 3 and 4 is not populated)
The I/O slots of the DL380 Gen10 server will be filled as shown below.
FIGURE 9. Layout of the NICs in the I/O slots of DL380 Gen10 servers
NOTE
The above figure depicts a DL380 Gen10 system having a LOM with four 1GbE ports.
While a detailed discussion of these three choices is beyond the scope of this document, it is appropriate to note here that option (1) does
not provide the 4-path access to NSK volumes as discussed in Appendix B: Storage considerations. However, this may be an attractive
option for development systems since it provides all HPE NonStop storage functions except path switching and is a fair trade-off between
cost, performance, and availability characteristics.
2a. Selecting the storage product
Selecting the “right storage” for your HPE NonStop system requires careful consideration. It involves making several decisions based on
factors that may conflict with each other. For this discussion, we will use storage array-based technology. Storage arrays come in different
levels of sophistication. HPE NonStop has its own storage performance and latency requirements, and implements host-based mirrored disks
analogous to RAID 1 arrays for availability. Hence a basic storage array product such as HPE MSA 2060 is sufficient for HPE vNS storage,
particularly if the storage array is dedicated to the HPE vNS system. For best storage performance in production systems, consider using
SSDs. For our example HPE vNS system, the HPE MSA 2060 Storage Array will be used.
A large part of the storage requirement comes from NSK volumes (40 TB in our example) where the volumes exist in pairs of primary and
mirror drives. According to the rules of deployment explained in the section External Storage network, primary and mirror volumes should be
provisioned on physically separate hardware for availability. Hence the total storage requirement should be grouped into two halves
(20.755 TB in our example), and provisioned using physically separate hardware.
2c. Build one half of target storage
Due to Storage Mirroring, the storage can be specified for half of the total storage requirement (20.755 TB in the example) and then
duplicated for the other half. While the HPE NonStop system requires a total of 20 TB of usable storage, the storage volumes used by an
HPE vNS system are typically on the order of a few hundred GB. Hence the HPE NonStop system in our example uses a large number of
such volumes.
The storage I/O performance depends on several factors, one of which is the number of I/Os to a physical drive. If a large number of storage
volumes are provisioned on a storage drive, the I/O performance may be throttled due to a large number of I/Os simultaneously accessing a
specific drive. By using a larger number of smaller capacity drives instead of a smaller number of larger capacity drives for equivalent total
storage capacity, the same number of I/Os can be spread across more physical drives and improve storage performance.
The other significant factor affecting I/O performance is the storage drive hardware: HDD vs. SSD. The example HPE vNS system will use
SSDs for their superior performance.
In MSA arrays, the smallest drive size available at the time of this writing is 960 GB. In order to provision 20.755 TB of storage, a minimum of
20.755/0.960 = 22 drives is required. Storage arrays use RAID technology for availability. For our example implementation, we shall use
RAID 5 and round the drive count up to four groups of 6 data drives each, so that the groups can be evenly balanced across two MSA arrays.
This configuration uses 4 x 6 = 24 drives for data storage. Each RAID 5 group requires one additional drive for parity. The final implementation
therefore requires four RAID 5 groups with 7 drives per group (6 drives for data and 1 drive for parity), for a total of 4 x 7 = 28 drives of
960 GB each, which will provide 23.04 TB of data storage.
A single MSA array can host up to a total of 24 drives. For our example implementation, two MSA 2060 arrays will be used with each array
containing two of the four RAID 5 groups.
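The drive-count arithmetic above can be checked with a short Python sketch using the figures from this example (20.755 TB per half, 960 GB drives, RAID 5 groups of 6 data drives plus 1 parity drive):

# Worked drive-count check for Step 2c above: 20.755 TB per half, 960 GB
# drives, RAID 5 groups of 6 data + 1 parity drives, split across two MSA arrays.

import math

half_capacity_tb = 20.755
drive_tb = 0.960

data_drives_needed = math.ceil(half_capacity_tb / drive_tb)       # 22
groups = 4                                                        # two per MSA 2060 array
data_drives = groups * 6                                          # rounded up to 24 for even groups
total_drives = groups * 7                                         # + 1 parity drive per group = 28
usable_tb = data_drives * drive_tb                                # 23.04 TB

print(data_drives_needed, data_drives, total_drives, round(usable_tb, 2))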
2d. Build the other half of target storage
The two halves of storage should be identical to each other in every respect. Hence, to build the other half, duplicate the configuration that
was specified for the first half.
The schematic figure below illustrates the storage hardware configuration for primary and mirror arrays.
FIGURE 10. Layout of storage drives in the four MSA 2060 arrays
The following sections describe these networks and the specification of the network hardware.
Step 3a: Build the fabric network
The key task in this step is the selection of the switch. As per the section System interconnect fabric, we need two physically isolated fabric
networks, X and Y, and hence two fabric switches are required. The next task is to count the number of fabric ports required on the
switch which mainly depends on the number of fabric ports required for the system. In Step 1, six fabric NICs were specified for the four
servers. Each NIC has two 100GbE ports—one for X fabric and one for Y fabric. Hence the X and Y fabric switches each require at least six
100GbE ports.
The HPE vNS system requires the fabric switches to support IEEE 802.3x Pause frames and DCB protocols for flow control.
For the example HPE vNS system, we will use a pair of HPE SN2410bM 48SFP+ 8QSFP28 P2C switches. This switch model has eight
100GbE ports and, in addition, has 48 10GbE ports. In step 3b, the 10GbE ports will be used for the SAN.
Step 3b: Build the Storage Area Network (SAN)
In Step 2, we built the external storage hardware to provision storage for the system. These need to be connected to the servers over an
iSCSI or Fibre Channel (FC) network. The example HPE vNS system will use iSCSI for the SAN since it can be implemented over Ethernet
switches instead of requiring separate Fibre Channel switches.
For availability considerations, a pair of switches are required in order to have an alternate access path to storage arrays in case of a switch
failure. In Step 1, each server was configured with an HPE 534FLR-SFP+ NIC to be used by the storage vCLIM, and each NIC has two ports.
The two ports of the storage NIC are connected to the two switches for high availability. The example HPE vNS system has four host servers
with one storage NIC per server for a total of 8 ports to be connected to the two switches for the SAN. Thus, each switch requires 4 ports to
implement these connections.
Each of the four MSAs from Step 2 has four ports for a total of 16 ports to be connected to the two switches for the SAN. Thus, each switch
requires 8 ports to implement these connections.
Together, the four host servers and four MSAs require 12 ports on each of the two switches for the SAN. The SN2410bM 48SFP+ 8QSFP28
P2C switch selected in step 3a has 48 10GbE ports which supports the HPE vNS storage requirement for switch ports. Thus, the same pair
of SN2410bM 48SFP+ 8QSFP28 P2C switches will be used to implement both the fabric and the SAN switch requirements of the example
HPE vNS system.
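The switch port counts from Steps 3a and 3b can be summarized in a short Python sketch using the quantities from this example configuration:

# Port-count check for Steps 3a and 3b above. Quantities follow the example
# configuration in the text.

# Fabric (Step 3a): six 2-port 100GbE NICs, one port to X and one to Y per NIC
fabric_nics = 6
ports_per_fabric_switch = fabric_nics * 1          # 6 x 100GbE ports on each of the X and Y switches

# SAN (Step 3b): four servers with one 2-port storage NIC each, four MSAs with four ports each,
# spread evenly across the two switches
server_san_ports_per_switch = (4 * 2) // 2         # 4
msa_san_ports_per_switch = (4 * 4) // 2            # 8
san_ports_per_switch = server_san_ports_per_switch + msa_san_ports_per_switch   # 12

print(ports_per_fabric_switch, san_ports_per_switch)   # 6 and 12 -- within 8 x 100GbE and 48 x 10GbE per switch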
Please note that, as per the section Security Considerations, a switch that is used for the HPE vNS system interconnect fabric shall not be used to
support any other network besides the HPE vNS system. In this example HPE vNS system implementation, the storage arrays are
dedicated to the HPE vNS system and hence comply with this requirement. The SAN and the fabric traffic should be isolated using separate
VLANs.
Similar availability and security considerations as above apply if you're building an FC network for storage or accessing storage from a
central pool available in your data center.
Step 3c: Build the maintenance LAN
The maintenance LAN is required to establish a private, dedicated network for the HPE vNSC to communicate with HPE vNS VMs. This is a
low-speed 1GbE LAN connected to the servers hosting the HPE vNS VMs and the HPE vNSC. In our configuration, all these VMs are hosted on the four physical servers, and one 1GbE interface from each of these servers can be used for this purpose. If the servers have 1GbE LoM ports, one of them may be used as the maintenance LAN interface. Otherwise, a dedicated 1GbE NIC should be included in each server.
We need a switch that has at least four 1GbE ports to connect to the four servers, and we will use an Aruba 2930F 24G Switch for this purpose.
Step 3d: Connectivity to the enterprise network
In Steps 3a and 3c, we created networks that are typically dedicated to the HPE vNS system, whereas in Step 3b you may have either created a dedicated storage network for your HPE NonStop system or connected the servers to a dedicated storage pool over an enterprise-wide SAN. The next set of networks allows the HPE NonStop system to connect to the various types of data paths running in a typical enterprise, including connections to resource pools such as those used for backup.
The connection to an HPE NonStop system from the enterprise LAN is through the IP CLIMs. The HPE NonStop system under consideration here has two IP CLIMs (they are always deployed in pairs), and each IP CLIM has four 10GbE interfaces. We will connect these eight interfaces to the production LAN to allow external applications to connect to the HPE vNS host.
Step 3e: Connectivity for virtual BackBox (for data backups)
The HPE vNS system includes a virtual BackBox (vBB), which is an application that runs in a Windows VM on one of the host servers. It provides data backup and restore functionality for the system and presents backup volumes to the storage vCLIMs connected to it. If the vBB is not co-located on the same host as the storage vCLIM, you need to establish an iSCSI path between the two.
For data backup, the vBB connects to enterprise backup solutions such as HPE StoreOnce. Typically, the connectivity to backup devices is established through a separate network, for which a 1GbE network is sufficient. As mentioned earlier, we will use one of the remaining three 1GbE LoM ports for this connection. The network port used for this connectivity will be connected to the data center backup network, similar to how the Ethernet ports of the IP vCLIMs are connected in Step 3d above.
Step 3f: Connectivity to the management network
The hardware hosting the HPE vNS resources (servers, switches, storage arrays) needs to be managed using the customer's system management solutions (e.g., HPE OneView). vSphere ESXi running on the hosts also needs to be managed using vCenter. All this management typically happens from an operations center, which may be part of the same data center hosting the hardware or remote from it. Each of these hardware products provides a separate management port for this purpose (e.g., the HPE iLO of HPE ProLiant servers). There are two ways to achieve this connectivity:
1. Connect the management ports of the hardware products and the HPE iLO of the DL380 servers to the maintenance switch. The external management appliances reach this hardware through the maintenance switch. For security reasons, isolate the maintenance and management networks at layer 2 using VLANs. The BoM in this document uses this approach.
2. Use a separate 1GbE switch to connect the management ports of the hardware products and the HPE iLO of the DL380 servers. The external management appliances reach the hardware through this dedicated switch. The physical separation in this option offers better security than option 1, at the cost of additional hardware.
This completes the networking configuration for the hardware. For illustration, see the connectivity diagrams Figure 15 (Fabric Connectivity for the HPE vNS system) and Figure 16 (Storage Networking Connectivity for the HPE vNS system).
CONNECTIONS
An important aspect of putting the hardware together for HPE vNS is the connectivity among the various elements. An HPE vNS system requires a large number of cable connections. The following sections describe the cabling for the HPE vNS system hardware.
Fabric network
An HPE vNS system requires two fabric networks, X and Y, that are dedicated to the HPE vNS system and not shared with any other use. The switches and cables of the X and Y networks should be physically separate.
We will connect the NICs to the switches in the following two ways (refer to point 2 in the section Implementation notes—fabric); a cabling sketch follows the list:
1. The NIC in servers with one processor is connected such that one port connects to the "X switch" and the other port connects to the "Y switch".
2. The two NICs in servers with two processors are connected such that both ports of one NIC connect to the "X switch" and both ports of the other NIC connect to the "Y switch". Having two NICs in a server gives us the option to physically isolate X and Y traffic at the NIC level rather than at the port level. This provides higher availability by supporting the fabric traffic of all VMs in the host server on one fabric if the NIC for the other fabric should fail.
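The following Python sketch generates such a cabling plan. The server names, NIC labels, and the assumed mix of one- and two-processor servers (chosen to be consistent with the six fabric NICs counted in Step 1) are illustrative only.

```python
# Illustrative fabric cabling plan. Server names and NIC labels are
# hypothetical; the wiring rules follow the two cases described above.

servers = {
    "server-1": 1,  # one-processor servers carry a single fabric NIC (assumed)
    "server-2": 1,
    "server-3": 2,  # two-processor servers carry two fabric NICs (assumed)
    "server-4": 2,
}

def fabric_cabling(servers):
    plan = []
    for name, nics in servers.items():
        if nics == 1:
            # Single NIC: port 1 to the X switch, port 2 to the Y switch.
            plan.append((name, "NIC-1 port 1", "X switch"))
            plan.append((name, "NIC-1 port 2", "Y switch"))
        else:
            # Two NICs: NIC-1 entirely on X, NIC-2 entirely on Y.
            plan.append((name, "NIC-1 port 1", "X switch"))
            plan.append((name, "NIC-1 port 2", "X switch"))
            plan.append((name, "NIC-2 port 1", "Y switch"))
            plan.append((name, "NIC-2 port 2", "Y switch"))
    return plan

for entry in fabric_cabling(servers):
    print(entry)
```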
Storage Network
The storage network in the example HPE vNS system establishes a redundant path between the host servers running HPE NonStop VMs
and the storage arrays providing redundant storage drives. The example HPE vNS system also shares the switches between the system
interconnect fabric and storage network. This is allowed because the storage network is dedicated to the HPE vNS system and is physically
isolated from any external network. Please note that the specified switch should have enough switch processing capacity to support both
HPE vNS fabric traffic and storage traffic. Separate pairs of switches, one dedicated to the system interconnect fabric and the other
dedicated to the storage network, can be specified to overcome switch processing capacity limitations.
Each host server uses a 534FLR-SFP+ two-port adapter to support storage iSCSI traffic. For high availability, one of the two adapter ports is
connected to the “X switch” and the other adapter port is connected to the “Y switch”. This storage network connection configuration
ensures connectivity between the storage vCLIMs in the host servers and the storage arrays if one of the two switches should fail.
Each MSA has two controllers and each controller has four SFP+ ports. The HPE vNS system configuration does not require all 8 SFP+ ports
in each MSA storage array to be connected to support HPE vNS storage traffic. For high availability, one SFP+ port in each MSA controller is
connected to the “X switch” and a second SFP+ port in each MSA controller is connected to the “Y switch”. The remaining two SFP+ ports in
each MSA controller are left unconnected. This storage network connection configuration maintains access between both storage pools in
the MSA storage arrays and the storage vCLIMs in the host servers if one of the two switches should fail.
Overall, the storage network connection configuration supports continued access between the storage vCLIMs and the MSA storage arrays
after a NIC adapter port failure, storage network switch failure, or MSA controller failure.
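This redundancy can be illustrated with the small sketch below, which checks that every endpoint retains a path when either SAN switch fails. The endpoint names are hypothetical, and only two entries per table are shown for brevity.

```python
# Sketch checking that each MSA controller and each host server stays
# reachable if one SAN switch fails. Labels are hypothetical; the wiring
# follows the description above (one link per endpoint to each switch).

msa_controller_links = {
    ("MSA-1", "controller-A"): ["X switch", "Y switch"],
    ("MSA-1", "controller-B"): ["X switch", "Y switch"],
    # ... the remaining MSAs are wired identically
}

server_nic_links = {
    "server-1": ["X switch", "Y switch"],   # 534FLR-SFP+ port 1 / port 2
    # ... the remaining servers are wired identically
}

def survives_switch_failure(links, failed_switch):
    """True if every endpoint still has at least one live path."""
    return all(any(sw != failed_switch for sw in paths) for paths in links.values())

for failed in ("X switch", "Y switch"):
    ok = survives_switch_failure(msa_controller_links, failed) and \
         survives_switch_failure(server_nic_links, failed)
    print(f"Connectivity preserved after {failed} failure: {ok}")
```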
Refer to the HPE MSA 1060/2060/2062 Installation Guide for an illustration of a SAN cable connection configuration that could be used for the network between the HPE vNS system host servers and the MSA storage arrays.
Since the fabric switches in the example HPE vNS system are used for both system interconnect fabric and storage networks, the two
networks should be logically separated using VLANs in the switches.
Maintenance Network
Having built the fabric and storage networks, let us now build the maintenance LAN. Remember that this network is used to establish a secure, private, and physically isolated network between the HPE vNSC and the HPE NonStop VMs hosted on the servers. As mentioned earlier, we will connect one of the LoM ports of each of these servers to the switch for this purpose.
Management LAN
As described in Step 3f, this setup will share the switch between maintenance network and management network. The HPE iLO port of
DL380 servers, the management ports of the SN2410bM and Aruba 2530 switches, and the management ports of each of the MSA
controllers (two per MSA) are connected to the Aruba switch which in turn will be connected to the data center management LAN to
establish connectivity between management applications and the hardware.
Enterprise LAN
The Enterprise LAN connectivity provides external connectivity to the HPE vNS system, with the network resources provided by the data center. The Ethernet ports of the IP CLIMs are connected to the data center production LAN for this purpose.
Backup LAN
Last but not least is the connectivity required to connect the virtual BackBox to the data center backup network. This can be done by connecting one of the remaining 1GbE LoM ports of the server hosting the vBB to the data center backup network through a separate cable.
The following schematic diagrams illustrate how all the connectivity requirements are implemented:
Connectivity for servers with two processors (these host one vNS CPU, one IP vCLIM, one storage vCLIM, and a vNSC or a vBB, and require Windows licenses in addition to vSphere ESXi)
Connectivity for the MSA storage arrays
Rack Layout
Bill of materials
Qty | Product # | Product description | Purpose | Remarks
4 | R0Q76A | HPE MSA 2060 10GbE iSCSI SFF Storage | Storage | MSAs
4 | R0Q76A#0D1 | Factory Integrated | Storage |
56 | R0Q46A | HPE MSA 960GB SAS 12G Read Intensive SFF (2.5in) M2 3yr Wty SSD | Storage | Each MSA to have two RAID 5 groups of 6 + 1 each
56 | R0Q46A#0D1 | Factory Integrated | Storage |
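As a quick consistency check on these quantities, the short sketch below shows that 56 SSDs correspond to four MSAs, each populated with two RAID 5 (6 + 1) groups.

```python
# Consistency check for the storage bill-of-materials quantities.
msas = 4
raid_groups_per_msa = 2
drives_per_raid_group = 6 + 1        # RAID 5, 6 data + 1 parity

ssd_count = msas * raid_groups_per_msa * drives_per_raid_group
print(ssd_count)                     # -> 56, matching the quantity of R0Q46A SSDs
```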
Note: The actual core assignment is done by ESXi and may vary from the above
FIGURE 17. Layout of VMs over processor cores for the two HPE vNS systems
FIGURE 18. Layout of servers and switches for the two HPE vNS systems
Hardware bill of materials to host two HPE vNS systems on a small footprint
Qty | Product # | Product description | Purpose
2 | P01366-B21 | HPE 96W Smart Storage Battery (up to 20 Devices) with 145mm Cable Kit | Server
REFERENCES
HPE Virtualized NonStop Deployment and Configuration Guide for VMware available at HPESC (hpe.com/info/nonstop-ldocs).
LEARN MORE AT
hpe.com/info/nonstop
© Copyright 2022 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without
notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements
accompanying such products and services. Nothing herein should be construed as constituting an additional warranty.
Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Intel, Intel Xeon, Intel Xeon Gold, and Intel Xeon Silver are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or
other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. Windows and Windows Server
are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. VMware,
VMware ESXi, VMware vCenter Server, VMware vCenter Server Appliance, VMware vCloud, VMware vSAN, VMware vSphere,
VMware vSphere Enterprise Plus Edition, VMware vRealize Orchestrator Appliance, VMware vRealize Orchestrator, and VMware
vCenter are registered trademarks or trademarks of VMware, Inc. and its subsidiaries in the United States and other jurisdictions.
All third-party marks are property of their respective owners.
a00064673ENW, Rev. 3