
Spec Sheet

Cisco UCS X410c M7


Compute Node
A printed version of this document is only a copy
and not necessarily the latest version. Refer to
the following link for the latest released version:
https://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-x-series-modular-system/datasheet-listing.html

CISCO SYSTEMS PUBLICATION HISTORY


170 WEST TASMAN DR.
SAN JOSE, CA 95134
REV A.02 MAY 06, 2023
WWW.CISCO.COM
OVERVIEW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
DETAILED VIEWS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Cisco UCS X410c M7 Compute Node Front View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
COMPUTE NODE STANDARD CAPABILITIES and FEATURES . . . . . . . . . . . . . . . 6
CONFIGURING the Cisco UCS X410c M7 Compute Node . . . . . . . . . . . . . . . . . 9
STEP 1 CHOOSE BASE Cisco UCS X410c M7 Compute Node SKU . . . . . . . . . . . . . . . . . . . . 10
STEP 2 CHOOSE CPU(S) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
STEP 3 CHOOSE MEMORY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Memory configurations and mixing rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
STEP 4 CHOOSE REAR mLOM ADAPTER . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
STEP 5 CHOOSE OPTIONAL REAR MEZZANINE VIC/BRIDGE ADAPTERS . . . . . . . . . . . . . . . . 21
STEP 6 CHOOSE OPTIONAL FRONT MEZZANINE ADAPTER . . . . . . . . . . . . . . . . . . . . . . . . 23
STEP 7 CHOOSE OPTIONAL GPU PCIe NODE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
STEP 8 CHOOSE OPTIONAL GPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
STEP 9 CHOOSE OPTIONAL DRIVES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
STEP 10 ORDER M.2 SATA SSDs AND RAID CONTROLLER . . . . . . . . . . . . . . . . . . . . . . . . 29
STEP 11 CHOOSE OPTIONAL TRUSTED PLATFORM MODULE . . . . . . . . . . . . . . . . . . . . . . 30
STEP 12 CHOOSE OPERATING SYSTEM AND VALUE-ADDED SOFTWARE . . . . . . . . . . . . . . . 31
STEP 13 CHOOSE OPTIONAL OPERATING SYSTEM MEDIA KIT . . . . . . . . . . . . . . . . . . . . . . 34
SUPPLEMENTAL MATERIAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Simplified Block Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
System Board . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
UPGRADING or REPLACING CPUs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
UPGRADING or REPLACING MEMORY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
TECHNICAL SPECIFICATIONS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Dimensions and Weight . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Environmental Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

OVERVIEW
The Cisco UCS X-Series Modular System simplifies your data center, adapting to the unpredictable needs of
modern applications while also providing for traditional scale-out and enterprise workloads. It reduces the
number of server types to maintain, helping to improve operational efficiency and agility as it helps reduce
complexity. Powered by the Cisco Intersight™ cloud operations platform, it shifts your thinking from
administrative details to business outcomes with hybrid cloud infrastructure that is assembled from the
cloud, shaped to your workloads, and continuously optimized.

The Cisco UCS X410c M7 Compute Node is the computing device to integrate into the Cisco UCS X-Series
Modular System. Up to eight compute nodes can reside in the 7-Rack-Unit (7RU) Cisco UCS X9508 Chassis,
offering one of the highest densities of compute, IO, and storage per rack unit in the industry.

The Cisco UCS X410c M7 Compute Node harnesses the power of the latest 4th Gen Intel® Xeon® Scalable
Processors (Sapphire Rapids), and offers the following:

■ CPU: Four 4th Gen Intel® Xeon® Scalable Processors (codenamed Sapphire Rapids) with up to 60 cores
per processor.
■ Memory: Up to 16TB with 64 x 256GB1 DDR5-4800MT/s DIMMs, in a 4-socket configuration.
■ Storage: Up to 6 hot-pluggable, Solid-State Drives (SSDs), or Non-Volatile Memory Express (NVMe)
2.5-inch drives with a choice of enterprise-class Redundant Array of Independent Disks (RAID) or
pass-through controllers with four lanes each of PCIe Gen 4 connectivity and up to 2 M.2 SATA drives
for flexible boot and local storage capabilities.
■ mLOM virtual interface cards:
■ Cisco UCS Virtual Interface Card (VIC) 15420 occupies the server's Modular LAN on
Motherboard (mLOM) slot, enabling up to 50Gbps (2x 25Gbps) of unified fabric connectivity
to each of the chassis Intelligent Fabric Modules (IFMs) for 100Gbps connectivity per server.
■ Cisco UCS Virtual Interface Card (VIC) 15231 occupies the server's Modular LAN on
Motherboard (mLOM) slot, enabling up to 100Gbps of unified fabric connectivity to each of
the chassis Intelligent Fabric Modules (IFM) for 200Gbps (2x 100Gbps) connectivity per
server.
■ Optional Mezzanine card:
■ Cisco UCS Virtual Interface Card (VIC) 15422 can occupy the server's mezzanine slot at the
bottom rear of the chassis. An included bridge card extends this VIC's 100Gbps (4 x 25Gbps)
of network connections through IFM connectors, bringing the total bandwidth to 100Gbps
per VIC 15420 and 15422 (for a total of 200Gbps per server). In addition to IFM connectivity,
the VIC 15422 I/O connectors link to Cisco UCS X-Fabric technology.
■ Cisco UCS PCI Mezz card for X-Fabric can occupy the server's mezzanine slot at the bottom
rear of the chassis. This card's I/O connectors link to Cisco UCS X-Fabric modules and enable
connectivity to the X440p PCIe Node.
■ Security: Includes secure boot silicon root of trust FPGA, ACT2 anti-counterfeit provisions, and
optional Trusted Platform Module (TPM).

Figure 1 on page 5 shows a front view of the Cisco UCS X410c M7 Compute Node.

Notes

1. Available Post FCS (First Customer Shipment)


Figure 1 Cisco UCS X410c M7 Compute Node

Front View with Drives

DETAILED VIEWS
Cisco UCS X410c M7 Compute Node Front View
Figure 2 shows a front view of the Cisco UCS X410c M7 Compute Node.

Figure 2 Cisco UCS X410c M7 Compute Node Front View (Drives option)

Storage Drives Option

1 Power button/LED
2 System Activity LED
3 System Health LED
4 Locator LED/Switch
5 External optical connector (OCuLink) that supports local console functionality
6 Drive Bay slots 1-6

COMPUTE NODE STANDARD CAPABILITIES and FEATURES


Table 1 lists the capabilities and features of the base Cisco UCS X410c M7 Compute Node. Details about how
to configure the compute node for a listed feature or capability (for example, number of processors, disk
drives, or amount of memory) are provided in CONFIGURING the Cisco UCS X410c M7 Compute Node on
page 9.

Table 1 Capabilities and Features

Capability/Feature Description

Chassis    The Cisco UCS X410c M7 Compute Node mounts in a Cisco UCS X9508 chassis.

CPU    ■ Four 4th Gen Intel® Xeon® Scalable Processors (codenamed Sapphire Rapids)
■ Each CPU has 8 channels with up to 2 DIMMs per channel, for up to 16 DIMMs per CPU
■ UPI Links: up to four at 16 GT/s

Chipset    Intel® C741 series chipset

Memory    ■ 64 total DDR5-4800 MT/s DIMM slots (16 per CPU)
■ 50% peak bandwidth increase over DDR4-3200, with on-die ECC; all densities are Registered DIMMs (RDIMMs)
■ Up to 16TB DDR5-4800 MT/s DIMM capacity (64x 256GB1 DIMMs)

Mezzanine Adapter (Rear)    ■ An optional Cisco UCS Virtual Interface Card 15422 can occupy the server's mezzanine slot at the bottom of the chassis. A bridge card extends this VIC's 2x 50Gbps of network connections up to the mLOM slot and out through the mLOM's IFM connectors, bringing the total bandwidth to 100Gbps per fabric, for a total of 200Gbps per server.
■ An optional UCS PCIe Mezz card for X-Fabric is also supported in the server's mezzanine slot. This card's I/O connectors link to the Cisco UCS X-Fabric modules for UCS X-Series Gen4 PCIe node access.

mLOM    The modular LAN on motherboard (mLOM) card (Cisco UCS VIC 15231 or 15420) is located at the rear of the compute node.
■ The Cisco UCS VIC 15420 is a Cisco-designed PCI Express (PCIe) based card that supports two 2x25G-KR network interfaces to provide Ethernet communication to the network by means of the Intelligent Fabric Modules (IFMs) in the Cisco UCS X9508 chassis. The Cisco UCS VIC 15420 mLOM can connect to the rear mezzanine adapter card with a bridge connector.
■ The Cisco UCS VIC 15231 is a Cisco-designed PCI Express (PCIe) based card that supports two 2x100G-KR network interfaces to provide Ethernet communication to the network by means of the Intelligent Fabric Modules (IFMs) in the Cisco UCS X9508 chassis.


Mezzanine Adapters (Front)    One front mezzanine connector that supports:
■ Up to 6 x 2.5-inch SAS and SATA RAID-compatible SSDs
■ Up to 6 x 2.5-inch NVMe PCIe drives
■ A mixture of up to six SAS/SATA or NVMe drives
Note: Drives require a RAID or pass-through controller in the front mezzanine module slot.

Additional Storage    Dual 80 mm SATA 3.0 M.2 cards (up to 960 GB per card) on a boot-optimized hardware RAID controller

Video    Video uses a Matrox G200e video/graphics controller.
■ Integrated 2D graphics core with hardware acceleration
■ DDR4 memory interface supports up to 512 MB of addressable memory (16 MB is allocated by default to video memory)
■ Supports display resolutions up to 1920 x 1200 32 bpp @ 60 Hz
■ Video is available through an OCuLink connector on the front panel. An adapter cable (PID UCSX-C-DEBUGCBL) is required to connect the OCuLink port to the transition serial USB and video (SUV) octopus cable.

Front Panel Interfaces    OCuLink console port. Note that an adapter cable is required to connect the OCuLink port to the transition serial USB and video (SUV) octopus cable.

Power subsystem    Power is supplied from the Cisco UCS X9508 chassis power supplies. The Cisco UCS X410c M7 Compute Node consumes a maximum of 2500 W.

Fans    Integrated in the Cisco UCS X9508 chassis.

Integrated management processor    The built-in Cisco Integrated Management Controller enables monitoring of Cisco UCS X410c M7 Compute Node inventory, health, and system event logs.

Baseboard Management Controller (BMC)    ASPEED Pilot IV

ACPI    Advanced Configuration and Power Interface (ACPI) 6.2 standard supported. ACPI states S0 and S5 are supported. There is no support for states S1 through S4.

Front Indicators    ■ Power button and indicator
■ System activity indicator
■ Location button and indicator

Management    Cisco Intersight software (SaaS, Virtual Appliance, and Private Virtual Appliance)

Fabric Interconnect    Compatible with the Cisco UCS 6454, 64108, and 6536 fabric interconnects

Chassis    Compatible with the Cisco UCS 9508 X-Series Server Chassis

Notes:
1. Available post first customer ship (FCS).

CONFIGURING the Cisco UCS X410c M7 Compute Node


Follow these steps to configure the Cisco UCS X410c M7 Compute Node:

■ STEP 1 CHOOSE BASE Cisco UCS X410c M7 Compute Node SKU, page 10
■ STEP 2 CHOOSE CPU(S), page 11
■ STEP 3 CHOOSE MEMORY, page 13
■ STEP 4 CHOOSE REAR mLOM ADAPTER, page 18
■ STEP 5 CHOOSE OPTIONAL REAR MEZZANINE VIC/BRIDGE ADAPTERS, page 21
■ STEP 6 CHOOSE OPTIONAL FRONT MEZZANINE ADAPTER, page 23
■ STEP 7 CHOOSE OPTIONAL GPU PCIe NODE, page 24
■ STEP 8 CHOOSE OPTIONAL GPUs, page 25
■ STEP 9 CHOOSE OPTIONAL DRIVES, page 26
■ STEP 10 ORDER M.2 SATA SSDs AND RAID CONTROLLER, page 29
■ STEP 11 CHOOSE OPTIONAL TRUSTED PLATFORM MODULE, page 30
■ STEP 12 CHOOSE OPERATING SYSTEM AND VALUE-ADDED SOFTWARE, page 31
■ STEP 13 CHOOSE OPTIONAL OPERATING SYSTEM MEDIA KIT, page 34
■ SUPPLEMENTAL MATERIAL, page 35

STEP 1 CHOOSE BASE Cisco UCS X410c M7 Compute Node SKU


The top-level ordering product ID (PID) of the Cisco UCS X410c M7 Compute Node is shown in Table 2.
Table 2 Top level ordering PID

Product ID (PID) Description

UCSX-M7-MLB UCSX M7 Modular Server and Chassis MLB

Verify the product ID (PID) of the Cisco UCS X410c M7 Compute Node as shown in Table 3.

Table 3 PID of the Base Cisco UCS X410c M7 Compute Node

Product ID (PID) Description

UCSX-X410C-M7 Cisco UCS X410c M7 Compute Node 4S Intel 4th Gen CPU without CPU, memory,
drive bays, drives, VIC adapter, or mezzanine adapters (ordered as a UCS X9508
chassis option)

UCSX-X410C-M7-U Cisco UCS X410c M7 Compute Node 4S Intel 4th Gen CPU without CPU, memory,
drive bays, drives, VIC adapter, or mezzanine adapters (ordered standalone)

A base Cisco UCS X410c M7 Compute Node ordered in Table 3 does not include any components
or options. They must be selected during product ordering.

Please follow the steps on the following pages to order components such as the following, which
are required in a functional compute node:

■ CPUs
■ Memory
■ Cisco storage RAID or passthrough controller with drives (or blank, for no local drive
support)
■ SAS, SATA, NVMe, M.2, or U.2 drives
■ Cisco adapters (such as the 15000 series VIC or Bridge)

STEP 2 CHOOSE CPU(S)


The standard CPU features are:

■ The 4th Gen Intel® Xeon® Scalable Processors (codenamed Sapphire Rapids) are paired with
Intel® C741 series chipset
■ Up to 60 cores
■ Cache size of up to 112.50 MB
■ Power: Up to 350Watts
■ UPI Links: Up to Four at 16GT/s

Select CPUs

The available CPUs are listed in Table 4.

Table 4 Available CPUs


Product ID (PID)    Cores (C)    Clock Freq (GHz)    Power (W)    Cache Size (MB)    Highest DDR5 DIMM Clock Support (MT/s)

8000 Series Processors


UCSX-CPU-I8490H 60 1.90 350 112.50 4800
UCSX-CPU-I8468H 48 2.10 330 105.00 4800
UCSX-CPU-I8460H 40 2.20 330 105.00 4800
UCSX-CPU-I8454H 32 2.10 270 82.50 4800
UCSX-CPU-I8450H 28 2.00 250 75.00 4800
UCSX-CPU-I8444H 16 2.90 270 45.00 4800
6000 Series Processors
UCSX-CPU-I6448H 32 2.40 250 60.00 4800
UCSX-CPU-I6434H 8 3.70 195 22.50 4800
UCSX-CPU-I6418H 24 2.10 185 60.00 4800
UCSX-CPU-I6416H 18 2.20 165 45.00 4800

Supported Configurations

(1) DRAM configuration:

■ Select four identical CPUs listed in Table 4 on page 11

(2) Configurations with NVMe PCIe drives:

■ Select four identical CPUs listed in Table 4 on page 11

(3) Four-CPU Configuration

■ Choose four identical CPUs from any one of the rows of Table 4 Available CPUs, page 11
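For illustration only (this is not a Cisco ordering tool), the following minimal Python sketch encodes the rule above: a valid selection is exactly four identical CPUs drawn from Table 4. The PID set is transcribed from Table 4; the function name is ours.

```python
# Minimal sketch: validate a CPU selection against the X410c M7 rules above.
# PIDs transcribed from Table 4; this is not a Cisco configurator.
TABLE_4_CPU_PIDS = {
    "UCSX-CPU-I8490H", "UCSX-CPU-I8468H", "UCSX-CPU-I8460H",
    "UCSX-CPU-I8454H", "UCSX-CPU-I8450H", "UCSX-CPU-I8444H",
    "UCSX-CPU-I6448H", "UCSX-CPU-I6434H", "UCSX-CPU-I6418H",
    "UCSX-CPU-I6416H",
}

def validate_cpu_selection(pids: list[str]) -> None:
    """Raise ValueError unless the selection is four identical Table 4 CPUs."""
    if len(pids) != 4:
        raise ValueError("the X410c M7 requires exactly four CPUs")
    if len(set(pids)) != 1:
        raise ValueError("all four CPUs must be identical")
    if pids[0] not in TABLE_4_CPU_PIDS:
        raise ValueError(f"unknown CPU PID: {pids[0]}")

validate_cpu_selection(["UCSX-CPU-I8460H"] * 4)  # passes silently
```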

STEP 3 CHOOSE MEMORY


Table 5 below describes the main memory DIMM features supported on the Cisco UCS X410c M7 Compute Node.

Table 5 X410c M7 Main Memory Features

Memory DIMM server technologies Description

Maximum DDR5 memory clock speed    Up to 4800 MT/s 1DPC; up to 4400 MT/s 2DPC

Operational voltage 1.1 Volts

DRAM Fab. density 16Gb

DRAM DIMM type RDIMM (Registered DDR5 DIMM with on die ECC)

Memory DIMM organization Eight memory DIMM channels per CPU; up to 2 DIMMs per channel

Maximum number of DRAM DIMMs per server    64 (4-socket)

DRAM DIMM densities and ranks    16GB 1Rx8, 32GB 1Rx4, 64GB 2Rx4, 128GB 4Rx4, 256GB1 8Rx4

Maximum system capacity (DRAM DIMMs only)    16TB (64x 256GB1)

Notes:
1. 256GB DIMM Available post first customer ship (FCS)

Figure 3 Cisco UCS X410c M7 Compute Node Memory Organization

Select DIMMs and Memory Mirroring

Select the memory configuration and whether or not you want the memory mirroring option.
The available memory DIMMs and mirroring option are listed in Table 6.

NOTE: When memory mirroring is enabled, the memory subsystem simultaneously writes identical data to two channels. If a memory read from one of the channels returns incorrect data due to an uncorrectable memory error, the system automatically retrieves the data from the other channel. A transient or soft error in one channel does not affect the mirrored data, and operation continues unless there is a simultaneous error in exactly the same location on a DIMM and its mirrored DIMM. Memory mirroring reduces the amount of memory available to the operating system by 50% because only one of the two populated channels provides data.
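As a quick illustration of the 50% capacity effect described in the note, here is a minimal sketch (ours, not a Cisco tool) that computes OS-visible memory for a given DIMM population:

```python
# Minimal sketch: memory mirroring halves OS-visible capacity because only
# one of the two mirrored channels provides data (see the note above).
def usable_memory_gb(dimm_count: int, dimm_density_gb: int,
                     mirroring: bool) -> int:
    total_gb = dimm_count * dimm_density_gb
    return total_gb // 2 if mirroring else total_gb

print(usable_memory_gb(64, 256, mirroring=False))  # 16384 GB (16TB) raw
print(usable_memory_gb(64, 256, mirroring=True))   # 8192 GB visible to the OS
```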

Table 6 Available DDR5 DIMMs

Product ID (PID) PID Description

DRAMs
UCSX-MRX16G1RE1 16GB DDR5-4800 RDIMM 1Rx8 (16Gb)
UCSX-MRX32G1RE1 32GB DDR5-4800 RDIMM 1Rx4 (16Gb)
UCSX-MRX64G2RE1 64GB DDR5-4800 RDIMM 2Rx4 (16Gb)
UCSX-MR128G4RE1 128GB DDR5-4800 RDIMM 4Rx4 (16Gb)
UCSX-MR256G8RE11 256GB DDR5-4800 RDIMM 8Rx4 (16Gb)
Memory Mirroring Option
N01-MMIRRORD Memory mirroring option
Accessories/spare included with Memory configuration:
■ UCS-DDR5-BLK2 is auto included for the unselected DIMM slots

Notes:
1. Available post first customer ship (FCS).
2. Any empty DIMM slot must be populated with a DIMM blank to maintain proper cooling airflow.

Memory configurations and mixing rules


■ Memory on every CPU socket shall be configured identically.
■ System speed is dependent on the CPU DIMM speed support. Refer to Available CPUs on page 11 for
DIMM speeds.
■ For full details on supported memory configurations see the M7 Memory Guide
■ DIMM Count Rules:
■ DIMM count for 1-CPU (for reference only; 1-CPU configurations are not supported on the X410c M7):
— Minimum DIMM count = 1; Maximum DIMM count = 16
— 1, 2, 4, 6, 8, 121, or 16 DIMMs allowed
— 3, 5, 7, 9, 10, 11, 13, 14, 15 DIMMs not allowed.
■ Allowed DIMM count for 4-CPU:
— Minimum DIMM count = 4; Maximum DIMM count = 64
— 4, 8, 16, 24, 32, 481, or 64 DIMMs allowed
— 12, 20, 28, 36, 40, 44, 52, 56, 60 DIMMs not allowed.
NOTE (1): A 12-DIMM count for 1-CPU and a 48-DIMM count for 4-CPU configurations are only allowed when all DIMMs have the same density.
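A minimal sketch (ours, illustrative only) of the 4-CPU count rule, including the same-density condition on 48 DIMMs from the note above:

```python
# Minimal sketch: check a total DIMM count against the 4-CPU rules above.
ALLOWED_4CPU_DIMM_COUNTS = {4, 8, 16, 24, 32, 48, 64}

def dimm_count_ok(total_dimms: int, densities_gb: set[int]) -> bool:
    if total_dimms not in ALLOWED_4CPU_DIMM_COUNTS:
        return False
    # 48 DIMMs is only allowed when all DIMMs share one density (NOTE above).
    if total_dimms == 48 and len(densities_gb) != 1:
        return False
    return True

print(dimm_count_ok(48, {64}))       # True: uniform density
print(dimm_count_ok(48, {64, 128}))  # False: mixed densities at 48 DIMMs
print(dimm_count_ok(40, {64}))       # False: 40 is not an allowed count
```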
■ DIMM Population Rules:
■ Each channel has two memory slots (for example, channel A = slots A1 and A2)
— A channel can operate with one or two DIMMs installed.
— If a channel has only one DIMM, populate slot 1 first (the blue slot).
■ When all CPUs are installed, populate the memory slots of each CPU identically. Fill the blue
slots (slot 1) in the memory channels first according to the recommended DIMM populations
in Table 7.

Table 7 M7 DIMM Population Order per socket

Population of DIMM slots per socket1

#DIMMs per CPU    Slot 1 (Blue)    Slot 2 (Black)
1 A1 -
2 A1, G1 -
4 A1, C1, E1, G1 -
6 A1, C1, D1, E1, F1, G1 -
8 A1, B1, C1, D1, E1, F1, G1, H1 -
122 A1, B1, C1, D1, E1, F1, G1, H1 A2, C2, E2, G2
16 A1, B1, C1, D1, E1, F1, G1, H1 A2, B2, C2, D2, E2, F2, G2, H2
Notes:
1. See DIMM Mixing Rules for allowed combinations across slots 1 and 2.
2. Only valid when DIMMs in blue and black slots are the same density.
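Table 7 lends itself to a simple lookup; the sketch below (ours, illustrative only) returns the per-socket slot population for a given DIMM count:

```python
# Minimal sketch: per-socket DIMM population order transcribed from Table 7.
# Blue slot 1 positions fill first; black slot 2 positions follow.
POPULATION_ORDER = {
    1:  ["A1"],
    2:  ["A1", "G1"],
    4:  ["A1", "C1", "E1", "G1"],
    6:  ["A1", "C1", "D1", "E1", "F1", "G1"],
    8:  ["A1", "B1", "C1", "D1", "E1", "F1", "G1", "H1"],
    12: ["A1", "B1", "C1", "D1", "E1", "F1", "G1", "H1",
         "A2", "C2", "E2", "G2"],   # valid only with uniform DIMM density
    16: ["A1", "B1", "C1", "D1", "E1", "F1", "G1", "H1",
         "A2", "B2", "C2", "D2", "E2", "F2", "G2", "H2"],
}

def slots_for(dimms_per_cpu: int) -> list[str]:
    if dimms_per_cpu not in POPULATION_ORDER:
        raise ValueError(f"{dimms_per_cpu} DIMMs per CPU is not supported")
    return POPULATION_ORDER[dimms_per_cpu]

print(slots_for(4))  # ['A1', 'C1', 'E1', 'G1']
```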

■ DIMM Mixing Rules:

■ Higher rank DIMMs shall be populated on Slot 1.
■ Mixing different DIMM densities in the same slot across channels is not supported. All populated slots of the same color must have the same DIMM density.
■ The DIMM mixing rules matrix is described in Table 8, below.

Table 8 Supported DIMM mixing and population across 2 slots in each channel

DIMM Slot 1 (Blue)    DIMM Slot 2 (Black)
                      16GB 1Rx8    32GB 1Rx4    64GB 2Rx4    128GB 4Rx4    256GB3 8Rx4
16GB 1Rx8             Yes1         No           No           No            No
32GB 1Rx4             No           Yes1         No           No            No
64GB 2Rx4             No           Yes2         Yes1         No            No
128GB 4Rx4            No           No           No           Yes           No
256GB3 8Rx4           No           No           No           Yes2          Yes1

Notes:
1. Only 6 or 8 channels are allowed (for 2, 4, or 8 DIMMs you would just populate 1 DPC on 2, 4, or 8 channels)
2. When mixing two different DIMM densities, all 8 channels per CPU must be populated. Use of fewer than 8
channels (16 slots per CPU) is not supported.
3. Available post first customer ship (FCS)
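The Table 8 matrix reduces to a small lookup keyed by the slot 1 and slot 2 densities; the sketch below (ours, illustrative only) returns whether a pairing is permitted:

```python
# Minimal sketch: the Table 8 mixing matrix as a lookup keyed by
# (slot 1 density GB, slot 2 density GB). Absent keys mean "No".
MIXING_OK = {
    (16, 16): True, (32, 32): True, (64, 64): True,
    (128, 128): True, (256, 256): True,
    (64, 32): True,    # mixed pair: all 8 channels per CPU must be populated
    (256, 128): True,  # mixed pair: same 8-channel requirement applies
}

def mixing_allowed(slot1_gb: int, slot2_gb: int) -> bool:
    return MIXING_OK.get((slot1_gb, slot2_gb), False)

print(mixing_allowed(64, 32))  # True (with all 8 channels populated)
print(mixing_allowed(32, 64))  # False: higher-rank DIMMs belong in slot 1
```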

■ Memory Limitations:
■ Memory on every CPU socket shall be configured identically.
■ Refer to Table 7 and Table 8 for DIMM population and DIMM mixing rules.
■ Cisco memory from previous generation servers (DDR3 and DDR4) is not supported with the
M7 servers.
■ For best performance, observe the following:
■ For optimum performance, populate at least one DIMM per memory channel per CPU. When
one DIMM per channel is used, it must be populated in DIMM slot 1 (blue slot farthest away
from the CPU) of a given channel.
■ The maximum 2DPC speed is 4400 MT/s; refer to Table 9 for details.

Table 9 DDR5-4800 DIMM 1DPC and 2DPC speed matrix

CPU / DIMM speed    DDR5 DIMM 1DPC    DDR5 DIMM 2DPC
CPU 4800 MT/s       4800 MT/s         4400 MT/s

NOTE: For full details on supported memory configurations see the M7 Memory Guide
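The resulting memory clock follows directly from Table 9; a one-function sketch (ours, assuming a DDR5-4800-capable CPU, as all Table 4 CPUs are):

```python
# Minimal sketch: resulting DDR5 speed per Table 9 for a 4800 MT/s CPU.
def memory_speed_mts(dimms_per_channel: int) -> int:
    if dimms_per_channel == 1:
        return 4800  # 1DPC runs at the full DDR5-4800 rate
    if dimms_per_channel == 2:
        return 4400  # 2DPC drops to 4400 MT/s
    raise ValueError("each channel supports at most 2 DIMMs")

print(memory_speed_mts(2))  # 4400
```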

STEP 4 CHOOSE REAR mLOM ADAPTER


The Cisco UCS X410c M7 Compute Node must be ordered with a Cisco VIC mLOM Adapter. The
adapter is located at the back and can operate in a single-CPU or dual-CPU configuration.
Table 10 shows the mLOM adapter choices.

Table 10 mLOM Adapters

Product ID (PID) Description Connection type

UCSX-ML-V5D200G-D Cisco UCS VIC 15231 2x100/200G mLOM for X410c mLOM
M7 Compute Node

UCSX-ML-V5Q50G-D Cisco UCS VIC 15420 4x25G secure boot mLOM for X mLOM
Compute Node

NOTE:
■ The VIC 15420 and VIC 15231 are supported with both the X9108-IFM-25G and the X9108-IFM-100G. The VIC 15420 operates at 4x 25G with either IFM, while the VIC 15231 operates at 4x 25G with the X9108-IFM-25G and at 2x 100G with the X9108-IFM-100G.
■ The mLOM adapter is mandatory for Ethernet connectivity to the network by means of the IFMs and has x16 PCIe Gen4 connectivity toward CPU 1 with either the Cisco UCS VIC 15420 or the Cisco UCS VIC 15231.
■ There is no backplane in the Cisco UCS X9508 chassis; thus, the compute nodes
directly connect to the IFMs using Orthogonal Direct connectors.
■ Figure 4 shows the location of the mLOM and rear mezzanine adapters on the
Cisco UCS X410c M7 Compute Node. The bridge adapter connects the mLOM
adapter to the rear mezzanine adapter.

Figure 4 shows the network connectivity from the mLOM out to the 25G IFMs.

Figure 4 Network Connectivity 25G IFMs

[Diagram: each IFM in the UCS X9508 chassis connects from its Cisco ASIC over four 25G-KR lanes through the IFM and mLOM Orthogonal Direct connectors to the compute node, where the mLOM adapter ASIC and the mezzanine adapter ASIC (linked by the bridge adapter) terminate the lanes.]

Figure 5 shows the network connectivity from the mLOM out to the 100G IFMs.

Figure 5 Network Connectivity 100G IFMs

[Diagram: each IFM in the UCS X9508 chassis connects from its Cisco ASIC over a 100G-KR4 link through the IFM and mLOM Orthogonal Direct connectors to the mLOM adapter ASIC; the mezzanine slot is empty.]

STEP 5 CHOOSE OPTIONAL REAR MEZZANINE VIC/BRIDGE ADAPTERS


The Cisco UCS X410c M7 Compute Node has one rear mezzanine adapter connector, which can accept a UCS VIC 15422 mezzanine card that serves either as a second VIC card for network connectivity or as a connector to the X440p PCIe node via the X-Fabric modules. The same mezzanine slot can also accommodate a pass-through mezzanine adapter for X-Fabric, which enables compute node connectivity to the X440p PCIe node. Refer to Table 11 for supported adapters.

Table 11 Available Rear Mezzanine Adapters

Product ID(PID) PID Description Connector Type

Cisco VIC Card

UCSX-V4-PCIME-D1    UCS PCI Mezz Card for X-Fabric    Rear mezzanine connector on motherboard

UCSX-ME-V5Q50G-D    Cisco UCS VIC 15422 4x25G secure boot mezz for X Compute Node    Rear mezzanine connector on motherboard

Cisco VIC Bridge Card2

UCSX-V5-BRIDGE-D    UCS VIC 15000 bridge to connect mLOM and mezz X Compute Node (this bridge connects the Cisco VIC 15420 mLOM and the Cisco VIC 15422 mezz for the X410c M7 Compute Node)    One connector on mezz card and one connector on mLOM card
Notes:
1. If this adapter is selected, then two CPUs are required and UCSX-ME-V5Q50G-D or UCSX-V4-PCIME-D is
required.
2. Included with the Cisco VIC 15422 mezzanine adapter.

NOTE: The UCSX-V4-PCIME-D rear mezzanine card for X-Fabric has PCIe Gen4 x16 connectivity toward each of CPU 1 and CPU 2. Additionally, the UCSX-V4-PCIME-D provides two PCIe Gen4 x16 links to each X-Fabric. This rear mezzanine card enables connectivity from the X410c M7 Compute Node to the X440p PCIe node.

Table 12 Throughput Per UCS X410c M7 Server

■ VIC 15231 with FI-6536 + X9108-IFM-100G: 200G throughput per node (100G per IFM); 2 vNICs needed for maximum bandwidth; 1x 100G-KR from the VIC to each IFM; 100G single-vNIC throughput (1x 100G-KR); 100G maximum single-flow bandwidth per vNIC; 100G single-vHBA throughput.
■ VIC 15231 with FI-6536/6400 + X9108-IFM-25G: 100G throughput per node (50G per IFM); 2 vNICs needed for maximum bandwidth; 2x 25G-KR from the VIC to each IFM; 50G single-vNIC throughput (2x 25G-KR); 25G maximum single-flow bandwidth per vNIC; 50G single-vHBA throughput.
■ VIC 15420 with FI-6536 + X9108-IFM-25G/100G or FI-6400 + X9108-IFM-25G: 100G throughput per node (50G per IFM); 2 vNICs needed for maximum bandwidth; 2x 25G-KR from the VIC to each IFM; 50G single-vNIC throughput (2x 25G-KR); 25G maximum single-flow bandwidth per vNIC; 50G single-vHBA throughput.
■ VIC 15420 + VIC 15422 with FI-6536 + X9108-IFM-25G/100G or FI-6400 + X9108-IFM-25G: 200G throughput per node (100G per IFM); 4 vNICs needed for maximum bandwidth; 4x 25G-KR from the VICs to each IFM; 50G single-vNIC throughput (2x 25G-KR) on each VIC; 25G maximum single-flow bandwidth per vNIC; 50G single-vHBA throughput.

Supported Configurations

■ One mLOM VIC from Table 10 is always required.
■ If a UCSX-ME-V5Q50G-D rear mezzanine VIC card is installed, a UCSX-V5-BRIDGE-D VIC bridge card is included and connects the mLOM to the mezzanine adapter.
■ The UCSX-ME-V5Q50G-D rear mezzanine card has Ethernet connectivity to the IFM using the UCSX-V5-BRIDGE-D and has PCIe Gen4 x16 connectivity toward CPU 2. Additionally, the UCSX-ME-V5Q50G-D provides two PCIe Gen4 x16 links to each X-Fabric.
■ All the connections to Cisco UCS X-Fabric 1 and Cisco UCS X-Fabric 2 are through the Molex Orthogonal Direct (OD) connector on the mezzanine card.
■ The rear mezzanine card has two x16 PCIe links (32 lanes) to each Cisco UCS X-Fabric for I/O expansion to enable resource consumption from the PCIe resource nodes.
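For illustration, the Table 12 figures can be captured in a small lookup (ours, not a Cisco tool); keys pair the VIC configuration with the IFM model:

```python
# Minimal sketch: aggregate Ethernet bandwidth per node (Gbps across both
# IFMs), transcribed from Table 12.
THROUGHPUT_GBPS = {
    ("VIC 15231", "X9108-IFM-100G"): 200,             # 100G per IFM
    ("VIC 15231", "X9108-IFM-25G"): 100,              # 50G per IFM
    ("VIC 15420", "X9108-IFM-25G"): 100,              # 50G per IFM
    ("VIC 15420", "X9108-IFM-100G"): 100,             # VIC 15420 stays at 4x 25G
    ("VIC 15420 + VIC 15422", "X9108-IFM-25G"): 200,  # 100G per IFM
    ("VIC 15420 + VIC 15422", "X9108-IFM-100G"): 200,
}

def node_bandwidth_gbps(vic: str, ifm: str) -> int:
    return THROUGHPUT_GBPS[(vic, ifm)]

print(node_bandwidth_gbps("VIC 15231", "X9108-IFM-100G"))  # 200
```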

STEP 6 CHOOSE OPTIONAL FRONT MEZZANINE ADAPTER


The Cisco UCS X410c M7 Compute Node has one front mezzanine connector that can
accommodate one of the following mezzanine cards:

■ Pass-through controller for up to 6 U.2 NVMe drives
■ RAID controller (RAID 0, 1, 5, 10) for 6 SAS/SATA drives or up to 4 U.2 NVMe drives

NOTE:
■ The Cisco UCS X410c M7 Compute Node can be ordered with or without the front mezzanine adapter. Refer to Table 13 Available Front Mezzanine Adapters.
■ Only one front mezzanine connector per server.

Table 13 Available Front Mezzanine Adapters

Product ID(PID) PID Description Connector Type

UCSX-X10C-PT4F-D Cisco UCS X410c M7 Compute Node compute pass through Front Mezzanine
controller for up to 6 NVMe drives

UCSX-X10C-RAIDF-D Cisco UCS X410c M7 Compute Node RAID controller with LSI Front Mezzanine
3900 for up to 6 SAS/SATA drives or up to 4 U.2 NVMe drives
(SAS/SATA and NVMe drives can be mixed).

STEP 7 CHOOSE OPTIONAL GPU PCIe NODE


Refer to Table 14 for the GPU PCIe node.

Table 14 GPU PCIe Node1

Product ID(PID) PID Description

UCSX-440P-D UCS X-Series Gen4 PCIe node

Notes:
1. Available post first customer ship (FCS)

NOTE: If UCSX-440P-D is selected, then a rear mezzanine adapter is required.

STEP 8 CHOOSE OPTIONAL GPUs


Select GPU Options

The available PCIe node GPU options are listed in Table 15.

Table 15 Available PCIe GPU Cards supported on the PCIe Node

GPU Product ID (PID) PID Description Maximum number of GPUs per node
UCSX-GPU-A16-D NVIDIA A16 PCIE 250W 4X16GB 2
UCSX-GPU-A40-D TESLA A40 RTX, PASSIVE, 300W, 48GB 2
UCSX-GPU-A100-80-D TESLA A100, PASSIVE, 300W, 80GB1 2
Notes:
1. Required power cables are included with the riser cards in the X440p PCIe node.

STEP 9 CHOOSE OPTIONAL DRIVES


The Cisco UCS X410c M7 Compute Node can be ordered with or without drives. The drive options
are:

■ One to six 2.5-inch small form factor SAS/SATA SSDs or PCIe U.2 NVMe drives
— Hot-pluggable
— Sled-mounted

Select one or two drives from the list of supported drives available in Table 16.

Table 16 Available Drive Options

Product ID (PID)    Description    Drive Type/Speed    Size
SAS/SATA SSDs1,2,3
Self-Encrypted Drives (SED)
UCSX-SD76TBKNK9-D 7.6TB Enterprise value SAS SSD (1X DWPD, SED-FIPS) SAS 7.6TB
UCSX-SD38TBKNK9-D 3.8TB Enterprise Value SAS SSD (1X DWPD, SED) SAS 3.8TB
UCSX-SD16TBKNK9-D 1.6TB Enterprise performance SAS SSD (3X DWPD, SED) SAS 1.6TB
UCSXSD960GBKNK9-D 960GB 2.5" Enterprise value 12G SAS SSD (1X endurance, SAS 960GB
FIPS)
UCSXSD800GBKNK9-D 800GB Enterprise performance SAS SSD (3X DWPD, SED) SAS 800GB
UCSXS76TBEM2NK9-D 7.6TB Enterprise value SATA SSD (1X, SED) SATA 7.6TB
UCSXS38TBEM2NK9-D 3.8TB Enterprise value SATA SSD (1X, SED) SATA 3.8TB
UCSXS960GBM2NK9-D 960GB Enterprise value SATA SSD (1X, SED) SATA 960GB
Enterprise Performance SSDs (high endurance, supports up to 3X DWPD (drive writes per day))
UCSXSD800GK3XEP-D 800GB 2.5in Enterprise Performance 12G SAS SSD SAS 12G 800GB
(3X endurance)
UCSX-SD16TK3XEP-D 1.6TB 2.5in Enterprise Performance 12G SAS SSD SAS 12G 1.6TB
(3X endurance)
UCSX-SD32TK3XEP-D 3.2TB 2.5in Enterprise Performance 12G SAS SSD SAS 12G 3.2TB
(3X endurance)
UCSXSD800GS3XEP-D 800GB 2.5in Enterprise Performance 12G SAS SSD SAS 12G 800GB
(3X endurance)
UCSX-SD16TS3XEP-D 1.6TB 2.5in Enterprise Performance 12G SAS SSD SAS 12G 1.6TB
(3X endurance)
UCSX-SD32TS3XEP-D 3.2TB 2.5in Enterprise Performance 12G SAS SSD SAS 12G 3.2TB
(3X endurance)
UCSX-SD19T63XEP-D 1.9TB 2.5 inch Enterprise performance 6G SATA SSD SATA 6G 1.9TB
(3X endurance)
UCSX-SD19TM3XEP-D 1.9TB 2.5 inch Enterprise performance 6G SATA SSD SATA 6G 1.9TB
(3X endurance)
UCSXSD480G63XEP-D 480GB 2.5in Enterprise performance 6G SATA SSD SATA 6G 480GB
(3X endurance)
UCSXSD480GM3XEP-D 480GB 2.5in Enterprise performance 6G SATA SSD SATA 6G 480GB
(3X endurance)
UCSXSD960G63XEP-D 960GB 2.5 inch Enterprise performance 6G SATA SSD SATA 6G 960GB
(3X endurance)

UCSX-SD38T63XEP-D 3.8TB 2.5 in Enterprise performance 6G SATA SSD SATA 6G 3.8TB
(3X endurance)
UCSXSD960GM3XEP-D 960GB 2.5 inch Enterprise performance 6G SATA SSD SATA 6G 960GB
(3X endurance)
Enterprise Value SSDs (Low endurance, supports up to 1X DWPD (drive writes per day))
UCSXSD240GM1XEV-D 240GB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 240GB
UCSXSD960GM1XEV-D 960GB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 960GB
UCSX-SD16TM1XEV-D 1.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 1.6TB
UCSX-SD19TM1XEV-D 1.9TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 1.9TB
UCSX-SD38TM1XEV-D 3.8TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 3.8TB
UCSXSD38T6I1XEV-D 3.8TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 3.8TB
UCSXSD19T6S1XEV-D 1.9TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 1.9TB
UCSXSD38T6S1XEV-D 3.8TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 3.8TB
UCSX-SD76TM1XEV-D 7.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 7.6TB
UCSXSD76T6S1XEV-D 7.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 7.6TB
UCSXS480G6I1XEV-D 480 GB 2.5 inch Enterprise Value 6G SATA Intel SSD SATA 6G 480GB
UCSXS960G6I1XEV-D 960GB 2.5 inch Enterprise Value 6G SATA Intel SSD SATA 6G 960GB
UCSXS960G6S1XEV-D 960GB 2.5 inch Enterprise Value 6G SATA Samsung SSD SATA 6G 960GB
UCSXSD960GK1XEV-D 960GB 2.5in Enter Value 12G SAS Kioxia G1 SSD SAS 12G 960GB
UCSXSD960GS1XEV-D 960GB 2.5in Enter Value 12G SAS Seagate SSD SAS 12G 960GB
UCSX-SD19TK1XEV-D 1.9TB 2.5in Enter Value 12G SAS Kioxia G1 SSD SAS 12G 1.9TB
UCSX-SD19TS1XEV-D 1.9TB 2.5v Enter Value 12G SAS Seagate SSD SAS 12G 1.9TB
UCSX-SD38TK1XEV-D 3.8TB 2.5in Enter Value 12G SAS Kioxia G1 SSD SAS 12G 3.8TB
UCSX-SD38TS1XEV-D 3.8TB 2.5in Enter Value 12G SAS Seagate SSD SAS 12G 3.8TB
UCSX-SD76TK1XEV-D 7.6TB 2.5in Enter Value 12G SAS Kioxia G1 SSD SAS 12G 7.6TB
UCSX-SD15TK1XEV-D 15.3TB 2.5in Enter Value 12G SAS Kioxia G1 SSD SAS 12G 15.3TB
NVMe4,5
UCSX-NVME4-15360D 15.3TB 2.5in U.2 P5520 NVMe High Perf Medium Endurance NVMe U.2 15.3TB
UCSX-NVME4-1600-D 1.6TB 2.5in U.2 P5620 NVMe High Perf High Endurance NVMe U.2 1.6TB
UCSX-NVME4-1920-D 1.9TB 2.5in U.2 P5520 NVMe High Perf Medium Endurance NVMe U.2 1.9TB
UCSX-NVME4-3200-D 3.2TB 2.5in U.2 P5620 NVMe High Perf High Endurance NVMe U.2 3.2TB
UCSX-NVME4-3840-D 3.8TB 2.5in U.2 P5520 NVMe High Perf Medium Endurance NVMe U.2 3.8TB
UCSX-NVME4-6400-D 6.4TB 2.5in U.2 P5620 NVMe High Perf High Endurance NVMe U.2 6.4TB
UCSX-NVME4-7680-D 7.6TB 2.5in U.2 P5520 NVMe High Perf Medium Endurance NVMe U.2 7.6TB
UCSX-NVMEXPI400-D 400GB 2.5in U.2 Intel P5800X Optane NVMe Extreme NVMe U.2 400GB
Perform SSD
UCSX-NVMEXPI800-D 800GB 2.5in U.2Intel P5800X Optane NVMe Extreme NVMe U.2 800GB
Perform SSD
NOTE: Cisco uses solid state drives from several vendors. All solid state drives are subject to physical write
limits and have varying maximum usage limitation specifications set by the manufacturer. Cisco will not
replace any solid state drives that have exceeded any maximum usage specifications set by Cisco or the
manufacturer, as determined solely by Cisco.

Notes:
1. SSD drives require the UCSX-X10C-RAIDF-D front mezzanine adapter
2. For SSD drives to be in a RAID group, two identical SSDs must be used in the group.
3. If SSDs are in JBOD Mode, the drives do not need to be identical.
4. NVMe drives require a front mezzanine adapter: either the UCSX-X10C-PT4F-D pass-through controller or the UCSX-X10C-RAIDF-D RAID controller.
5. A maximum of 4x NVMe drives can be ordered with the RAID controller.
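The notes above amount to a handful of checks; a minimal sketch (ours, illustrative only) that validates a proposed front-drive configuration:

```python
# Minimal sketch: check a front-drive selection against notes 1-5 above.
# 'controller' is "RAID" (UCSX-X10C-RAIDF-D) or "PT" (UCSX-X10C-PT4F-D).
def validate_drives(controller: str, sas_sata_pids: list[str],
                    nvme_pids: list[str], raid_group: bool) -> None:
    if len(sas_sata_pids) + len(nvme_pids) > 6:
        raise ValueError("at most 6 front-panel drives")
    if sas_sata_pids and controller != "RAID":
        raise ValueError("SSDs require the UCSX-X10C-RAIDF-D RAID controller")
    if controller == "RAID" and len(nvme_pids) > 4:
        raise ValueError("max 4 NVMe drives with the RAID controller")
    if raid_group and len(set(sas_sata_pids)) > 1:
        raise ValueError("drives in a RAID group must be identical")

# Two identical SAS SSDs in a RAID group plus two NVMe drives: valid.
validate_drives("RAID", ["UCSX-SD16TK3XEP-D"] * 2,
                ["UCSX-NVME4-1920-D"] * 2, raid_group=True)
```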

STEP 10 ORDER M.2 SATA SSDs AND RAID CONTROLLER


■ Cisco 6Gb/s SATA Boot-Optimized M.2 RAID Controller (included): Boot-Optimized RAID controller
(UCSX-M2-HWRD-FPS) for hardware RAID across two SATA M.2 storage modules. The Boot-Optimized RAID
controller plugs into the motherboard and the M.2 SATA drives plug into the Boot-Optimized RAID
controller.

NOTE:
■ The UCSX-M2-HWRD-FPS is auto included with the server configuration
■ The UCSX-M2-HWRD-FPS controller supports RAID 1 and JBOD mode and is available
only with 240GB and 960GB M.2 SATA SSDs.
■ Cisco IMM is supported for configuring volumes and monitoring the controller
and installed SATA M.2 drives
■ Hot-plug replacement is not supported. The compute node must be powered off to
replace.
■ The Boot-Optimized RAID controller supports VMware, Windows, and Linux Operating
Systems

Table 17 Boot-Optimized RAID controller (auto included)

Product ID (PID) PID Description

UCSX-M2-HWRD-FPS UCSX Front panel with M.2 RAID controller for SATA drives

■ Select Cisco M.2 SATA SSDs: Order one or two matching M.2 SATA SSDs. This connector accepts the
boot-optimized RAID controller (see Table 17). Each boot-optimized RAID controller can accommodate
up to two SATA M.2 SSDs shown in Table 18.

NOTE:
■ Each boot-optimized RAID controller can accommodate up to two SATA M.2 SSDs
shown in Table 18. The boot-optimized RAID controller plugs into the
motherboard.
■ It is recommended that M.2 SATA SSDs be used as boot-only devices.
■ The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not
supported.

Table 18 M.2 SATA SSDs

Product ID (PID) PID Description


UCS-M2-240GB-D 240GB M.2 SATA SSD
UCS-M2-960GB-D 960GB M.2 SATA SSD
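Since the controller mirrors a matching pair (RAID 1), usable boot capacity equals one drive; a minimal sketch (ours, illustrative only) of that arithmetic:

```python
# Minimal sketch: usable M.2 boot capacity behind UCSX-M2-HWRD-FPS.
def m2_boot_capacity_gb(drive_gb: int, count: int, raid1: bool) -> int:
    if drive_gb not in (240, 960):
        raise ValueError("only 240GB and 960GB M.2 SATA SSDs are supported")
    if count not in (1, 2):
        raise ValueError("the controller accepts one or two M.2 SATA SSDs")
    if raid1 and count != 2:
        raise ValueError("RAID 1 requires two matching drives")
    return drive_gb if raid1 else drive_gb * count

print(m2_boot_capacity_gb(960, 2, raid1=True))  # 960: mirrored pair
```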

STEP 11 CHOOSE OPTIONAL TRUSTED PLATFORM MODULE


Trusted Platform Module (TPM) is a computer chip or microcontroller that can securely store
artifacts used to authenticate the platform or Cisco UCS X410c M7 Compute Node. These
artifacts can include passwords, certificates, or encryption keys. A TPM can also be used to
store platform measurements that help ensure that the platform remains trustworthy.
Authentication (ensuring that the platform can prove that it is what it claims to be) and
attestation (a process helping to prove that a platform is trustworthy and has not been
breached) are necessary steps to ensure safer computing in all environments.

Table 19 Available TPM Option

Product ID (PID) Description

UCSX-TPM-002C-D Trusted Platform Module 2.0, FIPS140-2 Compliant, UCS M7 server

UCSX-TPM-OPT-OUT1 OPT OUT, TPM 2.0, TCG, FIPS140-2, CC EAL4+ Certified

Notes:
1. Please note Microsoft certification requires a TPM 2.0 for bare-metal or guest VM deployments. Opt-out of the
TPM 2.0 voids the Microsoft certification.

NOTE:
■ The TPM module used in this system conforms to TPM v2.0 as defined by the
Trusted Computing Group (TCG).
■ TPM installation is supported after-factory. However, a TPM installs with a
one-way screw and cannot be replaced, upgraded, or moved to another compute
node. If a Cisco UCS X410c M7 Compute Node with a TPM is returned, the
replacement Cisco UCS X410c M7 Compute Node must be ordered with a new
TPM. If there is no existing TPM in the Cisco UCS X410c M7 Compute Node, you
can install a TPM 2.0. Refer to the following document for Installation location
and instructions:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/x410c-m7/install/b-cisco-ucs-x410c-m7-install-guide.html

STEP 12 CHOOSE OPERATING SYSTEM AND VALUE-ADDED SOFTWARE


■ Cisco Software (Table 20)
■ Operating System (Table 21)

NOTE: See this link for operating system guidance:


https://ucshcltool.cloudapps.cisco.com/public/

Table 20 OEM Software

Product ID (PID) PID Description

VMware vCenter

VMW-VCS-STD-D1A VMware vCenter 7 Server Standard, 1 yr support required

VMW-VCS-STD-D3A VMware vCenter 7 Server Standard, 3 yr support required

VMW-VCS-STD-D5A VMware vCenter 7 Server Standard, 5 yr support required

VMW-VCS-FND-D1A VMware vCenter Server 7 Foundation (4 Host), 1 yr supp reqd

VMW-VCS-FND-D3A VMware vCenter Server 7 Foundation (4 Host), 3 yr supp reqd

VMW-VCS-FND-D5A VMware vCenter Server 7 Foundation (4 Host), 5 yr supp reqd

Table 21 Operating System

Product ID (PID) PID Description

Microsoft Windows Server

MSWS-22-ST16CD Windows Server 2022 Standard (16 Cores/2 VMs)

MSWS-22-ST16CD-NS Windows Server 2022 Standard (16 Cores/2 VMs) - No Cisco SVC

MSWS-22-DC16CD Windows Server 2022 Data Center (16 Cores/Unlimited VMs)

MSWS-22-DC16CD-NS Windows Server 2022 DC (16 Cores/Unlim VMs) - No Cisco SVC

MSWS-19-ST16CD Windows Server 2019 Standard (16 Cores/2 VMs)

MSWS-19-ST16CD-NS Windows Server 2019 Standard (16 Cores/2 VMs) - No Cisco SVC

MSWS-19-DC16CD Windows Server 2019 Data Center (16 Cores/Unlimited VMs)

MSWS-19-DC16CD-NS Windows Server 2019 DC (16 Cores/Unlim VMs) - No Cisco SVC

Red Hat

RHEL-2S2V-D1A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 1-Yr Support Req

RHEL-2S2V-D3A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 3-Yr Support Req


RHEL-2S2V-D5A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 5-Yr Support Req

RHEL-VDC-2SUV-D1A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 1 Yr Supp Req

RHEL-VDC-2SUV-D3A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 3 Yr Supp Req

RHEL-VDC-2SUV-D5A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 5 Yr Supp Req

Red Hat Ent Linux/ High Avail/ Res Strg/ Scal

RHEL-2S2V-D1S Red Hat Enterprise Linux (1-2 CPU,1-2 VN); Prem 1Yr SnS Reqd

RHEL-2S2V-D3S Red Hat Enterprise Linux (1-2 CPU,1-2 VN); Prem 3Yr SnS Reqd

RHEL-2S-HA-D1S RHEL High Availability (1-2 CPU); Premium 1-yr SnS Reqd

RHEL-2S-HA-D3S RHEL High Availability (1-2 CPU); Premium 3-yr SnS Reqd

RHEL-2S-RS-D1S RHEL Resilent Storage (1-2 CPU); Premium 1-yr SnS Reqd

RHEL-2S-RS-D3S RHEL Resilent Storage (1-2 CPU); Premium 3-yr SnS Reqd

RHEL-VDC-2SUV-D1S RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 1 Yr SnS Reqd

RHEL-VDC-2SUV-D3S RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 3 Yr SnS Reqd

Red Hat SAP

RHEL-SAP-2S2V-D1S RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 1-Yr SnS Reqd

RHEL-SAP-2S2V-D3S RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 3-Yr SnS Reqd

RHEL-SAPSP-D3S RHEL SAP Solutions Premium - 3 Years

RHEL-SAPSS-D3S RHEL SAP Solutions Standard - 3 Years

VMware

VMW-VSP-STD-D1A VMware vSphere 7 Std (1 CPU, 32 Core) 1-yr, Support Required

VMW-VSP-STD-D3A VMware vSphere 7 Std (1 CPU, 32 Core) 3-yr, Support Required

VMW-VSP-STD-D5A VMware vSphere 7 Std (1 CPU, 32 Core) 5-yr, Support Required

VMW-VSP-EPL-D1A VMware vSphere 7 Ent Plus (1 CPU, 32 Core) 1Yr, Support Reqd

VMW-VSP-EPL-D3A VMware vSphere 7 Ent Plus (1 CPU, 32 Core) 3Yr, Support Reqd

VMW-VSP-EPL-D5A VMware vSphere 7 Ent Plus (1 CPU, 32 Core) 5Yr, Support Reqd

SUSE

SLES-2S2V-D1A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 1-Yr Support Req

SLES-2S2V-D3A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 3-Yr Support Req

SLES-2S2V-D5A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 5-Yr Support Req


SLES-2SUVM-D1A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; 1Y Supp Req

SLES-2SUVM-D3A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; 3Y Supp Req

SLES-2SUVM-D5A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; 5Y Supp Req

SLES-2S-LP-D1A SUSE Linux Live Patching Add-on (1-2 CPU); 1yr Support Req

SLES-2S-LP-D3A SUSE Linux Live Patching Add-on (1-2 CPU); 3yr Support Req

SLES-2S2V-D1S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 1-Yr SnS

SLES-2S2V-D3S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 3-Yr SnS

SLES-2S2V-D5S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 5-Yr SnS

SLES-2SUVM-D1S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; Prio 1Y SnS

SLES-2SUVM-D3S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; Prio 3Y SnS

SLES-2SUVM-D5S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; Prio 5Y SnS

SLES-2S-HA-D1S SUSE Linux High Availability Ext (1-2 CPU); 1yr SnS

SLES-2S-HA-D3S SUSE Linux High Availability Ext (1-2 CPU); 3yr SnS

SLES-2S-HA-D5S SUSE Linux High Availability Ext (1-2 CPU); 5yr SnS

SLES-2S-GC-D1S SUSE Linux GEO Clustering for HA (1-2 CPU); 1yr Sns

SLES-2S-GC-D3S SUSE Linux GEO Clustering for HA (1-2 CPU); 3yr SnS

SLES-2S-GC-D5S SUSE Linux GEO Clustering for HA (1-2 CPU); 5yr SnS

SLES-2S-LP-D1S SUSE Linux Live Patching Add-on (1-2 CPU); 1yr SnS Required

SLES-2S-LP-D3S SUSE Linux Live Patching Add-on (1-2 CPU); 3yr SnS Required

SLES and SAP

SLES-SAP-2S2V-D1S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 1-Yr SnS

SLES-SAP-2S2V-D3S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 3-Yr SnS

SLES-SAP-2S2V-D5S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 5-Yr SnS

SLES-SAP-2S2V-D1A SLES for SAP Apps w/ HA (1-2 CPU, 1-2 VM); 1-Yr Support Reqd

SLES-SAP-2S2V-D3A SLES for SAP Apps w/ HA (1-2 CPU, 1-2 VM); 3-Yr Support Reqd

SLES-SAP-2S2V-D5A SLES for SAP Apps w/ HA (1-2 CPU, 1-2 VM); 5-Yr Support Reqd

STEP 13 CHOOSE OPTIONAL OPERATING SYSTEM MEDIA KIT

Select the optional operating system media listed in Table 22.

Table 22 OS Media

Product ID (PID) PID Description

MSWS-19-ST16CD-RM Windows Server 2019 Stan (16 Cores/2 VMs) Rec Media DVD Only

MSWS-19-DC16CD-RM Windows Server 2019 DC (16Cores/Unlim VM) Rec Media DVD Only

MSWS-22-ST16CD-RM Windows Server 2022 Stan (16 Cores/2 VMs) Rec Media DVD Only

MSWS-22-DC16CD-RM Windows Server 2022 DC (16Cores/Unlim VM) Rec Media DVD Only

SUPPLEMENTAL MATERIAL
Simplified Block Diagram
A simplified block diagram of the Cisco UCS X410c M7 Compute Node system board is shown in Figure 6.

Figure 6 Cisco UCS X410c M7 Compute Node Simplified Block Diagram (IFMs 25G with Drives)

[Block diagram: four CPUs (CPU 1 and CPU 3 front, CPU 2 and CPU 4 rear) are joined by UPI links. The front mezzanine adapter's RAID controllers serve disks 1-6 over PCIe Gen4 x16, and a mini-storage connector carries two local M.2 drives over PCIe Gen2 x2. The rear mezzanine and rear mLOM adapters each attach over PCIe Gen4 x16 and are joined by the bridge adapter; their main ASICs drive 2x 25G-KR lanes out the FEM-1/FEM-2 and IFM-1/IFM-2 Orthogonal Direct connectors.]
Figure 7 Cisco UCS X410c M7 Compute Node Simplified Block Diagram (IFMs 100G with Drives)
[Block diagram: same four-CPU layout as Figure 6, with the rear mezzanine slot holding the PCIe Mezz card for X-Fabric and the rear mLOM adapter's main ASIC driving a 100G-KR link to each of the IFM-1 and IFM-2 Orthogonal Direct connectors.]

System Board
A top view of the Cisco UCS X410c M7 Compute Node system board is shown in Figure 8.

Figure 8 Cisco UCS X410c M7 Compute Node System Board

1 Front mezzanine module slot
2 CPU 1 slot
3 DIMM slots
4 CPU 2 slot
5 Motherboard USB connector
6 Rear mezzanine slot, which supports X-Series mezzanine cards, such as the VIC 15422
7 Bridge Card slot, which connects the rear mezzanine slot and the mLOM/VIC slot
8 mLOM/VIC slot that supports zero or one Cisco VIC or Cisco X-Series 100 Gbps mLOM

Please refer to the Cisco UCS X410c M7 Compute Node Installation Guide for installation
procedures.

UPGRADING or REPLACING CPUs

NOTE: Before servicing any CPU, do the following:


■ Decommission and power off the compute node.
■ Slide the Cisco UCS X410c M7 Compute Node out from its chassis.
■ Remove the top cover.

To replace an existing CPU, follow these steps:

(1) Have the following tools and materials available for the procedure:

■ T-30 Torx driver—Supplied with replacement CPU.


■ #1 flat-head screwdriver—Supplied with replacement CPU.
■ CPU assembly tool—Supplied with replacement CPU. Can be ordered separately as Cisco PID
UCSX-CPUAT=.
■ Heatsink cleaning kit—Supplied with replacement CPU. Can be ordered separately as Cisco
PID UCSX-HSCK=.
■ Thermal interface material (TIM)—Syringe supplied with replacement CPU. Can be ordered
separately as Cisco PID UCSX-CPU-TIM=.

(2) Order the appropriate replacement CPU from Available CPUs on page 11.

(3) Carefully remove and replace the CPU and heatsink in accordance with the instructions found in "Cisco UCS X410c M7 Compute Node Installation and Service Note," found at:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/x410c-m7/install/b-cisco-ucs-x410c-m7-install-guide.html

To add a new CPU, follow these steps:

(1) Have the following tools and materials available for the procedure:

■ T-30 Torx driver—Supplied with new CPU.
■ #1 flat-head screwdriver—Supplied with new CPU.
■ CPU assembly tool—Supplied with new CPU. Can be ordered separately as Cisco PID UCSX-CPUAT=.
■ Thermal interface material (TIM)—Syringe supplied with replacement CPU. Can be ordered separately as Cisco PID UCSX-CPU-TIM=.

(2) Order the appropriate new CPU from Table 4 on page 11.

(3) Order one heat sink for each new CPU. Order PID UCSX-C-M7-HS-F= for the front CPU socket
and PID UCSX-C-M6-HS-R= for the rear CPU socket.

(4) Carefully install the CPU and heatsink in accordance with the instructions found in "Cisco UCS X410c M7 Compute Node Installation and Service Note," found at:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/x410c-m7/install/b-cisco-ucs-x410c-m7-install-guide.html

UPGRADING or REPLACING MEMORY


NOTE: Before servicing any DIMM or PMem, do the following:
■ Decommission and power off the compute node.
■ Slide the Cisco UCS X410c M7 Compute Node out from its chassis.
■ Remove the top cover.

To add or replace DIMMs or PMem, follow these steps:

Step 1 Open both DIMM connector latches.

Step 2 Press evenly on both ends of the DIMM until it clicks into place in its slot.

Note: Ensure that the notch in the DIMM aligns with the slot. If the notch is misaligned, it is
possible to damage the DIMM, the slot, or both.

Step 3 Press the DIMM connector latches inward slightly to seat them fully.

Step 4 Populate all slots with a DIMM or DIMM blank. A slot cannot be empty.

Figure 9 Replacing Memory



For additional details on replacing or upgrading DIMMs, see "Cisco UCS X410c M7 Compute Node Installation and Service Note," found at:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/x410c-m7/install/b-cisco-ucs-x410c-m7-install-guide.html

TECHNICAL SPECIFICATIONS
Dimensions and Weight

Table 23 Cisco UCS X410c M7 Compute Node Dimensions and Weight

Parameter Value

Height 3.67 inches (93.22 mm)

Width 11.28 inches (286.52 mm)

Depth 23.8 inches (604.52 mm)

Weight    The weight depends on the components installed:
■ Minimally configured compute node weight: 25 lb (11.34 kg)
■ Fully configured compute node weight: 42 lb (19.05 kg)

Environmental Specifications

Table 24 Cisco UCS X410c M7 Compute Node Environmental Specifications

Parameter Value

Operating temperature    Supported operating temperatures depend on the compute node's memory:
■ For 256GB DDR5 DIMMs: 50° to 89.6°F (10° to 32°C) at 0 to 10,000 ft
■ All other memory configurations: 50° to 95°F (10° to 35°C) at 0 to 10,000 ft

Non-operating temperature -40° to 149°F (–40° to 65°C)

Operating humidity 5% to 90% noncondensing

Non-operating humidity 5% to 93% noncondensing

Operating altitude    0 to 10,000 ft (0 to 3000m); maximum ambient temperature decreases by 1°C per 300m

Non-operating altitude 40,000 ft (12,000m)
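The altitude derating in Table 24 is linear; a minimal sketch (ours, illustrative only) of the arithmetic, using the 1°C-per-300m figure from the operating altitude row:

```python
# Minimal sketch: derate maximum ambient temperature by 1C per 300 m of
# altitude (Table 24), valid over the 0 to 3000 m operating range.
def max_ambient_c(base_max_c: float, altitude_m: float) -> float:
    if not 0 <= altitude_m <= 3000:
        raise ValueError("operating altitude is 0 to 3000 m (10,000 ft)")
    return base_max_c - altitude_m / 300.0

# Standard memory configurations (35C base) at 1500 m: 30C max ambient.
print(max_ambient_c(35.0, 1500))  # 30.0
```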

For configuration-specific power specifications, use the Cisco UCS Power Calculator at:

http://ucspowercalc.cisco.com
