Cisco UCS X410c M7 Compute Node Spec Sheet
OVERVIEW
The Cisco UCS X-Series Modular System simplifies your data center, adapting to the unpredictable needs of
modern applications while also providing for traditional scale-out and enterprise workloads. It reduces the
number of server types to maintain, helping to improve operational efficiency and agility as it helps reduce
complexity. Powered by the Cisco Intersight™ cloud operations platform, it shifts your thinking from
administrative details to business outcomes with hybrid cloud infrastructure that is assembled from the
cloud, shaped to your workloads, and continuously optimized.
The Cisco UCS X410c M7 Compute Node is the computing device to integrate into the Cisco UCS X-Series
Modular System. Up to eight compute nodes can reside in the 7-Rack-Unit (7RU) Cisco UCS X9508 Chassis,
offering one of the highest densities of compute, IO, and storage per rack unit in the industry.
The Cisco UCS X410c M7 Compute Node harnesses the power of the latest 4th Gen Intel® Xeon® Scalable
Processors (Sapphire Rapids), and offers the following:
■ CPU: Four 4th Gen Intel® Xeon® Scalable Processors (codenamed Sapphire Rapids) with up to 60 cores
per processor.
■ Memory: Up to 16TB with 64 x 256GB1 DDR5-4800 MT/s DIMMs in a 4-socket configuration.
■ Storage: Up to 6 hot-pluggable, Solid-State Drives (SSDs), or Non-Volatile Memory Express (NVMe)
2.5-inch drives with a choice of enterprise-class Redundant Array of Independent Disks (RAID) or
pass-through controllers with four lanes each of PCIe Gen 4 connectivity and up to 2 M.2 SATA drives
for flexible boot and local storage capabilities.
■ mLOM virtual interface cards:
■ Cisco UCS Virtual Interface Card (VIC) 15420 occupies the server's Modular LAN on
Motherboard (mLOM) slot, enabling up to 50Gbps (2 x25Gbps) of unified fabric connectivity
to each of the chassis Intelligent Fabric Modules (IFMs) for 100Gbps connectivity per server.
■ Cisco UCS Virtual Interface Card (VIC) 15231 occupies the server's Modular LAN on
Motherboard (mLOM) slot, enabling up to 100Gbps of unified fabric connectivity to each of
the chassis Intelligent Fabric Modules (IFM) for 200Gbps (2x 100Gbps) connectivity per
server.
■ Optional Mezzanine card:
■ Cisco UCS Virtual Interface Card (VIC) 15422 can occupy the server's mezzanine slot at the
bottom rear of the chassis. An included bridge card extends this VIC's 100Gbps (4 x 25Gbps)
of network connections up to the mLOM slot and out through the mLOM's IFM connectors,
bringing the total bandwidth to 100Gbps per fabric (for a total of 200Gbps per server). In
addition to IFM connectivity, the VIC 15422 I/O connectors link to Cisco UCS X-Fabric technology.
■ Cisco UCS PCI Mezz card for X-Fabric can occupy the server's mezzanine slot at the bottom
rear of the chassis. This card's I/O connectors link to Cisco UCS X-Fabric modules and enable
connectivity to the X440p PCIe Node.
■ Security: Includes secure boot silicon root of trust FPGA, ACT2 anti-counterfeit provisions, and
optional Trusted Platform Module (TPM).
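The per-server bandwidth figures quoted above follow from simple lane arithmetic. The sketch below is illustrative only (not a Cisco tool) and shows how the 100Gbps and 200Gbps totals are derived for the mLOM and mezzanine VIC options.

```python
# Illustrative arithmetic only: per-server unified fabric bandwidth
# derived from the per-IFM link counts quoted above.

IFMS_PER_CHASSIS = 2  # each compute node connects to two Intelligent Fabric Modules


def per_server_bandwidth_gbps(lanes_per_ifm: int, gbps_per_lane: int) -> int:
    """Bandwidth per fabric (one IFM) times the number of IFMs."""
    per_fabric = lanes_per_ifm * gbps_per_lane
    return per_fabric * IFMS_PER_CHASSIS


# VIC 15420 (mLOM): 2 x 25Gbps to each IFM -> 50Gbps per fabric, 100Gbps per server
print(per_server_bandwidth_gbps(lanes_per_ifm=2, gbps_per_lane=25))   # 100

# VIC 15231 (mLOM): 1 x 100Gbps to each IFM -> 100Gbps per fabric, 200Gbps per server
print(per_server_bandwidth_gbps(lanes_per_ifm=1, gbps_per_lane=100))  # 200

# Adding the VIC 15422 mezzanine doubles the 25G lanes per fabric:
# (2 + 2) x 25Gbps = 100Gbps per fabric, 200Gbps per server
print(per_server_bandwidth_gbps(lanes_per_ifm=4, gbps_per_lane=25))   # 200
```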
Figure 1 on page 5 shows a front view of the Cisco UCS X410c M7 Compute Node.
Notes:
1. 256GB DIMMs are available post first customer ship (FCS).
DETAILED VIEWS
Cisco UCS X410c M7 Compute Node Front View
Figure 2 shows a front view of the Cisco UCS X410c M7 Compute Node.
Figure 2 Cisco UCS X410c M7 Compute Node Front View (Drives option)
Capability/Feature Description

Chassis
The Cisco UCS X410c M7 Compute Node mounts in a Cisco UCS X9508 chassis.

CPU
■ Four 4th Gen Intel® Xeon® Scalable Processors (codenamed Sapphire Rapids).
■ Each CPU has 8 channels with up to 2 DIMMs per channel, for up to 16 DIMMs per CPU.
■ UPI Links: Up to four at 16 GT/s.

Mezzanine Adapter (Rear)
■ An optional Cisco UCS Virtual Interface Card 15422 can occupy the server's mezzanine slot at the
bottom of the chassis. A bridge card extends this VIC's 2x 50Gbps of network connections up to the
mLOM slot and out through the mLOM's IFM connectors, bringing the total bandwidth to 100Gbps per
fabric, for a total of 200Gbps per server.
■ An optional UCS PCIe Mezz card for X-Fabric is also supported in the server's mezzanine slot. This
card's I/O connectors link to the Cisco UCS X-Fabric modules for UCS X-Series Gen4 PCIe node access.

mLOM
The modular LAN on motherboard (mLOM) card (either the Cisco UCS VIC 15231 or the VIC 15420) is
located at the rear of the compute node.
■ The Cisco UCS VIC 15420 is a Cisco-designed PCI Express (PCIe) based card that supports two
2x25G-KR network interfaces to provide Ethernet communication to the network by means of the
Intelligent Fabric Modules (IFMs) in the Cisco UCS X9508 chassis. The Cisco UCS VIC 15420 mLOM can
connect to the rear mezzanine adapter card with a bridge connector.
■ The Cisco UCS VIC 15231 is a Cisco-designed PCI Express (PCIe) based card that supports two
100G-KR network interfaces to provide Ethernet communication to the network by means of the
Intelligent Fabric Modules (IFMs) in the Cisco UCS X9508 chassis.

Additional Storage
Dual 80 mm SATA 3.0 M.2 cards (up to 960 GB per card) on a boot-optimized hardware RAID controller.

Front Panel Interfaces
OCuLink console port. Note that an adapter cable is required to connect the OCuLink port to the
transition serial USB and video (SUV) octopus cable.

Power subsystem
Power is supplied from the Cisco UCS X9508 chassis power supplies. The Cisco UCS X410c M7 Compute
Node consumes a maximum of 2500W.

Integrated management processor
The built-in Cisco Integrated Management Controller enables monitoring of Cisco UCS X410c M7 Compute
Node inventory, health, and system event logs.

ACPI
Advanced Configuration and Power Interface (ACPI) 6.2 standard supported. ACPI states S0 and S5 are
supported. There is no support for states S1 through S4.

Management
Cisco Intersight software (SaaS, Virtual Appliance, and Private Virtual Appliance).

Fabric Interconnect
Compatible with the Cisco UCS 6454, 64108, and 6536 fabric interconnects.

Chassis
Compatible with the Cisco UCS X9508 X-Series Server Chassis.
Notes:
1. Available post first customer ship (FCS).
■ STEP 1 CHOOSE BASE Cisco UCS X410c M7 Compute Node SKU, page 10
■ STEP 2 CHOOSE CPU(S), page 11
■ STEP 3 CHOOSE MEMORY, page 13
■ STEP 4 CHOOSE REAR mLOM ADAPTER, page 18
■ STEP 5 CHOOSE OPTIONAL REAR MEZZANINE VIC/BRIDGE ADAPTERS, page 21
■ STEP 6 CHOOSE OPTIONAL FRONT MEZZANINE ADAPTER, page 23
■ STEP 7 CHOOSE OPTIONAL GPU PCIe NODE, page 24
■ STEP 8 CHOOSE OPTIONAL GPUs, page 25
■ STEP 9 CHOOSE OPTIONAL DRIVES, page 26
■ STEP 10 ORDER M.2 SATA SSDs AND RAID CONTROLLER, page 29
■ STEP 11 CHOOSE OPTIONAL TRUSTED PLATFORM MODULE, page 30
■ STEP 12 CHOOSE OPERATING SYSTEM AND VALUE-ADDED SOFTWARE, page 31
■ STEP 13 CHOOSE OPTIONAL OPERATING SYSTEM MEDIA KIT, page 34
■ SUPPLEMENTAL MATERIAL, page 35
Verify the product ID (PID) of the Cisco UCS X410c M7 Compute Node as shown in Table 3.
UCSX-X410C-M7 Cisco UCS X410c M7 Compute Node 4S Intel 4th Gen CPU without CPU, memory, drive bays, drives, VIC adapter, or mezzanine adapters (ordered as a UCS X9508 chassis option)
UCSX-X410C-M7-U Cisco UCS X410c M7 Compute Node 4S Intel 4th Gen CPU without CPU, memory, drive bays, drives, VIC adapter, or mezzanine adapters (ordered standalone)
A base Cisco UCS X410c M7 Compute Node ordered in Table 3 does not include any components
or options. They must be selected during product ordering.
Please follow the steps on the following pages to order components such as the following, which
are required in a functional compute node:
■ CPUs
■ Memory
■ Cisco storage RAID or passthrough controller with drives (or blank, for no local drive
support)
■ SAS, SATA, NVMe, M.2, or U.2 drives
■ Cisco adapters (such as the 15000 series VIC or Bridge)
■ The 4th Gen Intel® Xeon® Scalable Processors (codenamed Sapphire Rapids) are paired with the
Intel® C741 series chipset
■ Up to 60 cores
■ Cache size of up to 112.50 MB
■ Power: Up to 350 Watts
■ UPI Links: Up to four at 16 GT/s
Select CPUs
Supported Configurations
■ Choose four identical CPUs from any one of the rows of Table 4 Available CPUs, page 11
DRAM DIMM type: RDIMM (Registered DDR5 DIMM with on die ECC)
Memory DIMM organization: Eight memory DIMM channels per CPU; up to 2 DIMMs per channel
DRAM DIMM densities and ranks: 16GB 1Rx8, 32GB 1Rx4, 64GB 2Rx4, 128GB 4Rx4, 256GB1 8Rx4
Notes:
1. 256GB DIMM Available post first customer ship (FCS)
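The maximum capacities implied by these DIMM densities follow from the slot count (8 channels x 2 DIMMs per channel per CPU, four CPUs). The snippet below is an illustrative calculation only, not a Cisco sizing tool.

```python
# Illustrative calculation of maximum memory per DIMM density:
# 8 channels x 2 DIMMs per channel = 16 DIMMs per CPU; 4 CPUs = 64 DIMM slots.

DIMMS_PER_CPU = 8 * 2
CPUS = 4
SLOTS = DIMMS_PER_CPU * CPUS  # 64 DIMM slots total

for density_gb in (16, 32, 64, 128, 256):
    total_gb = density_gb * SLOTS
    print(f"{density_gb:>4} GB DIMMs x {SLOTS} slots = {total_gb} GB ({total_gb / 1024:.0f} TB)")

# 256 GB DIMMs x 64 slots = 16384 GB (16 TB), matching the 16TB maximum quoted in the
# overview (256GB DIMMs available post-FCS).
```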
Select the memory configuration and whether or not you want the memory mirroring option.
The available memory DIMMs and mirroring option are listed in Table 6.
DRAMs
UCSX-MRX16G1RE1 16GB DDR5-4800 RDIMM 1Rx8 (16Gb)
UCSX-MRX32G1RE1 32GB DDR5-4800 RDIMM 1Rx4 (16Gb)
UCSX-MRX64G2RE1 64GB DDR5-4800 RDIMM 2Rx4 (16Gb)
UCSX-MR128G4RE1 128GB DDR5-4800 RDIMM 4Rx4 (16Gb)
UCSX-MR256G8RE1 256GB DDR5-4800 RDIMM 8Rx4 (16Gb) (see Note 1)
Memory Mirroring Option
N01-MMIRRORD Memory mirroring option
Accessories/spare included with Memory configuration:
■ UCS-DDR5-BLK2 is auto-included for the unselected DIMM slots
Notes:
1. Available post first customer ship (FCS).
2. Any empty DIMM slot must be populated with a DIMM blank to maintain proper cooling airflow.
Table 8 Supported DIMM mixing and population across 2 slots in each channel
Notes:
1. Only 6 or 8 channels are allowed (for 2, 4, or 8 DIMMs you would just populate 1 DPC on 2, 4, or 8 channels)
2. When mixing two different DIMM densities, all 8 channels per CPU must be populated. Use of fewer than 8
channels (16 slots per CPU) is not supported.
3. Available post first customer ship (FCS)
■ Memory Limitations:
■ Memory on every CPU socket shall be configured identically.
■ Refer to Table 7 and Table 8 for DIMM population and DIMM mixing rules.
■ Cisco memory from previous generation servers (DDR3 and DDR4) is not supported with the
M7 servers.
■ For best performance, observe the following:
■ For optimum performance, populate at least one DIMM per memory channel per CPU. When
one DIMM per channel is used, it must be populated in DIMM slot 1 (blue slot farthest away
from the CPU) of a given channel.
■ The maximum 2 DPC speed is 4400 MT/s; refer to Table 9 for details.
Table 9 column headings: CPU speed / DIMM speed; DDR5 DIMM 1 DPC; DDR5 DIMM 2 DPC
NOTE: For full details on supported memory configurations see the M7 Memory Guide
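As a rough illustration of what the 1 DPC and 2 DPC speeds mean for peak throughput, the sketch below applies the generic DDR peak-bandwidth formula (transfer rate x 8 bytes per transfer x channels). These are theoretical maxima under the assumption that the DIMMs run at their rated speed; they are not Cisco performance claims.

```python
# Theoretical peak memory bandwidth per CPU socket (illustrative only):
# peak = transfer rate (MT/s) x 8 bytes per transfer x number of channels

CHANNELS_PER_CPU = 8


def peak_gb_per_s(mt_per_s: int, channels: int = CHANNELS_PER_CPU) -> float:
    """Peak bandwidth in GB/s for DDR5 at the given transfer rate."""
    return mt_per_s * 8 * channels / 1000  # MB/s -> GB/s


print(peak_gb_per_s(4800))  # 307.2 GB/s per CPU at 1 DPC (DDR5-4800)
print(peak_gb_per_s(4400))  # 281.6 GB/s per CPU at 2 DPC (4400 MT/s maximum)
```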
UCSX-ML-V5D200G-D Cisco UCS VIC 15231 2x100/200G mLOM for X410c M7 Compute Node (mLOM slot)
UCSX-ML-V5Q50G-D Cisco UCS VIC 15420 4x25G secure boot mLOM for X Compute Node (mLOM slot)
NOTE:
■ The VIC 15420 and VIC 15231 are supported with both the X9108-IFM-25G and the
X9108-IFM-100G. The VIC 15420 operates at 4x 25G with both the X9108-IFM-25G and the
X9108-IFM-100G, while the VIC 15231 operates at 4x 25G with the X9108-IFM-25G and at
2x 100G with the X9108-IFM-100G.
■ The mLOM adapter is mandatory for Ethernet connectivity to the network by means of
the IFMs and has x16 PCIe Gen4 connectivity towards CPU 1 with either the Cisco UCS
VIC 15420 or the Cisco UCS VIC 15231.
■ There is no backplane in the Cisco UCS X9508 chassis; thus, the compute nodes
directly connect to the IFMs using Orthogonal Direct connectors.
■ Figure 3 shows the location of the mLOM and rear mezzanine adapters on the
Cisco UCS X410c M7 Compute Node. The bridge adapter connects the mLOM
adapter to the rear mezzanine adapter.
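The operating modes described in the note above can be summarized as a simple lookup; the sketch below is just an illustrative restatement of the note, not a configuration tool.

```python
# Illustrative restatement of the VIC / IFM operating modes described above.

OPERATING_MODE = {
    ("VIC 15420", "X9108-IFM-25G"):  "4x 25G",
    ("VIC 15420", "X9108-IFM-100G"): "4x 25G",
    ("VIC 15231", "X9108-IFM-25G"):  "4x 25G",
    ("VIC 15231", "X9108-IFM-100G"): "2x 100G",
}


def link_mode(vic: str, ifm: str) -> str:
    """Return the aggregate link mode for a given mLOM VIC and IFM pairing."""
    return OPERATING_MODE[(vic, ifm)]


print(link_mode("VIC 15231", "X9108-IFM-100G"))  # 2x 100G
```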
Figure 4 shows the network connectivity from the mLOM out to the 25G IFMs.
Figure 5 shows the network connectivity from the mLOM out to the 100G IFMs.
UCSX-ME-V5Q50G-D Cisco UCS VIC 15422 4x25G secure boot mezz for X Compute Node (rear mezzanine connector on motherboard)
UCSX-V5-BRIDGE-D UCS VIC 15000 bridge to connect mLOM and mezz X Compute Node; this bridge connects the Cisco VIC 15420 mLOM and the Cisco VIC 15422 mezz for the X410c M7 Compute Node (one connector on the mezz card and one connector on the mLOM card)
Notes:
1. If this adapter is selected, then two CPUs are required and UCSX-ME-V5Q50G-D or UCSX-V4-PCIME-D is
required.
2. Included with the Cisco VIC 15422 mezzanine adapter.
NOTE: The UCSX-V4-PCIME-D rear mezzanine card for X-Fabric has PCIe Gen4 x16
connectivity towards each of CPU 1 and CPU 2. Additionally, the UCSX-V4-PCIME-D
provides two PCIe Gen4 x16 links to each X-Fabric. This rear mezzanine card enables
connectivity from the X410c M7 Compute Node to the X440p PCIe Node.
X410c M7 Compute Node connectivity options (KR connectivity from the VIC to each IFM, and single vHBA throughput on the VIC):
■ FI-6536 + X9108-IFM-100G: 1x 100G-KR to each IFM; single vHBA throughput up to 100G
■ FI-6536/6400 + X9108-IFM-25G: 2x 25G-KR to each IFM; single vHBA throughput up to 50G
■ FI-6536 + X9108-IFM-25G/100G or FI-6400 + X9108-IFM-25G: 2x 25G-KR to each IFM; single vHBA throughput up to 50G
■ FI-6536 + X9108-IFM-25G/100G or FI-6400 + X9108-IFM-25G (with rear mezzanine VIC): 4x 25G-KR to each IFM; single vHBA throughput up to 50G
Supported Configurations
NOTE:
■ The Cisco UCS X410c M7 Compute Node can be ordered with or without the
front mezzanine adapter. Refer to Table 13 Available Front Mezzanine
Adapters
■ Only one Front Mezzanine connector per Server
UCSX-X10C-PT4F-D Cisco UCS X410c M7 Compute Node compute pass-through controller for up to 6 NVMe drives (front mezzanine)
UCSX-X10C-RAIDF-D Cisco UCS X410c M7 Compute Node RAID controller with LSI 3900 for up to 6 SAS/SATA drives or up to 4 U.2 NVMe drives; SAS/SATA and NVMe drives can be mixed (front mezzanine)
Notes:
1. Available post first customer ship (FCS)
The available PCIe node GPU options are listed in Table 15.
GPU Product ID (PID) PID Description Maximum number of GPUs per node
UCSX-GPU-A16-D NVIDIA A16 PCIE 250W 4X16GB 2
UCSX-GPU-A40-D TESLA A40 RTX, PASSIVE, 300W, 48GB 2
UCSX-GPU-A100-80-D TESLA A100, PASSIVE, 300W, 80GB1 2
Notes:
1. Required power cables are included with the riser cards in the X440p PCIe node.
■ One to six 2.5-inch small form factor SAS/SATA SSDs or PCIe U.2 NVMe drives
— Hot-pluggable
— Sled-mounted
Select one or two drives from the list of supported drives available in Table 16.
Product ID (PID) Description Drive Type Speed Size
SAS/SATA SSDs1,2,3
Self-Encrypted Drives (SED)
UCSX-SD76TBKNK9-D 7.6TB Enterprise value SAS SSD (1X DWPD, SED-FIPS) SAS 7.6TB
UCSX-SD38TBKNK9-D 3.8TB Enterprise Value SAS SSD (1X DWPD, SED) SAS 3.8TB
UCSX-SD16TBKNK9-D 1.6TB Enterprise performance SAS SSD (3X DWPD, SED) SAS 1.6TB
UCSXSD960GBKNK9-D 960GB 2.5" Enterprise value 12G SAS SSD (1X endurance, FIPS) SAS 960GB
UCSXSD800GBKNK9-D 800GB Enterprise performance SAS SSD (3X DWPD, SED) SAS 800GB
UCSXS76TBEM2NK9-D 7.6TB Enterprise value SATA SSD (1X, SED) SATA 7.6TB
UCSXS38TBEM2NK9-D 3.8TB Enterprise value SATA SSD (1X, SED) SATA 3.8TB
UCSXS960GBM2NK9-D 960GB Enterprise value SATA SSD (1X, SED) SATA 960GB
Enterprise Performance SSDs (high endurance, supports up to 3X DWPD (drive writes per day))
UCSXSD800GK3XEP-D 800GB 2.5in Enterprise Performance 12G SAS SSD (3X endurance) SAS 12G 800GB
UCSX-SD16TK3XEP-D 1.6TB 2.5in Enterprise Performance 12G SAS SSD (3X endurance) SAS 12G 1.6TB
UCSX-SD32TK3XEP-D 3.2TB 2.5in Enterprise Performance 12G SAS SSD (3X endurance) SAS 12G 3.2TB
UCSXSD800GS3XEP-D 800GB 2.5in Enterprise Performance 12G SAS SSD (3X endurance) SAS 12G 800GB
UCSX-SD16TS3XEP-D 1.6TB 2.5in Enterprise Performance 12G SAS SSD (3X endurance) SAS 12G 1.6TB
UCSX-SD32TS3XEP-D 3.2TB 2.5in Enterprise Performance 12G SAS SSD (3X endurance) SAS 12G 3.2TB
UCSX-SD19T63XEP-D 1.9TB 2.5 inch Enterprise performance 6G SATA SSD (3X endurance) SATA 6G 1.9TB
UCSX-SD19TM3XEP-D 1.9TB 2.5 inch Enterprise performance 6G SATA SSD (3X endurance) SATA 6G 1.9TB
UCSXSD480G63XEP-D 480GB 2.5in Enterprise performance 6G SATA SSD (3X endurance) SATA 6G 480GB
UCSXSD480GM3XEP-D 480GB 2.5in Enterprise performance 6G SATA SSD (3X endurance) SATA 6G 480GB
UCSXSD960G63XEP-D 960GB 2.5 inch Enterprise performance 6G SATA SSD (3X endurance) SATA 6G 960GB
UCSX-SD38T63XEP-D 3.8TB 2.5 in Enterprise performance 6G SATA SSD (3X endurance) SATA 6G 3.8TB
UCSXSD960GM3XEP-D 960GB 2.5 inch Enterprise performance 6G SATA SSD (3X endurance) SATA 6G 960GB
Enterprise Value SSDs (Low endurance, supports up to 1X DWPD (drive writes per day))
UCSXSD240GM1XEV-D 240GB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 240GB
UCSXSD960GM1XEV-D 960GB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 960GB
UCSX-SD16TM1XEV-D 1.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 1.6TB
UCSX-SD19TM1XEV-D 1.9TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 1.9TB
UCSX-SD38TM1XEV-D 3.8TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 3.8TB
UCSXSD38T6I1XEV-D 3.8TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 3.8TB
UCSXSD19T6S1XEV-D 1.9TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 1.9TB
UCSXSD38T6S1XEV-D 3.8TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 3.8TB
UCSX-SD76TM1XEV-D 7.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 7.6TB
UCSXSD76T6S1XEV-D 7.6TB 2.5 inch Enterprise Value 6G SATA SSD SATA 6G 7.6TB
UCSXS480G6I1XEV-D 480 GB 2.5 inch Enterprise Value 6G SATA Intel SSD SATA 6G 480GB
UCSXS960G6I1XEV-D 960GB 2.5 inch Enterprise Value 6G SATA Intel SSD SATA 6G 960GB
UCSXS960G6S1XEV-D 960GB 2.5 inch Enterprise Value 6G SATA Samsung SSD SATA 6G 960GB
UCSXSD960GK1XEV-D 960GB 2.5in Enter Value 12G SAS Kioxia G1 SSD SAS 12G 960GB
UCSXSD960GS1XEV-D 960GB 2.5in Enter Value 12G SAS Seagate SSD SAS 12G 960GB
UCSX-SD19TK1XEV-D 1.9TB 2.5in Enter Value 12G SAS Kioxia G1 SSD SAS 12G 1.9TB
UCSX-SD19TS1XEV-D 1.9TB 2.5in Enter Value 12G SAS Seagate SSD SAS 12G 1.9TB
UCSX-SD38TK1XEV-D 3.8TB 2.5in Enter Value 12G SAS Kioxia G1 SSD SAS 12G 3.8TB
UCSX-SD38TS1XEV-D 3.8TB 2.5in Enter Value 12G SAS Seagate SSD SAS 12G 3.8TB
UCSX-SD76TK1XEV-D 7.6TB 2.5in Enter Value 12G SAS Kioxia G1 SSD SAS 12G 7.6TB
UCSX-SD15TK1XEV-D 15.3TB 2.5in Enter Value 12G SAS Kioxia G1 SSD SAS 12G 15.3TB
NVMe4,5
UCSX-NVME4-15360D 15.3TB 2.5in U.2 P5520 NVMe High Perf Medium Endurance NVMe U.2 15.3TB
UCSX-NVME4-1600-D 1.6TB 2.5in U.2 P5620 NVMe High Perf High Endurance NVMe U.2 1.6TB
UCSX-NVME4-1920-D 1.9TB 2.5in U.2 P5520 NVMe High Perf Medium Endurance NVMe U.2 1.9TB
UCSX-NVME4-3200-D 3.2TB 2.5in U.2 P5620 NVMe High Perf High Endurance NVMe U.2 3.2TB
UCSX-NVME4-3840-D 3.8TB 2.5in U.2 P5520 NVMe High Perf Medium Endurance NVMe U.2 3.8TB
UCSX-NVME4-6400-D 6.4TB 2.5in U.2 P5620 NVMe High Perf High Endurance NVMe U.2 6.4TB
UCSX-NVME4-7680-D 7.6TB 2.5in U.2 P5520 NVMe High Perf Medium Endurance NVMe U.2 7.6TB
UCSX-NVMEXPI400-D 400GB 2.5in U.2 Intel P5800X Optane NVMe Extreme Perform SSD NVMe U.2 400GB
UCSX-NVMEXPI800-D 800GB 2.5in U.2 Intel P5800X Optane NVMe Extreme Perform SSD NVMe U.2 800GB
NOTE: Cisco uses solid state drives from several vendors. All solid state drives are subject to physical write
limits and have varying maximum usage limitation specifications set by the manufacturer. Cisco will not
replace any solid state drives that have exceeded any maximum usage specifications set by Cisco or the
manufacturer, as determined solely by Cisco.
Notes:
1. SSD drives require the UCSX-X10C-RAIDF-D front mezzanine adapter
2. For SSD drives to be in a RAID group, two identical SSDs must be used in the group.
3. If SSDs are in JBOD Mode, the drives do not need to be identical.
4. NVMe drives require either the UCSX-X10C-PT4F-D pass-through controller or the UCSX-X10C-RAIDF-D
RAID controller in the front mezzanine slot.
5. A maximum of 4 NVMe drives can be ordered with the RAID controller.
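The controller and RAID rules in the notes above can be checked mechanically. The sketch below is a simplified, hypothetical validator that encodes only the rules stated here (the function name and argument shapes are illustrative); it is not an ordering or configuration tool.

```python
# Simplified, hypothetical check of the drive rules stated in the notes above.


def check_drive_selection(controller, sas_sata_pids, nvme_count, raid):
    """Return a list of rule violations for a proposed front-mezzanine drive mix."""
    problems = []
    if sas_sata_pids and controller != "UCSX-X10C-RAIDF-D":
        problems.append("SAS/SATA SSDs require the UCSX-X10C-RAIDF-D front mezzanine adapter")
    if nvme_count and controller not in ("UCSX-X10C-PT4F-D", "UCSX-X10C-RAIDF-D"):
        problems.append("NVMe drives require the pass-through or RAID front mezzanine controller")
    if controller == "UCSX-X10C-RAIDF-D" and nvme_count > 4:
        problems.append("a maximum of 4 NVMe drives can be ordered with the RAID controller")
    if len(sas_sata_pids) + nvme_count > 6:
        problems.append("the front mezzanine holds at most 6 drives")
    if raid and len(set(sas_sata_pids)) > 1:
        problems.append("drives in a RAID group must be identical (JBOD drives need not be)")
    return problems


# Example: RAID controller with two identical SAS SSDs and two NVMe drives -> no violations.
print(check_drive_selection("UCSX-X10C-RAIDF-D",
                            ["UCSX-SD38TK1XEV-D", "UCSX-SD38TK1XEV-D"],
                            nvme_count=2, raid=True))  # []
```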
NOTE:
■ The UCSX-M2-HWRD-FPS is auto included with the server configuration
■ The UCSX-M2-HWRD-FPS controller supports RAID 1 and JBOD mode and is available
only with 240GB and 960GB M.2 SATA SSDs.
■ Cisco IMM is supported for configuring volumes and monitoring the controller
and the installed SATA M.2 drives
■ Hot-plug replacement is not supported. The compute node must be powered off
for replacement.
■ The Boot-Optimized RAID controller supports VMware, Windows, and Linux Operating
Systems
UCSX-M2-HWRD-FPS UCSX Front panel with M.2 RAID controller for SATA drives
■ Select Cisco M.2 SATA SSDs: Order one or two matching M.2 SATA SSDs. This connector accepts the
boot-optimized RAID controller (see Table 17). Each boot-optimized RAID controller can accommodate
up to two SATA M.2 SSDs shown in Table 18.
NOTE:
■ Each boot-optimized RAID controller can accommodate up to two SATA M.2 SSDs
shown in Table 18. The boot-optimized RAID controller plugs into the
motherboard.
■ It is recommended that M.2 SATA SSDs be used as boot-only devices.
■ The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not
supported.
Notes:
1. Please note that Microsoft certification requires TPM 2.0 for bare-metal or guest VM deployments.
Opting out of TPM 2.0 voids the Microsoft certification.
NOTE:
■ The TPM module used in this system conforms to TPM v2.0 as defined by the
Trusted Computing Group (TCG).
■ TPM installation is supported after-factory. However, a TPM installs with a
one-way screw and cannot be replaced, upgraded, or moved to another compute
node. If a Cisco UCS X410c M7 Compute Node with a TPM is returned, the
replacement Cisco UCS X410c M7 Compute Node must be ordered with a new
TPM. If there is no existing TPM in the Cisco UCS X410c M7 Compute Node, you
can install a TPM 2.0. Refer to the following document for Installation location
and instructions:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/x410
c-m7/install/b-cisco-ucs-x410c-m7-install-guide.html
VMware vCenter
MSWS-22-ST16CD-NS Windows Server 2022 Standard (16 Cores/2 VMs) - No Cisco SVC
MSWS-19-ST16CD-NS Windows Server 2019 Standard (16 Cores/2 VMs) - No Cisco SVC
Red Hat
RHEL-2S2V-D1A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 1-Yr Support Req
RHEL-2S2V-D3A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 3-Yr Support Req
RHEL-2S2V-D5A Red Hat Enterprise Linux (1-2 CPU,1-2 VN); 5-Yr Support Req
RHEL-VDC-2SUV-D1A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 1 Yr Supp Req
RHEL-VDC-2SUV-D3A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 3 Yr Supp Req
RHEL-VDC-2SUV-D5A RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 5 Yr Supp Req
RHEL-2S2V-D1S Red Hat Enterprise Linux (1-2 CPU,1-2 VN); Prem 1Yr SnS Reqd
RHEL-2S2V-D3S Red Hat Enterprise Linux (1-2 CPU,1-2 VN); Prem 3Yr SnS Reqd
RHEL-2S-HA-D1S RHEL High Availability (1-2 CPU); Premium 1-yr SnS Reqd
RHEL-2S-HA-D3S RHEL High Availability (1-2 CPU); Premium 3-yr SnS Reqd
RHEL-2S-RS-D1S RHEL Resilient Storage (1-2 CPU); Premium 1-yr SnS Reqd
RHEL-2S-RS-D3S RHEL Resilient Storage (1-2 CPU); Premium 3-yr SnS Reqd
RHEL-VDC-2SUV-D1S RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 1 Yr SnS Reqd
RHEL-VDC-2SUV-D3S RHEL for Virt Datacenters (1-2 CPU, Unlim VN) 3 Yr SnS Reqd
RHEL-SAP-2S2V-D1S RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 1-Yr SnS Reqd
RHEL-SAP-2S2V-D3S RHEL for SAP Apps (1-2 CPU, 1-2 VN); Prem 3-Yr SnS Reqd
VMware
VMW-VSP-EPL-D1A VMware vSphere 7 Ent Plus (1 CPU, 32 Core) 1Yr, Support Reqd
VMW-VSP-EPL-D3A VMware vSphere 7 Ent Plus (1 CPU, 32 Core) 3Yr, Support Reqd
VMW-VSP-EPL-D5A VMware vSphere 7 Ent Plus (1 CPU, 32 Core) 5Yr, Support Reqd
SUSE
SLES-2S2V-D1A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 1-Yr Support Req
SLES-2S2V-D3A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 3-Yr Support Req
SLES-2S2V-D5A SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); 5-Yr Support Req
SLES-2SUVM-D1A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; 1Y Supp Req
SLES-2SUVM-D3A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; 3Y Supp Req
SLES-2SUVM-D5A SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; 5Y Supp Req
SLES-2S-LP-D1A SUSE Linux Live Patching Add-on (1-2 CPU); 1yr Support Req
SLES-2S-LP-D3A SUSE Linux Live Patching Add-on (1-2 CPU); 3yr Support Req
SLES-2S2V-D1S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 1-Yr SnS
SLES-2S2V-D3S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 3-Yr SnS
SLES-2S2V-D5S SUSE Linux Enterprise Svr (1-2 CPU,1-2 VM); Prio 5-Yr SnS
SLES-2SUVM-D1S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; Prio 1Y SnS
SLES-2SUVM-D3S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; Prio 3Y SnS
SLES-2SUVM-D5S SUSE Linux Enterprise Svr (1-2 CPU,Unl VM) LP; Prio 5Y SnS
SLES-2S-HA-D1S SUSE Linux High Availability Ext (1-2 CPU); 1yr SnS
SLES-2S-HA-D3S SUSE Linux High Availability Ext (1-2 CPU); 3yr SnS
SLES-2S-HA-D5S SUSE Linux High Availability Ext (1-2 CPU); 5yr SnS
SLES-2S-GC-D1S SUSE Linux GEO Clustering for HA (1-2 CPU); 1yr Sns
SLES-2S-GC-D3S SUSE Linux GEO Clustering for HA (1-2 CPU); 3yr SnS
SLES-2S-GC-D5S SUSE Linux GEO Clustering for HA (1-2 CPU); 5yr SnS
SLES-2S-LP-D1S SUSE Linux Live Patching Add-on (1-2 CPU); 1yr SnS Required
SLES-2S-LP-D3S SUSE Linux Live Patching Add-on (1-2 CPU); 3yr SnS Required
SLES-SAP-2S2V-D1S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 1-Yr SnS
SLES-SAP-2S2V-D3S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 3-Yr SnS
SLES-SAP-2S2V-D5S SLES for SAP Apps (1-2 CPU, 1-2 VM); Priority 5-Yr SnS
SLES-SAP-2S2V-D1A SLES for SAP Apps w/ HA (1-2 CPU, 1-2 VM); 1-Yr Support Reqd
SLES-SAP-2S2V-D3A SLES for SAP Apps w/ HA (1-2 CPU, 1-2 VM); 3-Yr Support Reqd
SLES-SAP-2S2V-D5A SLES for SAP Apps w/ HA (1-2 CPU, 1-2 VM); 5-Yr Support Reqd
Table 22 OS Media
MSWS-19-ST16CD-RM Windows Server 2019 Stan (16 Cores/2 VMs) Rec Media DVD Only
MSWS-19-DC16CD-RM Windows Server 2019 DC (16Cores/Unlim VM) Rec Media DVD Only
MSWS-22-ST16CD-RM Windows Server 2022 Stan (16 Cores/2 VMs) Rec Media DVD Only
MSWS-22-DC16CD-RM Windows Server 2022 DC (16Cores/Unlim VM) Rec Media DVD Only
SUPPLEMENTAL MATERIAL
Simplified Block Diagram
A simplified block diagram of the Cisco UCS X410c M7 Compute Node system board is shown in Figure 6.
Figure 6 Cisco UCS X410c M7 Compute Node Simplified Block Diagram (IFMs 25G with Drives)
(Diagram: CPUs 1 and 3 at the front and CPUs 2 and 4 at the rear, interconnected by UPI links; PCIe Gen4 x16 links from the CPUs to the node mLOM and mezzanine connectors; the mLOM and mezzanine main ASICs connect through the bridge adapter and provide 2x 25G-KR links toward the IFMs.)
Figure 7 Cisco UCS X410c M7 Compute Node Simplified Block Diagram (IFMs 100G with Drives)
(Diagram: the same CPU layout and PCIe Gen4 x16 links as Figure 6, with the mLOM providing a 100G-KR link toward each IFM.)
System Board
A top view of the Cisco UCS X410c M7 Compute Node system board is shown in Figure 8.
Please refer to the Cisco UCS X410c M7 Compute Node Installation Guide for installation
procedures.
(1) Have the following tools and materials available for the procedure:
(2) Order the appropriate replacement CPU from Available CPUs on page 11.
(3) Carefully remove and replace the CPU and heatsink in accordance with the instructions
found in “Cisco UCS X410c M7 Compute Node Installation and Service Note,” found at:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/x410c-m7/install/b-cisco-ucs-x410c-m7-install-guide.html
(1) Have the following tools and materials available for the procedure:
(2) Order the appropriate new CPU from Table 4 on page 11.
(3) Order one heat sink for each new CPU. Order PID UCSX-C-M7-HS-F= for the front CPU socket
and PID UCSX-C-M6-HS-R= for the rear CPU socket.
Carefully install the CPU and heatsink in accordance with the instructions found in “Cisco
UCS X410c M7 Compute Node Installation and Service Note,” found at:
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/x410c-m7/install/b-cisco-ucs-x410c-m7-install-guide.html
Step 2 Press evenly on both ends of the DIMM until it clicks into place in its slot
Note: Ensure that the notch in the DIMM aligns with the slot. If the notch is misaligned, it is
possible to damage the DIMM, the slot, or both.
Step 3 Press the DIMM connector latches inward slightly to seat them fully.
Step 4 Populate all slots with a DIMM or DIMM blank. A slot cannot be empty.
For additional details on replacing or upgrading DIMMs, see “Cisco UCS X410c M7 Compute
Node Installation and Service Note,” found at
https://www.cisco.com/content/en/us/td/docs/unified_computing/ucs/x/hw/x410c-m7/install/b-cisco-ucs-x410c-m7-install-guide.html
TECHNICAL SPECIFICATIONS
Dimensions and Weight
Parameter Value
Environmental Specifications
Parameter Value
Operating temperature   Supported operating temperatures depend on the compute node's memory:
■ For 256GB DDR5 DIMMs: 50° to 89.6° F (10° to 32° C) at 0 to 10,000 feet
■ All other memory configurations: 50° to 95° F (10° to 35° C) at 0 to 10,000 feet
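The Fahrenheit limits above are direct conversions of the Celsius limits; a quick check using the standard conversion formula:

```python
# Quick check of the Celsius-to-Fahrenheit conversions quoted above.


def c_to_f(celsius: float) -> float:
    return celsius * 9 / 5 + 32


for c in (10, 32, 35):
    print(f"{c} C = {c_to_f(c):.1f} F")  # 10 C = 50.0 F, 32 C = 89.6 F, 35 C = 95.0 F
```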
For configuration-specific power specifications, use the Cisco UCS Power Calculator at:
http://ucspowercalc.cisco.com