Cisco UCS C220 M6 Server Installation and Service Guide
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
© 2021–2023 Cisco Systems, Inc. All rights reserved.
CONTENTS
CHAPTER 1 Overview 1
Overview 1
External Features 4
Serviceable Component Locations 7
Summary of Server Features 10
Bias-Free Documentation
Note The documentation set for this product strives to use bias-free language. For purposes of this
documentation set, bias-free is defined as language that does not imply discrimination based on age,
disability, gender, racial identity, ethnic identity, sexual orientation, socioeconomic status, and
intersectionality. Exceptions may be present in the documentation due to language that is hardcoded
in the user interfaces of the product software, language used based on standards documentation, or
language that is used by a referenced third-party product.
The following information is for FCC compliance of Class A devices: This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio-frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications. Operation of this equipment in a residential area is likely to cause harmful interference, in which case users will be required to correct the interference at their own expense.
The following information is for FCC compliance of Class B devices: This equipment has been tested and
found to comply with the limits for a Class B digital device, pursuant to part 15 of the FCC rules. These limits
are designed to provide reasonable protection against harmful interference in a residential installation. This
equipment generates, uses and can radiate radio frequency energy and, if not installed and used in accordance
with the instructions, may cause harmful interference to radio communications. However, there is no guarantee
that interference will not occur in a particular installation. If the equipment causes interference to radio or
television reception, which can be determined by turning the equipment off and on, users are encouraged to
try to correct the interference by using one or more of the following measures:
• Reorient or relocate the receiving antenna.
• Increase the separation between the equipment and receiver.
• Connect the equipment into an outlet on a circuit different from that to which the receiver is connected.
• Consult the dealer or an experienced radio/TV technician for help.
Modifications to this product not authorized by Cisco could void the FCC approval and negate your authority
to operate the product.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the
University of California, Berkeley (UCB) as part of UCB’s public domain version of the UNIX operating
system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE
OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE
ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING,
WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE
AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE
PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL,
CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST
PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE
THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY
OF SUCH DAMAGES.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual
addresses and phone numbers. Any examples, command display output, network topology diagrams, and
other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses
or phone numbers in illustrative content is unintentional and coincidental.
All printed copies and duplicate soft copies of this document are considered uncontrolled. See the current
online version for the latest version.
Cisco has more than 200 offices worldwide. Addresses and phone numbers are listed on the Cisco website at
www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and
other countries. To view a list of Cisco trademarks, go to this URL: https://www.cisco.com/c/en/us/about/
legal/trademarks.html. Third-party trademarks mentioned are the property of their respective owners. The use
of the word partner does not imply a partnership relationship between Cisco and any other company. (1721R)
Overview
The Cisco UCS C220 M6 server is a one-rack unit server that can be used standalone, or as part of the Cisco
Unified Computing System, which unifies computing, networking, management, virtualization, and storage
access into a single integrated architecture. Cisco UCS also enables end-to-end server visibility, management,
and control in both bare metal and virtualized environments. Each Cisco UCS C220 M6 server supports:
• a maximum of two 3rd Generation Intel Xeon processors.
• 32 DDR4 DIMMs (16 per CPU) for a total system memory of either 8 TB (32 x 256 GB DDR4 DIMMs) or 12 TB (16 x 256 GB DDR4 DIMMs and 16 x 512 GB Intel® Optane™ Persistent Memory Modules (PMEMs)).
• 3 PCI Express riser connectors, which provide slots for “full height” and “half height” PCI-e adapters.
• Two Titanium (80 PLUS rated) power supplies with support for N and N+1 power redundancy modes.
• 2 10GBase-T Ethernet LAN over Motherboard (LOM) ports for network connectivity, plus one 1 Gigabit
Ethernet dedicated management port
• One mLOM/VIC card provides 10G/25G/40G/50G/100G/200G connectivity. Supported cards are:
• Cisco UCS VIC 15428 Quad Port CNA MLOM (UCSC-M-V5Q50G) supports:
• a x16 PCIe Gen4 Host Interface to the rack server
• four 10G/25G/50G SFP56 ports
• 4GB DDR4 Memory, 3200 MHz
• Integrated blower for optimal ventilation
• Cisco UCS VIC 15427 Quad Port CNA MLOM (UCSC-M-V5Q50GV2)
• Cisco UCS VIC 15425 Quad Port 10G/25G/50G SFP56 CNA PCIe (UCSC-P-V5Q50G) supports:
• a x16 PCIe Gen4 Host Interface to the rack server
• Four 10G/25G/50G SFP+/SFP28/SFP56 ports
• 4GB DDR4 Memory, 3200MHz
• Integrated blower for optimal ventilation
• Secure boot support
• Cisco UCS VIC 15238 Dual Port 40G/100G/200G QSFP56 mLOM (UCSC-M-V5D200G) supports:
• a x16 PCIe Gen4 Host Interface to the rack server
• two 40G/100G/200G QSFP/QSFP28/QSFP56 ports
• 4GB DDR4 Memory, 3200 MHz
• Integrated blower for optimal ventilation
• Cisco UCS VIC 15237 Dual Port 40G/100G/200G QSFP56 mLOM (UCSC-M-V5D200GV2)
supports:
• a x16 PCIe Gen4 Host Interface to the rack server
• two 40G/100G/200G QSFP/QSFP28/QSFP56 ports
• 4GB DDR4 Memory, 3200 MHz
• Integrated blower for optimal ventilation
• Secure boot support
• Cisco UCS VIC 15235 Dual Port 40G/100G/200G QSFP56 CNA PCIe (UCSC-P-V5D200GV2) supports:
• a x16 PCIe Gen4 Host Interface to the rack server
• two 40G/100G/200G QSFP/QSFP28/QSFP56 ports
• 4GB DDR4 Memory, 3200MHz
• Integrated blower for optimal ventilation
• Secure boot support
• Cisco UCS VIC 1495 Dual Port 40G/100G half-height QSFP28 CNA PCIe (UCSC-PCIE-C100-04)
• Cisco UCS VIC 1477 Dual Port 40G/100G QSFP28 mLOM (UCSC-M-V100-04) supports:
• a x16 PCIe Gen3 Host Interface to the rack server
• two 40G/100G QSFP28 ports
• 2GB DDR3 Memory, 1866 MHz
• Cisco UCS VIC 1467 Quad Port 10/25G SFP28 mLOM (UCSC-M-V25-04) supports:
• a x16 PCIe Gen3 Host Interface to the rack server
• four 10G/25G SFP28 ports
• 2GB DDR3 Memory, 1866 MHz
• Cisco UCS VIC 1455 Quad Port 10G/25G half-height SFP28 CNA PCIe (UCSC-PCIE-C25Q-04)
supports:
• a x16 PCIe Gen3 Host Interface to the rack server
• four 10G/25G SFP/SFP28 ports providing two 50G fabric connections
• 2GB DDR3 Memory, 1866 MHz
• Rear PCI risers are supported as one to three half-height PCIe risers, or one to two full-height PCIe risers.
• The server provides an internal slot for one of the following:
• SATA Interposer to control SATA drives from the PCH (AHCI), or
• Cisco 12G RAID controller with cache backup to control SAS/SATA drives, or
• Cisco 12G SAS pass-through HBA to control SAS/SATA drives
External Features
This topic shows the external features of the server versions.
1 Drive bays 1–10 support SAS/SATA hard disk drives (HDDs) and solid-state drives (SSDs). As an option, drive bays 1–4 can contain up to four NVMe drives. Drive bays 5–10 support only SAS/SATA HDDs or SSDs.
2 Unit identification button/LED
NVMe drives are supported in a dual CPU server only.
By default, single CPU servers come with only one half-height riser 1 installed, and dual CPU servers support
all three half-height risers.
Rear PCIe risers can be one of the following configurations:
• Half-height risers:
• one half-height, ¾ length riser (not shown). With this configuration, PCIe slot 1 supports one half-height, ¾ length, x16 PCIe card and is controlled by CPU 1.
• three half-height, ¾ length risers. See "UCS C220 M6 Server Rear Panel, Half Height, ¾ Length
PCIe Cards" below.
• Full-height risers: Two full height, ¾ length risers. See "Cisco UCS C220 M6 Server Rear Panel, Full
Height, ¾ Length PCIe Cards" below.
Note For definitions of LED states, see Rear-Panel LEDs, on page 38.
Figure 2: Cisco UCS C220 M6 Server Rear Panel, Half Height, ¾ Length PCIe Cards
1 PCIe slots, three. This configuration accepts three cards in riser slots 1, 2, and 3 as follows:
• Riser 1, which is controlled by CPU 1:
• Supports one PCIe slot (slot 1)
• Slot 1 is half-height, 3/4 length, x16
2 Power supply units (PSUs), two, which can be redundant when configured in 1+1 power mode.
5 USB 3.0 ports (two)
6 Dual 1-Gb/10-Gb Ethernet ports (LAN1 and LAN2). The dual LAN ports can support 1 Gbps and 10 Gbps, depending on the link partner capability.
Figure 3: Cisco UCS C220 M6 Server Rear Panel, Full Height, ¾ Length PCIe Cards
1 PCIe slots, two. This configuration accepts two cards in riser slots 1 and 2 as follows:
• Riser 1, which is controlled by CPU 1:
• Plugs into riser 1 motherboard connector
• Supports one full-height, 3/4 length, x16 PCIe card
2 Power supply units (PSUs), two, which can be redundant when configured in 1+1 power mode.
5 USB 3.0 ports (two)
6 Dual 1-Gb/10-Gb Ethernet ports (LAN1 and LAN2). The dual LAN ports can support 1 Gbps and 10 Gbps, depending on the link partner capability.
1 Front-loading drive bays 1–10 support SAS/SATA drives.
2 M6 modular RAID card or SATA Interposer card
5 DIMM sockets on motherboard, 32 total, 16 per CPU. Eight DIMM sockets are placed between the CPUs and the server sidewall, and 16 DIMM sockets are placed between the two CPUs.
6 Motherboard CPU socket two (CPU2)
11 Modular LOM (mLOM) card bay on chassis floor (x16 PCIe lane). The mLOM card bay sits below PCIe riser slot 1.
12 Motherboard CPU socket one (CPU1)
The view in the following figure shows the individual component locations and numbering, including the FHFW PCIe cards.
Figure 5: Cisco UCS C220 M6 Server, Full Height, Full Width PCIe Cards, Serviceable Component Locations
1 Front-loading drive bays 1–10 support SAS/SATA drives.
2 M6 modular RAID card or SATA Interposer card
11 PCIe riser slot 1. Accepts one half-height, half-width PCIe riser card.
Note The chassis supports an internal USB drive (not shown) at this PCIe slot. See Replacing a USB Drive, on page 91.
12 Modular LOM (mLOM) card bay on chassis floor (x16 PCIe lane). The mLOM card bay sits below PCIe riser slot 1.
The view in the following figure shows the individual component locations and numbering, including the HHHL PCIe slots.
The Technical Specifications Sheets for all versions of this server, which include supported component part
numbers, are at Cisco UCS Servers Technical Specifications Sheets (scroll down to Technical Specifications).
Feature Description
Memory 32 slots for registered DIMMs (RDIMMs), DDR4 DIMMs, 3DS DIMMs, and load-reduced DIMMs (LR DIMMs) up to 3200 MHz. Intel® Optane™ Persistent Memory Modules (PMEMs) are also supported.
Video The Cisco Integrated Management Controller (CIMC) provides video using the Matrox
G200e video/graphics controller:
• Integrated 2D graphics core with hardware acceleration
• DDR3 memory interface supports up to 512 MB of addressable memory (8 MB is
allocated by default to video memory)
• Supports display resolutions up to 1920 x 1200 16bpp @ 60Hz
• High-speed integrated 24-bit RAMDAC
• Single lane PCI-Express host interface running at Gen 2 speed
Front panel:
• One front-panel keyboard/video/mouse (KVM) connector that is used with the KVM
breakout cable. The breakout cable provides two USB 2.0, one VGA, and one DB-9
serial connector.
Modular LOM One dedicated socket (x16 PCIe lane) that can be used to add an mLOM card for additional rear-panel connectivity. As an optional hardware configuration, the Cisco CNIC mLOM module supports two 100G QSFP+ ports or four 25-Gbps Ethernet ports.
Power One power supply is mandatory; one more can be added for 1 + 1 redundancy.
ACPI The advanced configuration and power interface (ACPI) 4.0 standard is supported.
Front Panel The front panel provides status indications and control buttons.
InfiniBand In addition to Fibre Channel, Ethernet, and other industry standards, the PCI slots in this server support the InfiniBand architecture up to HDR IB (200 Gbps).
Front panel:
• One KVM console connector, which supplies the pins for a KVM break out cable
that supports the following:
• Two USB 2.0 connectors
• One VGA DB15 video connector
• One serial port (RS232) RJ45 connector
Integrated Management Processor Baseboard Management Controller (BMC) running Cisco Integrated Management
Controller (CIMC) firmware.
Depending on your CIMC settings, the CIMC can be accessed through the 1GE dedicated
management port, the 1GE/10GE LOM ports, or a Cisco virtual interface card (VIC).
CIMC supports managing the entire server platform, as well as providing management capabilities for various individual subsystems and components, such as PSUs, Cisco VIC, GPUs, MRAID and HBA storage controllers, and so on.
Storage Controllers The SATA Interposer board, Cisco 12G SAS RAID Controller with 4GB FBWC, or Cisco
12G SAS HBA. Only one of these at a time can be used.
A Cisco 9500-8e 12G SAS HBA can be plugged into available PCIe risers for external
JBOD attach. This HBA can be used at the same time as one of the other storage controllers.
• SATA Interposer board: AHCI support of up to eight SATA-only drives (slots 1-4
and 6-9 only)
• Cisco 12G RAID controller
• RAID support (RAID 0, 1, 5, 6, 10) and SRAID0
• Supports up to 10 front-loading SFF drives
For a detailed list of storage controller options, see Supported Storage Controllers and
Cables, on page 157.
Modular LAN over Motherboard (mLOM) slot The dedicated mLOM slot on the motherboard can flexibly accommodate Cisco Virtual Interface Cards (VICs).
UCSM Unified Computing System Manager (UCSM) runs in the Fabric Interconnect and
automatically discovers and provisions some of the server components.
Note Before you install, operate, or service a server, review the Regulatory Compliance and Safety Information
for Cisco UCS C-Series Servers for important safety information.
Warning To prevent the system from overheating, do not operate it in an area that exceeds the maximum
recommended ambient temperature of: 35° C (95° F).
Statement 1047
Warning The plug-socket combination must be accessible at all times, because it serves as the main disconnecting
device.
Statement 1019
Warning This product relies on the building’s installation for short-circuit (overcurrent) protection. Ensure that
the protective device is rated not greater than: 250 V, 15 A.
Statement 1005
Warning Installation of the equipment must comply with local and national electrical codes.
Statement 1074
Warning This unit is intended for installation in restricted access areas. A restricted access area can be accessed
only through the use of a special tool, lock, and key, or other means of security.
Statement 1017
Caution To ensure proper airflow it is necessary to rack the servers using rail kits. Physically placing the units on top
of one another or “stacking” without the use of the rail kits blocks the air vents on top of the servers, which
could result in overheating, higher fan speeds, and higher power consumption. We recommend that you mount
your servers on rail kits when you are installing them into the rack because these rails provide the minimal
spacing required between the servers. No additional spacing between the servers is required when you mount
the units using rail kits.
Caution Avoid uninterruptible power supply (UPS) types that use ferroresonant technology. These UPS types can
become unstable with systems such as the Cisco UCS, which can have substantial current draw fluctuations
from fluctuating data traffic patterns.
• Ensure that there is adequate space around the server to allow for accessing the server and for adequate
airflow. The airflow in this server is from front to back.
• Ensure that the air-conditioning meets the thermal requirements listed in the Environmental Specifications,
on page 146.
• Ensure that the cabinet or rack meets the requirements listed in the Rack Requirements, on page 19.
• Ensure that the site power meets the power requirements listed in the Power Specifications, on page 147.
If available, you can use an uninterruptible power supply (UPS) to protect against power failures.
Warning Statement 7003—Shielded Cable Requirements for Intrabuilding Lightning Surge
The intrabuilding port(s) of the equipment or subassembly must use shielded intrabuilding cabling/wiring that
is grounded at both ends.
The following port(s) are considered intrabuilding ports on this equipment:
RJ-45 Copper Ethernet Ports
Note Statement 7004—Special Accessories Required to Comply with GR-1089 Emission and Immunity
Requirements
To comply with the emission and immunity requirements of GR-1089, shielded cables are required for the
following ports:
RJ-45 Copper Ethernet Ports
Note Statement 8016—Installation Location Where the National Electric Code (NEC) Applies
This equipment is suitable for installation in locations where the NEC applies.
Note These Cisco UCS servers are designed to boot up within 30 minutes provided the neighboring devices are
fully operational.
Rack Requirements
The rack must be of the following type:
• A standard 19-in. (48.3-cm) wide, four-post EIA rack, with mounting posts that conform to English
universal hole spacing, per section 1 of ANSI/EIA-310-D-1992.
• The rack-post holes can be square 0.38-inch (9.6 mm), round 0.28-inch (7.1 mm), #12-24 UNC, or #10-32
UNC when you use the Cisco-supplied slide rails.
• The minimum vertical rack space per server must be one rack unit (RU), equal to 1.75 in. (44.45 mm).
Front Bezel
An optional locking front bezel (UCSC-BZL-C220M5) is available to provide additional security by preventing
unauthorized access to the front-loading SFF drives. The same bezel is used for both M5 and M6 versions of
the UCS C220 server.
Warning To prevent bodily injury when mounting or servicing this unit in a rack, you must take special
precautions to ensure that the system remains stable. The following guidelines are provided to ensure
your safety:
This unit should be mounted at the bottom of the rack if it is the only unit in the rack.
When mounting this unit in a partially filled rack, load the rack from the bottom to the top with the
heaviest component at the bottom of the rack.
If the rack is provided with stabilizing devices, install the stabilizers before mounting or servicing the
unit in the rack.
Statement 1006
Step 2 Open the front securing plate on both slide-rail assemblies. The front end of the slide-rail assembly has a spring-loaded
securing plate that must be open before you can insert the mounting pegs into the rack-post holes.
On the outside of the assembly, push the green-arrow button toward the rear to open the securing plate.
1 Front mounting pegs
3 Securing plate shown pulled back to the open position
c) Slide the inner-rail release clip toward the rear on both inner rails, and then continue pushing the server into the rack
until its front slam-latches engage with the rack posts.
Figure 8: Inner-Rail Release Clip
Step 5 To comply with GR-63-CORE Seismic requirements, you (the end user) must secure the server in the rack more permanently
by using the two screws that are provided with the slide rails.
With the server fully pushed into the slide rails, open a hinged slam latch lever on the front of the server and insert a
screw through the hole that is under the lever. The screw threads into the static part of the rail on the rack post and prevents
the server from being pulled out. Repeat for the opposite slam latch.
Note The cable management arm (CMA, UCSC-CMA-C220M6) is reversible left-to-right. To reverse the CMA,
see Reversing the Cable Management Arm (Optional), on page 24 before installation.
Step 1 With the server pushed fully into the rack, slide the CMA tab of the CMA arm that is farthest from the server onto the
end of the stationary slide rail that is attached to the rack post. Slide the tab over the end of the rail until it clicks and
locks.
Figure 9: Attaching the CMA to the Rear Ends of the Slide Rails
1 CMA tab on arm farthest from server attaches to end of stationary outer slide rail.
3 CMA tab on width-adjustment slider attaches to end of stationary outer slide rail.
Step 2 Slide the CMA tab that is closest to the server over the end of the inner rail that is attached to the server. Slide the tab over the end of the rail until it clicks and locks.
Step 3 Pull out the width-adjustment slider that is at the opposite end of the CMA assembly until it matches the width of your
rack.
Step 4 Slide the CMA tab that is at the end of the width-adjustment slider onto the end of the stationary slide rail that is attached
to the rack post. Slide the tab over the end of the rail until it clicks and locks.
Step 5 Open the hinged flap at the top of each plastic cable guide and route your cables through the cable guides as desired.
Step 1 Rotate the entire CMA assembly 180 degrees, left-to-right. The plastic cable guides must remain pointing upward.
Step 2 Flip the tabs at the ends of the CMA arms so that they point toward the rear of the server.
Step 3 Pivot the tab that is at the end of the width-adjustment slider. Depress and hold the metal button on the outside of the tab
and pivot the tab 180 degrees so that it points toward the rear of the server.
Figure 10: Reversing the CMA
Note This section describes how to power on the server, assign an IP address, and connect to server management
when using the server in standalone mode.
Connection Methods
There are two methods for connecting to the system for initial setup:
• Local setup—Use this procedure if you want to connect a keyboard and monitor directly to the system
for setup. This procedure can use a KVM cable (Cisco PID N20-BKVM) or the ports on the rear of the
server.
• Remote setup—Use this procedure if you want to perform setup through your dedicated management
LAN.
Note To configure the system remotely, you must have a DHCP server on the same
network as the system. Your DHCP server must be preconfigured with the range
of MAC addresses for this server node. The MAC address is printed on a label
that is on the pull-out asset tag on the front panel. This server node has a range
of six MAC addresses assigned to the Cisco IMC. The MAC address printed on
the label is the beginning of the range of six contiguous MAC addresses.
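Because the Cisco IMC owns a block of six contiguous MAC addresses that begins at the label value, you can enumerate the full range ahead of time when preconfiguring the DHCP server. The following is a minimal sketch of that arithmetic; the label MAC address shown is a hypothetical placeholder, so substitute the value from the pull-out asset tag.

```python
# Minimal sketch: enumerate the six contiguous Cisco IMC MAC addresses,
# starting from the address printed on the pull-out asset-tag label.
# The label value below is a hypothetical placeholder.
base_mac = "00:25:B5:00:00:10"

def cimc_mac_range(label_mac: str, count: int = 6) -> list[str]:
    value = int(label_mac.replace(":", ""), 16)
    return [
        ":".join(f"{value + i:012X}"[j:j + 2] for j in range(0, 12, 2))
        for i in range(count)
    ]

for mac in cimc_mac_range(base_mac):
    print(mac)  # add each address to the DHCP server's preconfigured range
```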
Step 1 Attach a power cord to each power supply in your server, and then attach each power cord to a grounded power outlet.
Wait for approximately two minutes to let the server boot to standby power during the first bootup. You can verify system
power status by looking at the system Power Status LED on the front panel. The system is in standby power mode when
the LED is amber.
Step 2 Connect a USB keyboard and VGA monitor to the server using one of the following methods:
• Connect an optional KVM cable (Cisco PID N20-BKVM) to the KVM connector on the front panel. Connect your
USB keyboard and VGA monitor to the KVM cable.
• Connect a USB keyboard and VGA monitor to the corresponding connectors on the rear panel.
Step 3 Open the Cisco IMC Configuration Utility:
a) Press and hold the power button for four seconds to boot the server.
b) During bootup, press F8 when prompted to open the Cisco IMC Configuration Utility.
Step 4 Continue with Setting Up the System With the Cisco IMC Configuration Utility, on page 27.
Note To configure the system remotely, you must have a DHCP server on the same network as the system. Your
DHCP server must be preconfigured with the range of MAC addresses for this server node. The MAC address
is printed on a label that is on the pull-out asset tag on the front panel. This server node has a range of six
MAC addresses assigned to the Cisco IMC. The MAC address printed on the label is the beginning of the
range of six contiguous MAC addresses.
Step 1 Attach a power cord to each power supply in your server, and then attach each power cord to a grounded power outlet.
Wait for approximately two minutes to let the server boot to standby power during the first bootup. You can verify system
power status by looking at the system Power Status LED on the front panel. The system is in standby power mode when
the LED is amber.
Step 2 Plug your management Ethernet cable into the dedicated management port on the rear panel.
Step 3 Allow your preconfigured DHCP server to assign an IP address to the server node.
Step 4 Use the assigned IP address to access and log in to the Cisco IMC for the server node. Consult with your DHCP server
administrator to determine the IP address.
Note The default username for the server is admin. The default password is password.
Step 5 From the Cisco IMC Server Summary page, click Launch KVM Console. A separate KVM console window opens.
Step 6 From the Cisco IMC Summary page, click Power Cycle Server. The system reboots.
Step 7 Select the KVM console window.
Note The KVM console window must be the active window for the following keyboard actions to work.
Step 8 When prompted, press F8 to enter the Cisco IMC Configuration Utility. This utility opens in the KVM console window.
Note The first time that you enter the Cisco IMC Configuration Utility, you are prompted to change the default
password. The default password is password. The Strong Password feature is enabled.
Step 9 Continue with Setting Up the System With the Cisco IMC Configuration Utility, on page 27.
Step 1 Set the NIC mode to choose which ports to use to access Cisco IMC for server management:
• Shared LOM EXT (default)—This is the shared LOM extended mode, the factory-default setting. With this mode,
the Shared LOM and Cisco Card interfaces are both enabled. You must select the default Active-Active NIC
redundancy setting in the following step.
In this NIC mode, DHCP replies are returned to both the shared LOM ports and the Cisco card ports. If the system
determines that the Cisco card connection is not getting its IP address from a Cisco UCS Manager system because
the server is in standalone mode, further DHCP requests from the Cisco card are disabled. Use the Cisco Card
NIC mode if you want to connect to Cisco IMC through a Cisco card in standalone mode.
• Shared LOM—The 1-Gb/10-Gb Ethernet ports are used to access Cisco IMC. You must select either the
Active-Active or Active-standby NIC redundancy setting in the following step.
• Dedicated—The dedicated management port is used to access Cisco IMC. You must select the None NIC redundancy
setting in the following step.
• Cisco Card—The ports on an installed Cisco UCS Virtual Interface Card (VIC) are used to access the Cisco IMC.
You must select either the Active-Active or Active-standby NIC redundancy setting in the following step.
See also the required VIC Slot setting below.
• VIC Slot—If you use the Cisco Card NIC mode, you must select this setting to match where your VIC is installed. The choices are Riser1, Riser2, or Flex-LOM (the mLOM slot).
• If you select Riser1, you must install the VIC in slot 1.
• If you select Riser2, you must install the VIC in slot 2.
• If you select Flex-LOM, you must install an mLOM-style VIC in the mLOM slot.
Step 2 Set the NIC redundancy to your preference. This server has three possible NIC redundancy settings (a short validation sketch follows this list):
• None—The Ethernet ports operate independently and do not fail over if there is a problem. This setting can be
used only with the Dedicated NIC mode.
• Active-standby—If an active Ethernet port fails, traffic fails over to a standby port. Shared LOM and Cisco Card
modes can each use either Active-standby or Active-active settings.
• Active-active (default)—All Ethernet ports are utilized simultaneously. The Shared LOM EXT mode must use
only this NIC redundancy setting. Shared LOM and Cisco Card modes can each use either Active-standby or
Active-active settings.
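The mode-to-redundancy pairings above are easy to get wrong when scripting initial setup. The following minimal sketch encodes them as a pre-check for a planned configuration; the dictionary and function are illustrative and are not part of any Cisco tool.

```python
# Minimal sketch of the NIC mode / NIC redundancy pairings described above.
# Names mirror the labels in the Cisco IMC Configuration Utility.
ALLOWED_REDUNDANCY = {
    "Shared LOM EXT": {"Active-active"},
    "Shared LOM": {"Active-active", "Active-standby"},
    "Cisco Card": {"Active-active", "Active-standby"},
    "Dedicated": {"None"},
}

def check_nic_settings(mode: str, redundancy: str) -> None:
    allowed = ALLOWED_REDUNDANCY.get(mode)
    if allowed is None:
        raise ValueError(f"unknown NIC mode: {mode!r}")
    if redundancy not in allowed:
        raise ValueError(
            f"NIC mode {mode!r} requires one of {sorted(allowed)}, not {redundancy!r}"
        )

check_nic_settings("Dedicated", "None")                # valid pairing
check_nic_settings("Shared LOM EXT", "Active-active")  # valid pairing
```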
Step 3 Choose whether to enable DHCP for dynamic network settings, or to enter static network settings.
Note Before you enable DHCP, you must preconfigure your DHCP server with the range of MAC addresses
for this server. The MAC address is printed on a label on the rear of the server. This server has a range of
six MAC addresses assigned to Cisco IMC. The MAC address printed on the label is the beginning of the
range of six contiguous MAC addresses.
Step 10 (Optional) Enable auto-negotiation of port settings or set the port speed and duplex mode manually.
Note Auto-negotiation is applicable only when you use the Dedicated NIC mode. Auto-negotiation sets the port
speed and duplex mode automatically based on the switch port to which the server is connected. If you
disable auto-negotiation, you must set the port speed and duplex mode manually.
What to do next
Use a browser and the IP address of the Cisco IMC to connect to the Cisco IMC management interface. The
IP address is based upon the settings that you made (either a static address or the address assigned by your
DHCP server).
Note The factory default username for the server is admin. The default password is password.
To manage the server, see the Cisco UCS C-Series Rack-Mount Server Configuration Guide or the Cisco UCS
C-Series Rack-Mount Server CLI Configuration Guide for instructions on using those interfaces for your
Cisco IMC release. The links to the configuration guides are in the Cisco UCS C-Series Documentation
Roadmap.
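In addition to the GUI and CLI guides referenced above, recent Cisco IMC releases also expose a Redfish REST interface, which is convenient for scripted inventory checks once the IP address is known. A minimal sketch using Python's requests library follows; the address is a placeholder, and Redfish availability depends on your Cisco IMC release.

```python
import requests

# Minimal sketch: list systems over the Cisco IMC Redfish interface.
# The IP address and credentials below are placeholders.
CIMC_IP = "192.0.2.10"        # example address from the documentation range
AUTH = ("admin", "password")  # factory defaults; change after first login

resp = requests.get(
    f"https://{CIMC_IP}/redfish/v1/Systems",
    auth=AUTH,
    verify=False,  # Cisco IMC ships with a self-signed certificate
    timeout=30,
)
resp.raise_for_status()
for member in resp.json().get("Members", []):
    print(member["@odata.id"])
```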
This server has the following NIC mode settings that you can choose from:
• Shared LOM EXT (default)—This is the shared LOM extended mode, the factory-default setting. With
this mode, the Shared LOM and Cisco Card interfaces are both enabled. You must select the default
Active-Active NIC redundancy setting in the following step.
In this NIC mode, DHCP replies are returned to both the shared LOM ports and the Cisco card ports. If
the system determines that the Cisco card connection is not getting its IP address from a Cisco UCS
Manager system because the server is in standalone mode, further DHCP requests from the Cisco card
are disabled. Use the Cisco Card NIC mode if you want to connect to Cisco IMC through a Cisco card
in standalone mode.
• Shared LOM—The 1-Gb/10-Gb Ethernet ports are used to access Cisco IMC. You must select either the
Active-Active or Active-standby NIC redundancy setting in the following step.
• Dedicated—The dedicated management port is used to access Cisco IMC. You must select the None
NIC redundancy setting in the following step.
• Cisco Card—The ports on an installed Cisco UCS Virtual Interface Card (VIC) are used to access the
Cisco IMC. You must select either the Active-Active or Active-standby NIC redundancy setting in the
following step.
See also the required VIC Slot setting below.
• VIC Slot—If you use the Cisco Card NIC mode, you must select this setting to match where your VIC is installed. The choices are Riser1, Riser2, or Flex-LOM (the mLOM slot).
• If you select Riser1, you must install the VIC in slot 1.
• If you select Riser2, you must install the VIC in slot 2.
• If you select Flex-LOM, you must install an mLOM-style VIC in the mLOM slot.
This server has the following NIC redundancy settings that you can choose from:
• None—The Ethernet ports operate independently and do not fail over if there is a problem. This setting
can be used only with the Dedicated NIC mode.
• Active-standby—If an active Ethernet port fails, traffic fails over to a standby port. Shared LOM and
Cisco Card modes can each use either Active-standby or Active-active settings.
• Active-active (default)—All Ethernet ports are utilized simultaneously. The Shared LOM EXT mode
must use only this NIC redundancy setting. Shared LOM and Cisco Card modes can each use either
Active-standby or Active-active settings.
Caution When you upgrade the BIOS firmware, you must also upgrade the Cisco IMC firmware to the same version, or the server does not boot. Do not power off the server until the BIOS and Cisco IMC firmware versions match; otherwise, the server does not boot.
Cisco provides the Cisco Host Upgrade Utility to assist with simultaneously upgrading the BIOS, Cisco IMC,
and other firmware to compatible levels.
The server uses firmware obtained from and certified by Cisco. Cisco provides release notes with each firmware
image. There are several possible methods for updating the firmware:
• Recommended method for firmware update: Use the Cisco Host Upgrade Utility to simultaneously
upgrade the Cisco IMC, BIOS, and component firmware to compatible levels.
See the Cisco Host Upgrade Utility Quick Reference Guide for your firmware release at the documentation
roadmap link below.
• You can upgrade the Cisco IMC and BIOS firmware by using the Cisco IMC GUI interface.
See the Cisco UCS C-Series Rack-Mount Server Configuration Guide.
• You can upgrade the Cisco IMC and BIOS firmware by using the Cisco IMC CLI interface.
See the Cisco UCS C-Series Rack-Mount Server CLI Configuration Guide.
For links to the documents listed above, see the Cisco UCS C-Series Documentation Roadmap.
Step 1 Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.
Step 2 Use the arrow keys to select the BIOS menu page.
Step 3 Highlight the field to be modified by using the arrow keys.
Step 4 Press Enter to select the field that you want to change, and then modify the value in the field.
Step 5 Press the right arrow key until the Exit menu screen is displayed.
Step 6 Follow the instructions on the Exit menu screen to save your changes and exit the setup utility (or press F10). You can
exit without saving changes by pressing Esc.
Note You cannot switch to Cisco IMC CLI if the serial-over-LAN (SOL) feature is
enabled.
• After a session is created, it is shown in the CLI or web GUI by the name serial.
Note Any mouse or keyboard that is connected to the KVM cable is disconnected when
you enable Smart Access USB.
• You can use USB 3.0-based devices, but they will operate at USB 2.0 speed.
• We recommend that the USB device have only one partition.
• The file system formats supported are: FAT16, FAT32, MSDOS, EXT2, EXT3, and EXT4. NTFS is not supported. (A short validation sketch follows this list.)
• The front-panel KVM connector has been designed to switch the USB port between Host OS and BMC.
• Smart Access USB can be enabled or disabled using any of the BMC user interfaces. For example, you
can use the Cisco IMC Configuration Utility that is accessed by pressing F8 when prompted during
bootup.
• Enabled: the front-panel USB device is connected to the BMC.
• Disabled: the front-panel USB device is connected to the host.
• In a case where no management network is available to connect remotely to Cisco IMC, a Device Firmware
Update (DFU) shell over serial cable can be used to generate and download technical support files to the
USB device that is attached to front panel USB port.
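The device guidelines above lend themselves to a quick pre-check before attaching a USB device to the front panel. A minimal, illustrative sketch follows; it is not a Cisco utility.

```python
# Minimal sketch of the Smart Access USB device guidelines listed above.
SUPPORTED_FS = {"FAT16", "FAT32", "MSDOS", "EXT2", "EXT3", "EXT4"}

def usb_device_warnings(filesystem: str, partitions: int) -> list[str]:
    warnings = []
    if filesystem.upper() not in SUPPORTED_FS:  # NTFS, for example, is unsupported
        warnings.append(f"unsupported file system: {filesystem}")
    if partitions != 1:  # a single partition is recommended
        warnings.append("more than one partition is not recommended")
    return warnings

print(usb_device_warnings("NTFS", 2))
print(usb_device_warnings("FAT32", 1))  # an empty list means no warnings
```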
Front-Panel LEDs
Figure 11: Front Panel LEDs
Rear-Panel LEDs
Figure 12: Rear Panel LEDs
1 1-Gb/10-Gb Ethernet link speed (on both LAN1 and LAN2)
• Off—Link speed is 100 Mbps.
• Amber—Link speed is 1 Gbps.
• Green—Link speed is 10 Gbps.
2 1-Gb/10-Gb Ethernet link status (on both LAN1 and LAN2)
• Off—No link is present.
• Green—Link is active.
• Green, blinking—Traffic is present on the active link.
6 Power supply status (one LED per power supply unit)
AC power supplies:
• Off—No AC input (12 V main power off, 12 V standby
power off).
• Green, blinking—12 V main power off; 12 V standby
power on.
• Green, solid—12 V main power on; 12 V standby power
on.
• Amber, blinking—Warning threshold detected but 12 V
main power on.
• Amber, solid—Critical error detected; 12 V main power
off (for example, over-current, over-voltage, or
over-temperature failure).
DC power supplies:
• Off—No DC input (12 V main power off, 12 V standby
power off).
• Green, blinking—12 V main power off; 12 V standby
power on.
• Green, solid—12 V main power on; 12 V standby power
on.
• Amber, blinking—Warning threshold detected but 12 V
main power on.
• Amber, solid—Critical error detected; 12 V main power
off (for example, over-current, over-voltage, or
over-temperature failure).
1 Fan module fault LEDs (one behind each fan connector on the motherboard)
• Amber—Fan has a fault or is not fully seated.
• Green—Fan is OK.
3 DIMM fault LEDs (one behind each DIMM socket on the motherboard)
These LEDs operate only when the server is in standby power mode.
• Amber—DIMM has a fault.
• Off—DIMM is OK.
• T-30 Torx driver (supplied with replacement CPUs for heatsink removal)
• #1 flat-head screwdriver (supplied with replacement CPUs for heatsink removal)
• #1 Phillips-head screwdriver (for M.2 SSD and intrusion switch replacement)
• Electrostatic discharge (ESD) strap or other grounding equipment such as a grounded mat
Caution After a server is shut down to standby power, electric current is still present in the server. To completely
remove power as directed in some service procedures, you must disconnect all power cords from all power
supplies in the server.
You can shut down the server by using the front-panel power button or the software management interfaces.
Step 3 If a service procedure instructs you to completely remove power from the server, disconnect all power cords from the
power supplies in the server.
c) Press the latch down to the closed position. The cover is pushed forward to the closed position as you push down the
latch.
d) Lock the latch by sliding the lock button sideways to the left. Locking the latch ensures that the server latch handle does not protrude when you install the server in the rack.
• Hot-plug replacement—You must take the component offline before removing it. This applies to the following component:
• NVMe PCIe solid state drives
Warning Blank faceplates and cover panels serve three important functions: they prevent exposure to hazardous
voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might
disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate
the system unless all cards, faceplates, front covers, and rear covers are in place.
Statement 1029
Caution When handling server components, handle them only by carrier edges and use an electrostatic discharge (ESD)
wrist-strap or other grounding device to avoid damage.
Tip You can press the unit identification button on the front panel or rear panel to turn on a flashing, blue unit
identification LED on both the front and rear panels of the server. This button allows you to locate the specific
server that you are servicing when you go to the opposite side of the rack. You can also activate these LEDs
remotely by using the Cisco IMC interface.
1 Front-loading drive bays 1–10 support SAS/SATA drives.
2 M6 modular RAID card or SATA Interposer card
5 DIMM sockets on motherboard, 32 total, 16 per CPU. Eight DIMM sockets are placed between the CPUs and the server sidewall, and 16 DIMM sockets are placed between the two CPUs.
6 Motherboard CPU socket two (CPU2)
11 Modular LOM (mLOM) card bay on chassis floor (x16 PCIe lane). The mLOM card bay sits below PCIe riser slot 1.
12 Motherboard CPU socket one (CPU1)
The view in the following figure shows the individual component locations and numbering, including the FHFW PCIe cards.
Figure 16: Cisco UCS C220 M6 Server, Full Height, Full Width PCIe Cards, Serviceable Component Locations
1 Front-loading drive bays 1–10 support SAS/SATA drives.
2 M6 modular RAID card or SATA Interposer card
11 PCIe riser slot 1. Accepts one half-height, half-width PCIe riser card.
Note The chassis supports an internal USB drive (not shown) at this PCIe slot. See Replacing a USB Drive, on page 91.
12 Modular LOM (mLOM) card bay on chassis floor (x16 PCIe lane). The mLOM card bay sits below PCIe riser slot 1.
The view in the following figure shows the individual component locations and numbering, including the HHHL PCIe slots.
The Technical Specifications Sheets for all versions of this server, which include supported component part
numbers, are at Cisco UCS Servers Technical Specifications Sheets (scroll down to Technical Specifications).
Note You do not have to shut down the server or drive to replace SAS/SATA hard drives or SSDs because they
are hot-swappable. To replace an NVMe PCIe SSD drive, which must be shut down before removal, see
Replacing a Front-Loading NVMe SSD, on page 51.
Step 1 Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.
Step 2 Go to the Boot Options tab.
Step 3 Set Boot Mode to UEFI Mode.
Step 4 Under Boot Option Priorities, set your OS installation media (such as a virtual DVD) as your Boot Option #1.
Step 5 Press F10 to save changes and exit the BIOS setup utility. Allow the server to reboot.
Step 6 After the OS installs, verify the installation:
a) Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.
b) Go to the Boot Options tab.
c) Under Boot Option Priorities, verify that the OS you installed is listed as your Boot Option #1.
Step 1 Remove the drive that you are replacing or remove a blank drive tray from the bay:
Step 1 Enter the BIOS setup utility by pressing the F2 key when prompted during bootup.
Step 2 Navigate to Advanced > PCI Subsystem Settings > NVMe SSD Hot-Plug Support.
Step 3 Set the value to Enabled.
Step 4 Save your changes and exit the utility.
Step 1 Use a browser to log in to the Cisco IMC GUI for the server.
Step 2 Navigate to Compute > BIOS > Advanced > PCI Configuration.
Step 3 Set NVME SSD Hot-Plug Support to Enabled.
Step 4 Save your changes.
Note OS-surprise removal is not supported. OS-informed hot-insertion and hot-removal are supported on all
supported operating systems except VMware ESXi.
Note OS-informed hot-insertion and hot-removal must be enabled in the system BIOS. See Enabling Hot-Plug
Support in the System BIOS, on page 51.
c) Push the tray into the slot until it touches the backplane, and then close the ejector lever to lock the drive in place.
Step 3 Observe the drive-tray LED and wait until it returns to solid green before accessing the drive:
• Off—The drive is not in use.
• Green, blinking—The drive is initializing following hot-plug insertion.
• Green—The drive is in use and functioning properly.
The cable has two backplane connectors (NVMe B1 and B2), and only one connector (NVMe B) for the motherboard. Connectors are keyed, and they are different at each end of the cable to prevent improper installation. The backplane connector IDs are silkscreened onto the interior of the server.
For this task, you need the NVMe "Y" cable (74-124686-01) which is available through CBL-FNVME-220M6=.
1 Connector B1
2 Connector B2
3 Motherboard connector
Step 3 Orient the cable correctly and lower it into place, but do not attach it yet.
Step 4
Step 5 Pass the NVMe B motherboard connector through the rectangular cutout in the fan cage's sheet metal.
Note To pass the NVMe B connector through the cutout, rotate the connector so that it is horizontal.
To provide enough slack in the cable, make sure that you have not attached the NVMe B1 and B2 connectors
yet.
Tip Each fan module has a fault LED next to the fan connector on the motherboard. This LED lights green when
the fan is correctly seated and is operating OK. The LED lights amber when the fan has a fault or is not
correctly seated.
Caution You do not have to shut down or remove power from the server to replace fan modules because they are hot-
swappable. However, to maintain proper cooling, do not operate the server for more than one minute with
any fan module removed.
a) Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
b) Remove the top cover from the server as described in Removing Top Cover, on page 42.
c) Grasp the fan module at its front and rear finger-grips. Lift straight up to disengage its connector from the motherboard.
Step 2 Install a new fan module:
a) Set the new fan module in place. The arrow printed on the top of the fan module should point toward the rear of the
server.
b) Press down gently on the fan module to fully engage it with the connector on the motherboard.
c) Replace the top cover to the server.
d) Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
Note If you need to remove the MLOM to install riser cages, see Replacing an mLOM Card, on page 104.
By using a Cisco replacement kit, you can change your server's rear PCIe riser configuration from three
half-height riser cages to full-height riser cages or three half-height riser cages to two full-height riser cages.
To perform this replacement, see the following topics:
• Required Equipment for Replacing Riser Cages, on page 58
• Removing Half Height Riser Cages, on page 59
• Installing Full Height Riser Cages, on page 61
• Removing Full Height Riser Cages, on page 64
• Installing Half Height Riser Cages, on page 68
Note To remove and install screws, you also need a #2 Phillips screwdriver, which is not provided by Cisco.
Step 1 Remove the server top cover to gain access to the PCIe riser cages.
See Removing Top Cover, on page 42.
Step 3 Using a #2 Phillips screwdriver, remove the four screws that secure the half height rear wall and mLOM bracket to the
chassis sheet metal.
Note One of the screws is located behind the rear wall, so it might be difficult to see when you are facing the server's rear riser slots.
Figure 20: Locations of Securing Screws, Facing Rear Riser Slots
Step 4 Remove the half height rear wall and mLOM bracket.
a) Grasp each end of the half height rear wall and remove it.
b) Grasp each end of the mLOM bracket and remove it.
Step 5 Save the three HH riser cages and the half height rear wall.
What to do next
Install the two full-height riser cages. See Installing Full Height Riser Cages, on page 61.
Step 3 Using a #2 Phillips screwdriver, install the four screws that secure the mLOM bracket and the FH rear wall to the server sheet metal.
Caution Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.
b) Holding each riser cage level, lower it into its PCIe slot, then tighten the thumbscrew by using a #2 Phillips screwdriver
or your fingers.
Caution Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.
Step 1 Remove the server top cover to gain access to the PCIe riser cages.
See Removing Top Cover, on page 42.
3 Rear riser cage 3
4 Riser cage thumbscrews, two total (one per riser cage)
Step 3 Using a #2 Phillips screwdriver, remove the four screws that secure the full height rear wall and mLOM bracket to the chassis sheet metal.
Note One of the screws is located behind the rear wall, so it might be difficult to see when you are facing the server's rear riser slots.
Step 4 Remove the full height rear wall and mLOM bracket.
a) Grasp each end of the full height rear wall and remove it.
Figure 26: Removing the Full Height Rear Wall
Step 5 Save the two FH riser cages and the full height rear wall.
What to do next
Install the two half-height riser cages. See Installing Half Height Riser Cages, on page 68 .
b) Align the screw holes in the HH rear wall with the screw holes in the server sheet metal.
c) Holding the rear wall level, seat it onto the server sheet metal, making sure that the screw holes line up.
Step 3 Using a #2 Phillips screwdriver, install the four screws that secure the mLOM bracket and the HH rear wall to the server sheet metal.
Caution Tighten the screws to 4 lbs-in of torque. Do not overtighten the screws or you risk stripping them.
b) Holding each riser cage level, lower it into its PCIe slot, then tighten the thumbscrew by using a #2 Phillips screwdriver
or your fingers.
Step 5 Ensure the three riser cages are securely seated on the motherboard.
• One type of CPU heatsink is available for this server, the low profile heatsink (UCSC-HSLP-M6). This
heatsink has four T30 Torx screws on the main heatsink, and 2 Phillips-head screws on the extended
heatsink.
See also Additional CPU-Related Parts to Order with RMA Replacement CPUs, on page 79.
Step 1 Detach the CPU and heatsink (the CPU assembly) from the CPU socket.
a) Using a #2 Phillips screwdriver, loosen the two captive screws at the far end of the heatsink.
b) Using a T30 Torx driver, loosen all the securing nuts.
c) Push the rotating wires towards each other to move them to the unlocked position. The rotating wire locked and
unlocked positions are labeled on the top of the heatsink.
Caution Make sure that the rotating wires are as far inward as possible. When fully unlocked, the bottom of the
rotating wire disengages and allows the removal of the CPU assembly. If the rotating wires are not fully
in the unlocked position, you can feel resistance when attempting to remove the CPU assembly.
d) Grasp the heatsink along the edge of the fins and lift the CPU assembly off of the motherboard.
Caution While lifting the CPU assembly, make sure not to bend the heatsink fins. Also, if you feel any resistance
when lifting the CPU assembly, verify that the rotating wires are completely in the unlocked position.
Step 2 Put the CPU assembly on a rubberized mat or other ESD-safe work surface.
When placing the CPU on the work surface, the heatsink label should be facing up. Do not rotate the CPU assembly
upside down.
Ensure that the heatsink sits level on the work surface.
d) Gently pull up on the extended edge of the CPU carrier (1) so that you can disengage the second pair of CPU clips
near both ends of the TIM breaker.
Caution Be careful when flexing the CPU carrier! If you apply too much force you can damage the CPU carrier.
Flex the carrier only enough to release the CPU clips. Make sure to watch the clips while performing
this step so that you can see when they disengage from the CPU carrier.
e) Gently pull up on the opposite edge of the CPU carrier (2) so that you can disengage the pair of CPU clips.
Step 5 When all the CPU clips are disengaged, grasp the carrier, and lift it and the CPU to detach them from the heatsink.
Note If the carrier and CPU do not lift off of the heatsink, attempt to disengage the CPU clips again.
Step 6 Use the provided cleaning kit (UCSX-HSCK) to remove all of the thermal interface barrier (thermal grease) from the
CPU, CPU carrier, and heatsink.
Important Make sure to use only the Cisco-provided cleaning kit, and make sure that no thermal grease is left on any
surfaces, corners, or crevices. The CPU, CPU carrier, and heatsink must be completely clean.
What to do next
Choose the appropriate option:
• If you will be installing a CPU, go to Installing the CPUs and Heatsinks, on page 77.
• If you will not be installing a CPU, verify that a CPU socket cover is installed. This option is valid only
for CPU socket 2 because CPU socket 1 must always be populated in a runtime deployment.
Step 1 Remove the CPU socket dust cover (UCS-CPU-M6-CVR=) on the server motherboard.
a) Push the two vertical tabs inward to disengage the dust cover.
b) While holding the tabs in, lift the dust cover up to remove it.
Step 2 Grasp the CPU fixture on the edges labeled PRESS, lift it out of the tray, and place the CPU assembly on an ESD-safe
work surface.
Step 3 Apply new TIM.
Note The heatsink must have new TIM on the heatsink-to-CPU surface to ensure proper cooling and performance.
• If you are installing a new heatsink, it is shipped with a pre-applied pad of TIM. Go to step 4.
• If you are reusing a heatsink, you must remove the old TIM from the heatsink and then apply new TIM to the CPU
surface from the supplied syringe. Continue with step a below.
a) Apply the Bottle #1 cleaning solution that is included with the heatsink cleaning kit (UCSX-HSCK=), as well as with the spare CPU package, to the old TIM on the heatsink and let it soak for at least 15 seconds.
b) Wipe all of the TIM off the heatsink using the soft cloth that is included with the heatsink cleaning kit. Be careful to
avoid scratching the heatsink surface.
c) Completely clean the bottom surface of the heatsink using Bottle #2 to prepare the heatsink for installation.
d) Using the syringe of TIM provided with the new CPU (UCS-CPU-TIM=), apply 1.5 cubic centimeters (1.5 ml) of
thermal interface material to the top of the CPU. Use the pattern shown in the following figure to ensure even coverage.
Figure 30: Thermal Interface Material Application Pattern
Caution Use only the correct heatsink for your CPU: UCSC-HSLP-M6=.
Note The following items apply to CPU replacement scenarios. If you are replacing a system chassis and moving
existing CPUs to the new motherboard, you do not have to separate the heatsink from the CPU.
• Scenario 3—You have a damaged CPU carrier (the plastic frame around the CPU):
• CPU Carrier: UCS-M6-CPU-CAR=
• #1 flat-head screwdriver (for separating the CPU from the heatsink)
• Heatsink cleaning kit (UCSX-HSCK=)
One cleaning kit can clean up to four CPUs.
• Thermal interface material (TIM) kit for M5 servers (UCS-CPU-TIM=)
One TIM kit covers one CPU.
A CPU heatsink cleaning kit is good for up to four CPU and heatsink cleanings. The cleaning kit contains
two bottles of solution, one to clean the CPU and heatsink of old TIM and the other to prepare the surface of
the heatsink.
New heatsink spares come with a pre-applied pad of TIM. It is important to clean any old TIM off of the CPU
surface prior to installing the heatsinks. Therefore, even when you are ordering new heatsinks, you must order
the heatsink cleaning kit.
Caution DIMMs and their sockets are fragile and must be handled with care to avoid damage during installation.
Note DIMMs and their slots are keyed to insert only one way. Make sure to align the notch on the bottom of the
DIMM with the key in the DIMM slot. If you are seating a DIMM in a slot and feel resistance, remove the
DIMM and verify that its notch is properly aligned with the slot's key.
Caution Cisco does not support third-party DIMMs. Using non-Cisco DIMMs in the server might result in system
problems or damage to the motherboard.
Note To ensure the best server performance, it is important that you are familiar with memory performance guidelines
and population rules before you install or replace DIMMs.
• When one DIMM is used, it must be populated in DIMM slot 1 (farthest away from the CPU) of a given
channel.
• When single- or dual-rank DIMMs are populated in two DIMMs per channel (2DPC) configurations,
always populate the higher number rank DIMM first (starting from the farthest slot). For a 2DPC example,
first populate with dual-rank DIMMs in DIMM slot 1. Then populate single-rank DIMMs in DIMM 2
slot.
• Each channel has two DIMM sockets (for example, channel A = slots A1, A2).
• In a single-CPU configuration, populate the channels for CPU1 only (P1 A1 through P1 H2).
• For optimal performance, populate DIMMs in the order shown in the following table, depending on the
number of CPUs and the number of DIMMs per CPU. If your server has two CPUs, balance DIMMs
evenly across the two CPUs as shown in the table. DIMMs for CPU 1 and CPU 2 (when populated) must
always be configured identically.
• Cisco memory from previous generation servers (DDR3 and DDR4) is not compatible with the server.
• Memory can be configured in any number of DIMM pairs, although for optimal performance, see the following document: https://www.cisco.com/c/dam/en/us/products/collateral/servers-unified-computing/ucs-c-series-rack-servers/c220-c240-b200-m6-memory-guide.pdf.
• DIMM mixing is supported, but not when Intel Optane Persistent Memory is installed.
• LRDIMMs cannot be mixed with RDIMMs.
• RDIMMs can be mixed with RDIMMs, and LRDIMMs can be mixed with LRDIMMs, but mixing of non-3DS and 3DS LRDIMMs is not allowed in the same channel, across different channels, or across different sockets.
• Allowed mixing must be in pairs of similar quantities (for example, 8x32GB and 8x64GB, 8x16GB and 8x64GB, or 8x16GB and 8x32GB). Mixing of 10x32GB and 6x64GB, for example, is not allowed.
• DIMMs are keyed. To install them properly, make sure that the notch on the bottom of the DIMM lines up with the key in the slot.
• Populate all slots with a DIMM or DIMM blank. A DIMM slot cannot be empty.
Number of DIMMs per CPU: Population order (populate CPU 1 and CPU 2 identically)
1: (A1)
12: (A1, C1); (D1, E1); (G1, H1); then (A2, C2); (D2, E2); (G2, H2)
16: All populated (A1 through H1 and A2 through H2)
Table 5: DIMM Plus Intel Optane PMem 200 Series Memory Population Order
8 + 4 DIMMs per CPU: DDR4 DIMMs in slots A0, B0, C0, D0, E0, F0, G0, H0; Intel Optane PMem 200 Series in slots A1, C1, E1, G1
8 + 8 DIMMs per CPU: DDR4 DIMMs in slots A0, B0, C0, D0, E0, F0, G0, H0; Intel Optane PMem 200 Series in slots A1, B1, C1, D1, E1, F1, G1, H1
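The population order in the preceding tables can be expressed as a small lookup. The following Python sketch is illustrative only; the dictionary and helper names are hypothetical, and it encodes just the rows shown above.

# Illustrative only: slot lists encode the population-order table above.
# POPULATION_ORDER and slots_to_populate are hypothetical names, not part
# of any Cisco tool.
POPULATION_ORDER = {
    1:  ["A1"],
    12: ["A1", "C1", "D1", "E1", "G1", "H1",
         "A2", "C2", "D2", "E2", "G2", "H2"],
    16: [f"{ch}{slot}" for slot in (1, 2) for ch in "ABCDEFGH"],
}

def slots_to_populate(dimms_per_cpu: int) -> list[str]:
    """Return the DIMM slots to fill for one CPU; a two-CPU server
    must populate CPU 1 and CPU 2 identically."""
    if dimms_per_cpu not in POPULATION_ORDER:
        raise ValueError(f"No population row shown for {dimms_per_cpu} DIMMs")
    return POPULATION_ORDER[dimms_per_cpu]

print(slots_to_populate(12))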
Memory Mirroring
The CPUs in the server support memory mirroring only when an even number of channels are populated with
DIMMs. If one or three channels are populated with DIMMs, memory mirroring is automatically disabled.
Memory mirroring reduces the amount of memory available by 50 percent because only one of the two populated channels provides data; the second, duplicate channel provides redundancy. For example, with 16 x 32 GB DIMMs (512 GB installed), 256 GB is available to the operating system when mirroring is enabled.
Replacing DIMMs
Identifying a Faulty DIMM
Each DIMM socket has a corresponding DIMM fault LED, directly in front of the DIMM socket. See Internal
Diagnostic LEDs, on page 39 for the locations of these LEDs. When the server is in standby power mode,
these LEDs light amber to indicate a faulty DIMM.
Caution DCPMMs and their sockets are fragile and must be handled with care to avoid damage during installation.
Note To ensure the best server performance, it is important that you are familiar with memory performance guidelines
and population rules before you install or replace DCPMMs.
Configuration Rules
Observe the following rules and guidelines:
• To use DCPMMs in this server, two CPUs must be installed.
• When using DCPMMs in a server:
• The DDR4 DIMMs installed in the server must all be the same size.
• The DCPMMs installed in the server must all be the same size and must have the same SKU.
• The DCPMMs run at 2666 MHz. If you have 2933 MHz RDIMMs or LRDIMMs in the server and you
add DCPMMs, the main memory speed clocks down to 2666 MHz to match the speed of the DCPMMs.
• Each DCPMM draws 18 W sustained, with a 20 W peak (see the worked power example after this list).
• Intel Optane Persistent Memory supports the following memory modes:
• App Direct Mode, in which the PMEM operates as a solid-state disk storage device. Data is saved and is non-volatile. Both PMEM and DIMM capacities count towards the CPU capacity limit.
• Memory Mode, in which the PMEM operates as a 100% memory module. Data is volatile, and DRAM acts as a cache for PMEMs. Only the PMEM capacity counts towards the CPU capacity limit. This is the factory default mode.
• If DRAMs and PMEMs are mixed, the following configurations are the only ones supported per CPU socket:
• 4 DRAMs and 4 PMEMs
• 8 DRAMs and 4 PMEMs
• 8 DRAMs and 1 PMEM
• 8 DRAMs and 8 PMEMs
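As a worked example of the power figures above, the following sketch is illustrative only; the function name is hypothetical, and the per-module values are the 18 W sustained and 20 W peak stated in this list.

# Illustrative only: aggregate DCPMM power from the figures stated above
# (18 W sustained, 20 W peak per module). dcpmm_power_w is a hypothetical name.
def dcpmm_power_w(modules: int, peak: bool = False) -> float:
    per_module_w = 20.0 if peak else 18.0
    return modules * per_module_w

# Example: 8 DRAMs + 8 PMEMs per socket on a two-CPU server = 16 PMEMs.
print(dcpmm_power_w(16))             # 288.0 W sustained
print(dcpmm_power_w(16, peak=True))  # 320.0 W peak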
Note DCPMM configuration is always applied to all DCPMMs in a region, including a replacement DCPMM. You
cannot provision a specific replacement DCPMM on a preconfigured server.
e) Locate the DCPMM that you are removing, and then open the ejector levers at each end of its DIMM slot.
Step 2 Install a new DCPMM:
Note Before installing DCPMMs, see the population rules for this server: Intel Optane DC Persistent Memory
Module Population Rules and Performance Guidelines, on page 85.
a) Align the new DCPMM with the empty slot on the motherboard. Use the alignment feature in the DIMM slot to
correctly orient the DCPMM.
b) Push down evenly on the top corners of the DCPMM until it is fully seated and the ejector levers on both ends lock
into place.
c) Reinstall the air baffle.
d) Replace the top cover to the server.
e) Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
Step 3 Perform post-installation actions:
• If the existing configuration is in 100% Memory mode, and the new DCPMM is also in 100% Memory mode (the
factory default), the only action is to ensure that all DCPMMs are at the latest, matching firmware level.
• If the existing configuration is fully or partly in App Direct mode and the new DCPMM is also in App Direct mode, ensure that all DCPMMs are at the latest matching firmware level, and re-provision the DCPMMs by creating a new goal.
• If the existing configuration and the new DCPMM are in different modes, then ensure that all DCPMMs are at the
latest matching firmware level and also re-provision the DCPMMs by creating a new goal.
There are a number of tools for configuring goals, regions, and namespaces.
• To use the server's BIOS Setup Utility, see Server BIOS Setup Utility Menu for DCPMM, on page 87.
• To use Cisco IMC or Cisco UCS Manager, see the Cisco UCS: Configuring and Managing Intel Optane DC Persistent
Memory Modules guide.
Caution Potential data loss: If you change the mode of a currently installed DCPMM from App Direct or Mixed Mode
to Memory Mode, any data in persistent memory is deleted.
DCPMMs can be configured by using the server's BIOS Setup Utility, Cisco IMC, Cisco UCS Manager, or
OS-related utilities.
• To use the BIOS Setup Utility, see the section below.
• To use Cisco IMC, see the configuration guides for Cisco IMC 4.0(4) or later: Cisco IMC CLI and GUI
Configuration Guides
• To use Cisco UCS Manager, see the configuration guides for Cisco UCS Manager 4.0(4) or later: Cisco
UCS Manager CLI and GUI Configuration Guides
The server BIOS Setup Utility includes menus for DCPMMs. They can be used to view or configure DCPMM
regions, goals, and namespaces, and to update DCPMM firmware.
To open the BIOS Setup Utility, press F2 when prompted during a system boot.
• Update firmware
• Configure security
You can enable security mode and set a password so that the DCPMM configuration is locked.
When you set a password, it applies to all installed DCPMMs. Security mode is disabled by default.
• Configure data policy
• Regions: Displays regions and their persistent memory types. When using App Direct mode with interleaving, the number of regions is equal to the number of CPU sockets in the server. When using App Direct mode without interleaving, the number of regions is equal to the number of DCPMMs in the server (see the sketch after this list).
From the Regions page, you can configure memory goals that tell the DCPMM how to allocate resources.
• Create goal config
• Namespaces: Displays namespaces and allows you to create or delete them when persistent memory is used. Namespaces can also be created when creating goals. Namespace provisioning of persistent memory applies only to the selected region.
Existing namespace attributes, such as size, cannot be modified; you can only add or delete namespaces.
• Total capacity: Displays the total resource allocation across the server.
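The region-count rule described above can be summarized in a short sketch. This is illustrative only; the function name is hypothetical.

# Illustrative only: region count rule from the Regions description above.
def app_direct_region_count(cpu_sockets: int, dcpmms: int, interleaved: bool) -> int:
    """Interleaved App Direct: one region per CPU socket.
    Non-interleaved App Direct: one region per DCPMM."""
    return cpu_sockets if interleaved else dcpmms

print(app_direct_region_count(2, 8, interleaved=True))   # 2 regions
print(app_direct_region_count(2, 8, interleaved=False))  # 8 regions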
Note The Cisco IMC firmware does not include an out-of-band management interface for the M.2 drives installed
in the M.2 version of this mini-storage module (UCS-MSTOR-M2). The M.2 drives are not listed in Cisco
IMC inventory, nor can they be managed by Cisco IMC. This is expected behavior.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server, on
page 41.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing Top Cover, on page 42.
Step 4 Remove a carrier from its socket:
a) Locate the mini-storage module carrier in its socket just in front of power supply 1.
b) At each end of the carrier, push outward on the clip that secures the carrier.
c) Lift both ends of the carrier to disengage it from the socket on the motherboard.
d) Set the carrier on an anti-static surface.
Step 5 Install a carrier to its socket:
a) Position the carrier over the socket, with the carrier's connector facing down and at the same end as the motherboard socket. Two alignment pegs must match with two holes on the carrier.
b) Gently push down the socket end of the carrier so that the two pegs go through the two holes on the carrier.
c) Push down on the carrier so that the securing clips click over it at both ends.
Step 6 Replace the top cover to the server.
Step 7 Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
2 Alignment pegs
• Dual SD cards can be configured in a RAID 1 array through the Cisco IMC interface.
• SD slot 1 is on the top side of the carrier; SD slot 2 is on the underside of the carrier (the same side as
the carrier's motherboard connector).
Step 1 Power off the server and then remove the mini-storage module carrier from the server as described in Replacing a
Mini-Storage Module Carrier, on page 89.
Step 2 Remove an SD card:
a) Push on the top of the SD card, and then release it to allow it to spring out from the socket.
b) Grasp and remove the SD card from the socket.
Step 3 Install a new SD card:
a) Insert the new SD card into the socket with its label side facing up.
b) Press on the top of the SD card until it clicks in the socket and stays in place.
Step 4 Install the mini-storage module carrier back into the server and then power it on as described in Replacing a Mini-Storage
Module Carrier, on page 89.
Step 1 Power off the server and then remove the mini-storage module carrier from the server as described in Replacing a
Mini-Storage Module Carrier, on page 89.
Step 2 Remove an M.2 SSD:
a) Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 SSD to the carrier.
b) Remove the M.2 SSD from its socket on the carrier.
Step 3 Install a new M.2 SSD:
a) Angle the M.2 SSD downward and insert the connector-end into the socket on the carrier. The M.2 SSD's label must
face up.
b) Press the M.2 SSD flat against the carrier.
c) Install the single screw that secures the end of the M.2 SSD to the carrier.
Step 4 Install the mini-storage module carrier back into the server and then power it on as described in Replacing a Mini-Storage
Module Carrier, on page 89.
Caution We do not recommend that you hot-swap the internal USB drive while the server is powered on because of
the potential for data loss.
Step 1 Enter the BIOS Setup Utility by pressing the F2 key when prompted during bootup.
Step 2 Navigate to the Advanced tab.
Step 3 On the Advanced tab, select USB Configuration.
Step 4 On the USB Configuration page, select USB Ports Configuration.
Step 5 Scroll to USB Port: Internal, press Enter, and then choose either Enabled or Disabled from the dialog box.
Step 6 Press F10 to save and exit the utility.
Warning There is danger of explosion if the battery is replaced incorrectly. Replace the battery only with the same or
equivalent type recommended by the manufacturer. Dispose of used batteries according to the manufacturer’s
instructions.
[Statement 1015]
Warning Recyclers: Do not shred the battery! Make sure you dispose of the battery according to appropriate regulations
for your country or locale.
The real-time clock (RTC) battery retains system settings when the server is disconnected from power. The
battery type is CR2032. Cisco supports the industry-standard CR2032 battery, which can be ordered from
Cisco (PID N20-MBLIBATT) or purchased from most electronic stores.
One power supply is mandatory, and one more can be added for 1 + 1 redundancy. You cannot mix AC and
DC power supplies in the same server.
• See also Power Specifications, on page 147 for more information about the power supplies.
• See also Rear-Panel LEDs, on page 38 for information about the power supply LEDs.
This section includes procedures for replacing AC and DC power supply units.
See the following.
• Replacing AC Power Supplies, on page 95
• Replacing DC Power Supplies, on page 96
• Installing DC Power Supplies (First Time Installation), on page 97
• Grounding for DC Power Supplies, on page 98
Note If you have ordered a server with power supply redundancy (two power supplies), you do not have to power
off the server to replace a power supply because they are redundant as 1+1.
Note Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Step 1 Remove the power supply that you are replacing or a blank panel from an empty bay:
a) Perform one of the following actions:
• If your server has only one power supply, shut down and remove power from the server as described in Shutting
Down and Removing Power From the Server, on page 41.
• If your server has two power supplies, you do not have to shut down the server.
b) Remove the power cord from the power supply that you are replacing.
c) Grasp the power supply handle while pinching the release lever toward the handle.
d) Pull the power supply out of the bay.
Step 2 Install a new power supply:
a) Grasp the power supply handle and insert the new power supply into the empty bay.
b) Push the power supply into the bay until the release lever locks.
c) Connect the power cord to the new power supply.
d) Only if you shut down the server, press the Power button to boot the server to main power mode.
Note This procedure is for replacing DC power supplies in a server that already has DC power supplies installed.
If you are installing DC power supplies to the server for the first time, see Installing DC Power Supplies (First
Time Installation), on page 97.
Warning A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.
Statement 1022
Warning This product requires short-circuit (overcurrent) protection, to be provided as part of the building
installation. Install only in accordance with national and local wiring regulations.
Statement 1045
Warning Installation of the equipment must comply with local and national electrical codes.
Statement 1074
Note If you are replacing DC power supplies in a server with power supply redundancy (two power supplies), you
do not have to power off the server to replace a power supply because they are redundant as 1+1.
Note Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Step 1 Remove the DC power supply that you are replacing or a blank panel from an empty bay:
a) Perform one of the following actions:
• If you are replacing a power supply in a server that has only one DC power supply, shut down and remove power
from the server as described in Shutting Down and Removing Power From the Server, on page 41.
• If you are replacing a power supply in a server that has two DC power supplies, you do not have to shut down
the server.
b) Remove the power cord from the power supply that you are replacing. Lift the connector securing clip slightly and
then pull the connector from the socket on the power supply.
c) Grasp the power supply handle while pinching the release lever toward the handle.
d) Pull the power supply out of the bay.
Step 2 Install a new DC power supply:
a) Grasp the power supply handle and insert the new power supply into the empty bay.
b) Push the power supply into the bay until the release lever locks.
c) Connect the power cord to the new power supply. Press the connector into the socket until the securing clip clicks
into place.
d) Only if you shut down the server, press the Power button to boot the server to main power mode.
Figure 32: Replacing DC Power Supplies
Note This procedure is for installing DC power supplies to the server for the first time. If you are replacing DC
power supplies in a server that already has DC power supplies installed, see Replacing DC Power Supplies,
on page 96.
Warning A readily accessible two-poled disconnect device must be incorporated in the fixed wiring.
Statement 1022
Warning This product requires short-circuit (overcurrent) protection, to be provided as part of the building
installation. Install only in accordance with national and local wiring regulations.
Statement 1045
Warning Installation of the equipment must comply with local and national electrical codes.
Statement 1074
Note Do not mix power supply types or wattages in the server. Both power supplies must be identical.
Caution As instructed in the first step of this wiring procedure, turn off the DC power source from your facility’s
circuit breaker to avoid electric shock hazard.
Step 1 Turn off the DC power source from your facility’s circuit breaker to avoid electric shock hazard.
Note The required DC input cable is Cisco part CAB-48DC-40A-8AWG. This 3-meter cable has a 3-pin connector
on one end that is keyed to the DC input socket on the power supply. The other end of the cable has no
connector so that you can wire it to your facility’s DC power.
Step 2 Wire the non-terminated end of the cable to your facility’s DC power input source.
Step 3 Connect the terminated end of the cable to the socket on the power supply. The connector is keyed so that the wires align
for correct polarity and ground.
Step 4 Return DC power from your facility’s circuit breaker.
Step 5 Press the Power button to boot the server to main power mode.
Figure 33: Installing DC Power Supplies
Step 6 See Grounding for DC Power Supplies, on page 98 for information about additional chassis grounding.
Note The grounding points on the chassis are sized for 10-32 screws. You must provide your own screws, grounding lug, and grounding wire. The grounding lug must be a dual-hole lug that fits 10-32 screws. The grounding cable that you provide must be 14 AWG (2 mm), minimum 60°C wire, or as permitted by the local code.
[Table: PCIe slot specifications. Columns: Slot Number, Electrical Lane Width, Connector Length, Maximum Card Length, Card Height (Rear-Panel Opening), NCSI Support.]
PCIe cable connector for front-panel NVMe SSDs: Gen-3 x8. The other end of the cable connects to the front drive backplane to support front-panel NVMe SSDs.
Note If you are installing a Cisco UCS Virtual Interface Card, there are prerequisite considerations. See Cisco
Virtual Interface Card (VIC) Considerations, on page 102.
Note RAID controller cards install into a separate mRAID riser. See Replacing a SAS Storage Controller Card
(RAID or HBA), on page 121.
Step 1 Remove an existing PCIe card (or a blank filler panel) from the PCIe riser:
a) Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 41.
b) Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
c) Remove the top cover from the server as described in Removing Top Cover, on page 42.
d) Remove any cables from the ports of the PCIe card that you are replacing.
e) Use two hands to grasp the external riser handle and the blue area at the front of the riser.
f) Lift straight up to disengage the riser's connectors from the two sockets on the motherboard. Set the riser upside-down
on an antistatic surface.
g) Open the hinged plastic retainer that secures the rear-panel tab of the card.
h) Pull evenly on both ends of the PCIe card to remove it from the socket on the PCIe riser.
If the riser has no card, remove the blanking panel from the rear opening of the riser.
PCIe riser 1/slot 1 has a long-card guide at the front end of the riser. Use the slot in the long-card guide to help support
a full-length card.
Step 2 Install a new PCIe card:
a) With the hinged tab retainer open, align the new PCIe card with the empty socket on the PCIe riser.
b) Push down evenly on both ends of the card until it is fully seated in the socket.
c) Ensure that the card’s rear panel tab sits flat against the riser rear-panel opening and then close the hinged tab retainer
over the card’s rear-panel tab.
d) Position the PCIe riser over its two sockets on the motherboard and over the two chassis alignment channels.
Figure 35: PCIe Riser Alignment Features
e) Carefully push down on both ends of the PCIe riser to fully engage its two connectors with the two sockets on the
motherboard.
f) Replace the top cover to the server.
g) Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
Note If you use the Cisco Card NIC mode, you must also make a VIC Slot setting that matches where your VIC is
installed. The options are Riser1, Riser2, and Flex-LOM. See NIC Mode and NIC Redundancy Settings, on
page 30 for more information about NIC modes.
• If you want to use the Cisco UCS VIC card for Cisco UCS Manager integration, see also the Cisco UCS
C-Series Server Integration with Cisco UCS Manager Guides for details about supported configurations,
cabling, and other requirements.
• C-Series servers support a maximum of three (3) VIC adapters, one mLOM and two PCIe.
Each compatible riser supports only one NCSI-capable card, whether a Cisco VIC or a third-party advanced network adapter (NVIDIA ConnectX, Intel X700/X800, and so on), in the higher-numbered compatible slot on each riser.
PCIe x16 slots are recommended and preferred for high-performance networking, including Cisco VICs. If a GPU or other non-networking add-in card occupies the x16 slot on the riser, a VIC can be placed in the x8 alternate slot listed in the support table. Performance for 100-Gbps network interfaces may be degraded in an x8 slot, and this configuration is not recommended.
If a third-party network adapter with NCSI is in the x16 slot and a VIC is not supported on that riser, the system boots if a VIC is installed in the x8 slot, but that VIC is not detected and is not functional.
This consideration applies to Cisco 15000 Series VICs only.
[Table: VIC support. Columns: VIC, How Many Supported in Server, Slots That Support VICs, Primary Slot for Cisco UCS Manager Integration, Primary Slot for Cisco Card NIC Mode, Minimum Cisco IMC Firmware.]
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server, on
page 41.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 If full height riser cages are present, remove them now.
See Removing Full Height Riser Cages, on page 64.
Step 4 If you have not already removed the riser cage rear wall, remove it now.
a) Using a #2 Phillips screwdriver, remove the two countersink screws.
b) Grasp each end of the full height rear wall and remove it.
Step 5 If you have not removed the existing mLOM bracket, remove it now.
a) Using a #2 Phillips screwdriver, remove the two countersink screws that hold the mLOM bracket in place.
b) Lift the mLOM bracket straight up to remove it from the server.
Step 7 If you are not installing an mLOM, install the filler panel in the mLOM slot as shown below. Otherwise, go to Installing
an mLOM Card (2FH Riser Cages), on page 108.
a) Lower the filler panel onto the server, aligning the screwholes.
b) Using a #2 Phillips screwdriver, insert and tighten the screws.
Caution Tighten screws to 4 lbs-in. Do not overtighten screws or you risk stripping them!
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server, on
page 41.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 4 If you have not already removed the half-height rear wall, remove it now.
a) Using a #2 Phillips screwdriver, remove the four countersink screws.
b) Grasp each end of the half-height rear wall and lift it off of the server.
Step 5 If you have not removed the existing mLOM bracket, remove it now.
a) Using a #2 Phillips screwdriver, remove the two countersink screws that hold the mLOM bracket in place.
b) Lift the mLOM bracket to remove it from the server.
Step 7 If you are not installing an mLOM, install the filler panel in the mLOM slot as shown below. Otherwise, go to Installing
an mLOM Card (3HH Riser Cages), on page 115.
a) Lower the filler panel onto the server, aligning the screwholes.
b) Lower the half-height rear wall onto the server, aligning the screwholes.
c) Using a #2 Phillips screwdriver, insert and tighten the four countersink screws.
Note Two screwholes overlap on the rear wall and the filler panel. When installing the screws, make sure that the screws sink through both parts and tighten into the sheet metal.
Caution Tighten screws to 4 lbs-in. Do not overtighten screws or you risk stripping them!
a) Holding the mLOM level, slide it into the slot until it seats into the PCI connector.
b) Using a #2 Phillips screwdriver, tighten the captive screws to secure the mLOM to the server.
Note For servers running in standalone mode only: After you replace controller hardware, you must run the
Cisco Host Upgrade Utility (HUU) to update the controller firmware, even if the firmware Current Version
is the same as the Update Version. This is necessary to program the controller's suboem-id to the correct value
for the server SKU. If you do not do this, drive enumeration might not display correctly in the software. This
issue does not affect servers controlled in UCSM mode.
a) Disconnect SAS/SATA cables and any Supercap cable from the existing card.
b) Lift up on the card's blue ejector lever to unseat it from the motherboard socket.
c) Lift straight up on the card's carrier frame to disengage the card from the motherboard socket and to disengage the
frame from two pegs on the chassis wall.
d) Remove the existing card from its plastic carrier bracket. Carefully push the retainer tabs aside and then lift the card
from the bracket.
Step 3 Install a new storage controller card:
a) Install the new card to the plastic carrier bracket. Make sure that the retainer tabs close over the edges of the card.
b) Position the assembly over the chassis and align the card edge with the motherboard socket. At the same time, align the two slots on the back of the carrier bracket with the pegs on the chassis inner wall.
c) Push on both corners of the card to seat its connector in the motherboard socket. At the same time, ensure that the slots on
the carrier frame engage with the pegs on the inner chassis wall.
d) Fully close the blue ejector lever on the card to lock the card into the socket.
e) Connect SAS/SATA cables and any Supercap cable to the new card.
Step 4 Replace the top cover to the server.
Step 5 Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
If this is a first-time installation, see Storage Controller and Backplane Connectors, on page 160 for cabling instructions.
Step 6 If your server is running in standalone mode, use the Cisco UCS Host Upgrade Utility to update the controller firmware
and program the correct suboem-id for the controller.
Note For servers running in standalone mode only: After you replace controller hardware, you must run the
Cisco UCS Host Upgrade Utility (HUU) to update the controller firmware, even if the firmware Current
Version is the same as the Update Version. This is necessary to program the controller's suboem-id to the
correct value for the server SKU. If you do not do this, drive enumeration might not display correctly in the
software. This issue does not affect servers controlled in UCSM mode.
See the HUU guide for your Cisco IMC release for instructions on downloading and using the utility to bring
server components to compatible levels: HUU Guides.
Note The Cisco Boot-Optimized M.2 RAID Controller is not supported when the server is used as a compute-only
node in Cisco HyperFlex configurations.
• The minimum versions of Cisco IMC and Cisco UCS Manager that support this controller are 4.0(4) and later.
• This controller supports RAID 1 (single volume) and JBOD mode.
Note Do not use the server's embedded SW MegaRAID controller to configure RAID
settings when using this controller module. Instead, you can use the following
interfaces:
• Cisco IMC 4.2(1) and later
• BIOS HII utility, BIOS 4.2(1) and later
• Cisco UCS Manager 4.2(1) and later (UCS Manager-integrated servers)
• The controller supports only 240 GB and 960 GB M.2 SSDs. The M.2 SATA SSDs must be identical.
You cannot mix M.2 drives with different capacities. For example, one 240 GB M.2 and one 960 GB
M.2 is an unsupported configuration.
• The Boot-Optimized RAID controller supports VMware, Windows, and Linux operating systems only.
• A SATA M.2 drive in slot 1 (the top) is the first SATA device; a SATA M.2 drive in slot 2 (the underside)
is the second SATA device.
• The name of the controller in the software is MSTOR-RAID.
• A drive in slot 1 is mapped as drive 253; a drive in slot 2 is mapped as drive 254 (see the lookup sketch after this list).
• The SATA M.2 drives can boot in UEFI mode only. Legacy boot mode is not supported.
• If you replace a single SATA M.2 drive that was part of a RAID volume, rebuild of the volume is
auto-initiated after the user accepts the prompt to import the configuration. If you replace both drives of
a volume, you must create a RAID volume and manually reinstall any OS.
• We recommend that you erase drive contents before creating volumes on used drives from another server.
The configuration utility in the server BIOS includes a SATA secure-erase function.
• The server BIOS includes a configuration utility specific to this controller that you can use to create and
delete RAID volumes, view controller properties, and erase the physical drive contents. Access the utility
by pressing F2 when prompted during server boot. Then navigate to Advanced > Cisco Boot Optimized
M.2 RAID Controller.
• The boot-optimized RAID controller is not supported when the server is used as a compute node in
HyperFlex configurations.
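The slot, SATA device order, and drive-ID mapping listed above can be summarized as a simple lookup. This sketch is illustrative only; the names are hypothetical.

# Illustrative lookup only; M2_SLOT_MAP is a hypothetical name. The values
# are the slot positions, SATA device order, and drive IDs listed above.
M2_SLOT_MAP = {
    1: {"position": "top side",  "sata_order": "first SATA device",  "drive_id": 253},
    2: {"position": "underside", "sata_order": "second SATA device", "drive_id": 254},
}

for slot, info in M2_SLOT_MAP.items():
    print(f"Slot {slot} ({info['position']}): {info['sata_order']}, drive {info['drive_id']}")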
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server, on
page 41.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing Top Cover, on page 42.
Step 4 Grasp and remove the air baffle located between CPU 2 and PCIe Riser 3.
Step 5 Remove the controller from the motherboard socket:
a) Locate the controller in its socket on the motherboard.
b) At each end of the controller board, push outward on the clip that secures the carrier.
c) Lift both ends of the controller to disengage it from the socket on the motherboard.
d) Set the carrier on an anti-static surface.
Step 6 If you are transferring SATA M.2 drives from the old controller to the replacement controller, do that before installing
the replacement controller:
Note Any previously configured volume and data on the drives are preserved when the M.2 drives are transferred
to the new controller. The system will boot the existing OS that is installed on the drives.
a) Use a #1 Phillips-head screwdriver to remove the single screw that secures the M.2 drive to the carrier.
b) Lift the M.2 drive from its socket on the carrier.
c) Position the replacement M.2 drive over the socket on the controller board.
d) Angle the M.2 drive downward and insert the connector-end into the socket on the carrier. The M.2 drive's label must
face up.
e) Press the M.2 drive flat against the carrier.
f) Install the single screw that secures the end of the M.2 SSD to the carrier.
g) Turn the controller over and install the second M.2 drive.
Figure 37: Cisco Boot-Optimized M.2 RAID Controller, Showing M.2 Drive Installation
c) Remove the top cover from the server as described in Removing Top Cover, on page 42.
Step 2 Remove an existing Supercap:
a) Locate the Supercap modules near the RAID card by the front-loading drives.
b) Disconnect the Supercap cable connector from the RAID cable connector.
c) Push aside the securing tab and open the hinged door that secures the Supercap to its bracket.
c) Connect the Supercap cable from the RAID controller card to the connector on the new Supercap cable.
d) Close the hinged plastic bracket over the Supercap. Push down until the securing tab clicks.
a) Using both hands, grasp the external blue handle on the rear of the riser and the blue finger-grip on the front end of
the riser.
b) Lift the riser straight up to disengage it from the motherboard socket.
c) Set the riser upside down on an antistatic surface.
Step 3 Remove any existing card from the riser:
a) Disconnect cables from the existing card.
b) Open the blue card-ejector lever on the back side of the card to eject it from the socket on the riser.
c) Pull the card from the riser and set it aside.
Step 4 Install a new card to the riser:
a) With the riser upside down, set the card on the riser.
b) Push on both corners of the card to seat its connector in the riser socket.
c) Close the card-ejector lever on the card to lock it into the riser.
Step 5 Return the riser to the server:
a) Align the connector on the riser with the socket on the motherboard. At the same time, align the two slots on the back
side of the bracket with the two pegs on the inner chassis wall.
b) Push down gently to engage the riser connector with the motherboard socket. The metal riser bracket must also engage
the two pegs that secure it to the chassis wall.
Step 6 Reconnect the cables to their connectors on the new card.
Step 7 Replace the top cover to the server.
Step 8 Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
Figure 38: mRAID Riser Location
TPM Considerations
• This server supports either TPM version 1.2 or TPM version 2.0 (UCSX-TPM-002C) as defined by the
Trusted Computing Group (TCG). The TPM is also SPI-based.
• Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does
not already have a TPM installed.
• If there is an existing TPM 1.2 installed in the server, you cannot upgrade to TPM 2.0. If there is no
existing TPM in the server, you can install TPM 2.0.
• If a server with a TPM is returned, the replacement server must be ordered with a new TPM.
• If the TPM 2.0 becomes unresponsive, reboot the server.
Note Field replacement of a TPM is not supported; you can install a TPM after-factory only if the server does not
already have a TPM installed.
This topic contains the following procedures, which must be followed in this order when installing and enabling
a TPM:
1. Installing the TPM Hardware
2. Enabling the TPM in the BIOS
3. Enabling the Intel TXT Feature in the BIOS
Note For security purposes, the TPM is installed with a one-way screw. It cannot be removed with a standard
screwdriver.
Note You must set a BIOS Administrator password before performing this procedure. To set this password, press
the F2 key when prompted during system boot to enter the BIOS Setup utility. Then navigate to Security >
Set Administrator Password and enter the new password twice as prompted.
Step 1 Reboot the server and watch for the prompt to press F2.
Step 2 When prompted, press F2 to enter the BIOS Setup utility.
Step 3 Verify that the prerequisite BIOS values are enabled:
a) Choose the Advanced tab.
b) Choose Intel TXT(LT-SX) Configuration to open the Intel TXT(LT-SX) Hardware Support window.
c) Verify that the following items are listed as Enabled:
• VT-d Support (default is Enabled)
• VT Support (default is Enabled)
• TPM Support
• TPM State
Note For Recyclers Only! This procedure is not a standard field-service option. This procedure is for recyclers
who will be reclaiming the electronics for proper disposal to comply with local eco design and e-waste
regulations.
To remove the TPM, the following requirements must be met for the server:
• It must be disconnected from facility power.
Step 2 Using the pliers, grip the head of the screw and turn it counterclockwise until the screw releases.
Step 3 Remove the TPM module and dispose of it properly.
What to do next
Remove the PCBA. See Recycling the PCB Assembly (PCBA), on page 137.
You must disconnect the PCBA from the tray before recycling the PCBA.
Note For Recyclers Only! This procedure is not a standard field-service option. This procedure is for recyclers
who will be reclaiming the electronics for proper disposal to comply with local eco design and e-waste
regulations.
To remove the printed circuit board assembly (PCBA), the following requirements must be met:
• The server must be disconnected from facility power.
• The server must be removed from the equipment rack.
• The server's top cover must be removed. See Removing Top Cover, on page 42.
Step 3 Using a T10 Torx driver, remove all of the indicated screws.
Step 4 Remove the PCBA and dispose of it properly.
2 Boot Alternate Cisco IMC Header: CN3 pins 1-2
3 System Secure Firmware Erase Header: CN3 pins 3-4
6 Clear BIOS Password Switch (SW12, switch 6)
7 Clear CMOS Switch (SW12, switch 9)
Caution Clearing the CMOS removes any customized settings and might result in data loss. Make a note of any
necessary customized settings in the BIOS before you use this clear CMOS procedure.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 41.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing Top Cover, on page 42.
Step 4 Using your finger, gently push the SW12 switch 9 to the side marked ON.
Step 5 Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode,
indicated when the Power LED on the front panel is amber.
Step 6 Return the server to main power mode by pressing the Power button on the front panel. The server is in main power
mode when the Power LED is green.
Note You must allow the entire server to reboot to main power mode to complete the reset. The state of the
switch cannot be determined without the host CPU running.
Step 7 Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the
server to remove all power.
Step 8 Remove the top cover from the server.
Step 9 Using your finger, gently push switch 9 to its original position (OFF).
Note If you do not reset the switch to its original position (OFF), the CMOS settings are reset to the defaults
every time you power-cycle the server.
Step 10 Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the
server by pressing the Power button.
Note There are two procedures for recovering the BIOS. Try procedure 1 first. If that procedure does not recover the BIOS, use procedure 2.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 41. Disconnect power cords from all power supplies.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing Top Cover, on page 42.
Step 4 Using your finger, gently slide the SW12 switch 6 to the ON position.
Step 5 Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode,
indicated when the Power LED on the front panel is amber.
Step 6 Return the server to main power mode by pressing the Power button on the front panel. The server is in main power
mode when the Power LED is green.
Note You must allow the entire server to reboot to main power mode to complete the reset. The state of the
switch cannot be determined without the host CPU running.
Step 7 Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the
server to remove all power.
Step 8 Remove the top cover from the server.
Step 9 Reset the switch to its original position (OFF).
Note If you do not reset the switch to its original position (OFF), the BIOS password is cleared every time you power-cycle the server.
Step 10 Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the
server by pressing the Power button.
Using the Boot Alternate Cisco IMC Image Header (CN3, Pins 1-2)
You can use this Cisco IMC debug header to force the system to boot from an alternate Cisco IMC image.
You will find it helpful to refer to the location of the CN3 header. See Service Headers and Jumpers, on page
139.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 41. Disconnect power cords from all power supplies.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing Top Cover, on page 42.
Step 4 Install a two-pin jumper across CN3 pins 1 and 2.
Step 5 Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode,
indicated when the Power LED on the front panel is amber.
Step 6 Return the server to main power mode by pressing the Power button on the front panel. The server is in main power
mode when the Power LED is green.
Note When you next log in to Cisco IMC, you see a message similar to the following:
'Boot from alternate image' debug functionality is enabled.
CIMC will boot from alternate image on next reboot or input power cycle.
Note If you do not remove the jumper, the server will boot from an alternate Cisco IMC image every time that
you power cycle the server or reboot Cisco IMC.
Step 7 To remove the jumper, press the Power button to shut down the server to standby power mode, and then remove AC
power cords from the server to remove all power.
Step 8 Remove the top cover from the server.
Step 9 Remove the jumper that you installed.
Step 10 Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the
server by pressing the Power button.
Using the System Firmware Secure Erase Header (CN3, Pins 3-4)
You can use this header to securely erase system firmware from the server.
You will find it helpful to refer to the location of the CN3 header. See Service Headers and Jumpers, on page
139.
Step 1 Shut down and remove power from the server as described in Shutting Down and Removing Power From the Server,
on page 41. Disconnect power cords from all power supplies.
Step 2 Slide the server out the front of the rack far enough so that you can remove the top cover. You might have to detach
cables from the rear panel to provide clearance.
Caution If you cannot safely view and access the component, remove the server from the rack.
Step 3 Remove the top cover from the server as described in Removing Top Cover, on page 42.
Step 4 Install a two-pin jumper across CN3 pins 3 and 4.
Step 5 Reinstall the top cover and reconnect AC power cords to the server. The server powers up to standby power mode,
indicated when the Power LED on the front panel is amber.
Step 6 Return the server to main power mode by pressing the Power button on the front panel. The server is in main power
mode when the Power LED is green.
Note You must allow the entire server to reboot to main power mode to complete the reset. The state of the
jumper cannot be determined without the host CPU running.
Step 7 Press the Power button to shut down the server to standby power mode, and then remove AC power cords from the
server to remove all power.
Step 8 Remove the top cover from the server.
Step 9 Remove the jumper that you installed.
Note If you do not remove the jumper, the system firmware is erased every time you power-cycle the server.
Step 10 Replace the top cover, replace the server in the rack, replace power cords and any other cables, and then power on the
server by pressing the Power button.
Server Specifications
This appendix lists the physical, environmental, and power specifications for the server.
• Physical Specifications, on page 145
• Environmental Specifications, on page 146
• Power Specifications, on page 147
Physical Specifications
The following table lists the physical specifications for the server.
[Table: Physical specifications. Columns: Description, Specification.]
Environmental Specifications
As a Class A2 product, the server has the following environmental specifications.
Temperature, extended operating: 5°C to 40°C (41°F to 104°F) with no direct sunlight. Humidity condition: uncontrolled, not to exceed 50% RH starting condition. Derate the maximum temperature by 1°C (1.8°F) for every 305 meters of altitude above 900 m.
Humidity (RH), operating: 10% to 90% relative humidity with a 28°C (82.4°F) maximum dew-point temperature, non-condensing environment. Minimum is the higher (more moisture) of a -12°C (10.4°F) dew point or 8% relative humidity. Maximum is a 24°C (75.2°F) dew point or 90% relative humidity.
Humidity (RH), non-operating (when the server is stored or transported): 5% to 93% relative humidity, non-condensing, with a maximum wet-bulb temperature of 28°C across the 20°C to 40°C dry-bulb range.
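As a worked example of the altitude derating rule above, the following sketch is illustrative only. It assumes continuous (rather than stepped) derating and uses the 40°C extended-operating ceiling; the function name is hypothetical.

# Illustrative only: altitude derating per the extended-operating row above
# (1 degree C per 305 m of altitude above 900 m). Assumes continuous,
# not stepped, derating; derated_max_temp_c is a hypothetical name.
def derated_max_temp_c(altitude_m: float, base_max_c: float = 40.0) -> float:
    if altitude_m <= 900:
        return base_max_c
    return base_max_c - (altitude_m - 900) / 305.0

print(round(derated_max_temp_c(1800), 1))  # ~37.0 degrees C at 1,800 m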
Power Specifications
Note Do not mix power supply types or wattages in the server. Both power supplies must be identical.
You can get more specific power information for your exact server configuration by using the Cisco UCS
Power Calculator:
http://ucspowercalc.cisco.com
The power specifications for the supported power supply options are listed in the following sections.
Note For the 80PLUS platinum certification documented in the following table, you can find test results at
https://www.clearesult.com/80plus/.
Maximum Input at Nominal Input Voltage (W): 889 / 889 / 1167 / 1154
Maximum Input at Nominal Input Voltage (VA): 916 / 916 / 1203 / 1190
Note For the 80PLUS platinum certification documented in the following table, you can find test results at
https://www.clearesult.com/80plus/.
Maximum Input at Nominal Input Voltage (W): 1338 / 1330 / 2490 / 2480
Maximum Input at Nominal Input Voltage (VA): 1351 / 1343 / 2515 / 2505
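The watt and volt-ampere rows above are related by power factor (power factor = real power in W divided by apparent power in VA). The following sketch is illustrative only and assumes each W value pairs with the VA value in the same column position.

# Illustrative only: power factor = real power (W) / apparent power (VA),
# using value pairs taken from the rows above.
pairs = [(889, 916), (1167, 1203), (1154, 1190), (2490, 2515)]
for watts, va in pairs:
    print(f"{watts} W / {va} VA -> power factor ~ {watts / va:.3f}")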
Note Only the approved power cords or jumper power cords listed below are supported.
The following tables list the supported power cords for server PSUs rated less than 2300 W and for server PSUs rated more than 2300 W.
Table 12: Supported Power Cords for Less than 2300 W Server PSUs
CAB-9K10A-KOR (Power Cord, 125 V AC, 13 A, KSC8305 plug; Korea): 6 ft (1.8 m)
CAB-JPN-3PIN (90-125 V AC, 12 A, NEMA 5-15 plug; Japan): 2.4 m
R2XX-DMYMPWRCORD (No power cord; PID option for ordering server with no power cord): NA
Table 13: Supported Power Cords for More than 2300 W Server PSUs
CAB-C19-CBN (Cabinet Jumper Power Cord, 250 VAC, 16 A, C20 to C19 connector)
CAB-S132-C19-ISRL (S132 to IEC 320 C19 connector; Israel): 14 ft
CAB-IR2073-C19-AR (IRSM 2073 to IEC 320 C19 connector; Argentina): 14 ft
CAB-BS1363-C19-UK (BS-1363 to IEC 320 C19 connector; UK): 14 ft
CAB-SABS-C19-IND (SABS 164-1 to IEC 320 C19 connector; India)
CAB-C2316-C19-IT (CEI 23-16 to IEC 320 C19; Italy): 14 ft
CAB-L520P-C19-US (NEMA L5-20 to IEC 320 C19; US): 6 ft
CAB-US515P-C19-US (NEMA 5-15 to IEC 320 C19; US): 13 ft
CAB-US520-C19-US (NEMA 5-20 to IEC 320 C19; US): 14 ft
CAB-US620P-C19-US (NEMA 6-20 to IEC C19; US): 13 ft
CAB-C19-C20-IND (Power Cord C19 to C20 connector; India)
UCSB-CABL-C19-BRZ (AC power cord NBR 14136 to C19 connector; Brazil): 14 ft
CAB-9K16A-BRZ (AC Power Cord, 250 V, 16 A, Source Plug EL224 to C19 connector; Brazil)
CAB-ACS-16 (AC Power Cord, 16 A; Switzerland)
CAB-AC-16A-AUS (AC Power Cord, 250 V, 16 A, C19 connector; Australia)
CAB-C19-C20-3M-JP (AC Power Cord C19 to C20 connector, Japan PSE mark; Japan): 10 ft (3 m)
CAB-AC-C19-TW (AC Power Cord, 250 V, 16 A, C19 connectors; Taiwan)
CAB-AC-C6K-TWLK (AC Power Cord, 250 V, 16 A, twist lock NEMA L6-20 plug; US)
CAB-AC-2500W-EU (AC Power Cord, 250 V, 16 A; Europe)
CAB-AC-2500W-INT (AC Power Cord, 250 V, 16 A; International)
CAB-9K16A-KOR (AC Power Cord, 250 V, 16 A, Source Plug; Korea)
CAB-AC-2500W-ISRL (AC Power Cord, 250 V, 16 A; Israel)
CAB-AC16A-CH (AC Power Cord, 16 A; China)
R2XX-DMYMPWRCORD (No power cord; PID option for ordering server with no power cord): NA
This server supports the RAID and HBA controller options and cable requirements shown in the following
table.
[Table: Storage adapter options. Columns: Storage Adapter Product Name (PID), Supported Server, Maximum Number of Drives Supported, Supported RAID Type, Cache Size (GB).]
Note For servers running in standalone mode only: After you replace controller hardware, you must run the
Cisco Host Upgrade Utility (HUU) to update the controller firmware, even if the firmware Current Version
is the same as the Update Version. This is necessary to program the controller's suboem-id to the correct value
for the server SKU. If you do not do this, drive enumeration might not display correctly in the software. This
issue does not affect servers controlled in UCSM mode.
For Supercap unit replacement instructions, see Replacing the Supercap (RAID Backup), on page 127.
[Table: RAID controller migration. Columns: Starting RAID Controller, Migrate to Hardware RAID Allowed?, Migrate to Software RAID Allowed?]
Note The SFF 10-drive version supports NVMe drives only, and so does not use SAS or SATA RAID. This version of the server comes with an NVMe switch card factory-installed in the internal mRAID riser and a PCIe cable connected to PCIe riser 2. The NVMe switch card is not orderable separately.
Embedded RAID
This SW RAID option can control up to 8 SATA drives in the SFF 10-drive version.
This embedded RAID option requires that you have a SATA interposer card installed in internal mRAID riser
3. Use the SAS/SATA cables that came with the server.
1. Connect SAS/SATA cable A1 from the A1 interposer connector to the A1 backplane connector.
2. Connect SAS/SATA cable A2 from the A2 interposer connector to the A2 backplane connector.
Note See the following figures that illustrate cable connections and which drives are controlled by each cable. In
the SFF 10-drive version, drives 5 and 10 cannot be controlled by the embedded SATA RAID controller.
This option requires that you have a SAS RAID or HBA card installed in internal mRAID riser 3. Use the
SAS/SATA cables that came with the server.
1. Connect SAS/SATA cable A1 from the A1 card connector to the A1 backplane connector.
2. Connect SAS/SATA cable A2 from the A2 card connector to the A2 backplane connector.
3. For SFF-10-drive servers only: Connect SAS/SATA cable B2 from the B2 card connector to the B2
backplane connector.
Note See the following figures that illustrate cable connections and which drives are controlled by each cable.
NVIDIA T4: minimum Cisco IMC firmware version 4.0(2e)
Note The minimum version of Cisco UCS Manager that supports this card is 4.0(2c).
• All GPU cards must be procured from Cisco because there is a unique SBIOS ID required by Cisco management tools, such as Cisco IMC and Cisco UCS Manager.
• To support one or more GPUs, the server must have two CPUs and two full-height rear risers.
If you need to change the Memory Mapped IO Above 4GB setting, enter the BIOS Setup Utility by pressing F2 when prompted during bootup.
• If the server is integrated with Cisco UCS Manager and is controlled by a service profile, this setting is
enabled by default in the service profile when a GPU is present.
To change this setting manually, use the following procedure.
Step 1 Refer to the Cisco UCS Manager configuration guide (GUI or CLI) for your release for instructions on configuring service
profiles:
Cisco UCS Manager Configuration Guides
Step 2 Refer to the chapter on Configuring Server-Related Policies > Configuring BIOS Settings.
Step 3 In the section of your profile for PCI Configuration BIOS Settings, set Memory Mapped IO Above 4GB Config to one of
the following:
• Disabled—Does not map 64-bit PCI devices to 64 GB or greater address space.
• Enabled—Maps I/O of 64-bit PCI devices to 64 GB or greater address space.
• Platform Default—The policy uses the value for this attribute contained in the BIOS defaults for the server. Use
this only if you know that the server BIOS is set to use the default enabled setting for this item.
Step 2 Holding the GPU level, slide it out of the socket on the PCIe riser.
Step 3 Install a new GPU card:
Note The NVIDIA Tesla P4 and Tesla T4 are half-height, half-length cards. If one is installed in full-height PCIe
slot 1, it requires a full-height rear-panel tab installed to the card.
a) Align the new GPU card with the empty socket on the PCIe riser and slide each end into the retaining clip.
b) Push evenly on both ends of the card until it is fully seated in the socket.
c) Ensure that the card’s rear panel tab sits flat against the riser rear-panel opening.
Note For easy identification, riser numbers are stamped into the sheet metal on the top of each riser cage.
1 Captive screw for PCIe slot 1 riser (alignment feature)
2 Captive screw for PCIe slot 2 riser (alignment feature)
3 Captive screw for PCIe slot 3 riser (alignment feature)
4 Handle for PCIe slot 1 riser
5 Handle for PCIe slot 2 riser
6 Handle for PCIe slot 3 riser
7 Rear-panel opening for PCIe slot 1
8 Rear-panel opening for PCIe slot 2
9 Rear-panel opening for PCIe slot 3
1 Captive screw for PCIe slot 1
2 Captive screw for PCIe slot 2
3 Handle for PCIe slot 1 riser
4 Handle for PCIe slot 2 riser
5 Rear-panel opening for PCIe slot 1
- Rear-panel opening for PCIe slot 2
d) Position the PCIe riser over its sockets on the motherboard and over the chassis alignment channels.
Figure 44: PCIe Riser Alignment Features
• For a server with 3 HHHL risers, 3 sockets and 3 alignment features are available, as shown below.
1 Riser alignment features in chassis (captive screws)
• For a server with 2 FHFL risers, 2 sockets and 2 alignment features are available, as shown below.
e) Carefully push down on both ends of the PCIe riser to fully engage its two connectors with the two sockets on the
motherboard.
f) When the riser is level and fully seated, use a #2 Phillips screwdriver to secure the riser to the server chassis.
g) Replace the top cover to the server.
h) Replace the server in the rack, replace cables, and then fully power on the server by pressing the Power button.
Step 4 Optional: Continue with Installing Drivers to Support the GPU Cards, on page 176.
There are three editions of GRID licenses, which enable three different classes of GRID features. The GRID
software automatically selects the license edition based on the features that you are using.
• GRID Virtual GPU (vGPU): Virtual GPUs for business desktop computing
• GRID Virtual Workstation: Virtual GPUs for mid-range workstation computing
• GRID Virtual Workstation – Extended: Virtual GPUs for high-end workstation computing, and
workstation graphics on GPU pass-through
Step 1 Select the Log In link, or the Register link if you do not already have an account.
The NVIDIA Software Licensing Center > License Key Registration dialog opens.
Step 2 Complete the License Key Registration form and then click Submit My Registration Information.
The NVIDIA Software Licensing Center > Product Information Software dialog opens.
Step 3 If you have additional PAKs, click Register Additional Keys. For each additional key, complete the form on the License
Key Registration dialog and then click Submit My Registration Information.
Step 4 Agree to the terms and conditions and set a password when prompted.
Step 1 Return to the NVIDIA Software Licensing Center > Product Information Software dialog.
Step 2 Click the Current Releases tab.
Step 3 Click the NVIDIA GRID link to access the Product Download dialog. This dialog includes download links for:
• NVIDIA License Manager software
• The gpumodeswitch utility
• The host driver software
Installing GRID Licenses From the NVIDIA Licensing Portal to the License
Server
Accessing the GRID License Server Management Interface
Open a web browser on the License Server host and access the URL http://localhost:8080/licserver.
If you configured the License Server host’s firewall to permit remote access to the License Server, the
management interface is accessible from remote machines at the URL http://hostname:8080/licserver.
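Before continuing, you can verify from a shell that the management interface answers. The host name license-server.example.com below is only a placeholder for your License Server host.

    # Check locally on the License Server host
    curl -I http://localhost:8080/licserver
    # Check from a remote machine if the firewall permits remote access
    curl -I http://license-server.example.com:8080/licserver

A 200-series HTTP response indicates that the interface is up.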
Step 3 Select your License Server’s MAC address from the Server host ID pull-down.
Note It is important to use the same Ethernet ID consistently to identify the server when generating licenses on
NVIDIA’s Licensing Portal. NVIDIA recommends that you select one entry for a primary, non-removable
Ethernet interface on the platform.
Step 3 Use the License Server Configuration menu to install the .bin file that you generated earlier.
a) Click Choose File.
b) Browse to the license .bin file that you want to install and click Open.
c) Click Upload.
The license file is installed on your License Server. When installation is complete, you see the confirmation message,
“Successfully applied license file to license server.”
Step 1 Open the NVIDIA Control Panel using one of the following methods:
• Right-click on the Windows desktop and select NVIDIA Control Panel from the menu.
• Open Windows Control Panel and double-click the NVIDIA Control Panel icon.
Step 2 In the NVIDIA Control Panel left-pane under Licensing, select Manage License.
The Manage License task pane opens and shows the current license edition being used. The GRID software automatically
selects the license edition based on the features that you are using. The default is Tesla (unlicensed).
Step 3 If you want to acquire a license for GRID Virtual Workstation, under License Edition, select GRID Virtual Workstation.
Step 4 In the License Server field, enter the address of your local GRID License Server. The address can be a domain name or
an IP address.
Step 5 In the Port Number field, enter your port number or leave it set to the default used by the server, which is 7070.
Step 6 Select Apply.
The system requests the appropriate license edition from your configured License Server. After a license is successfully
acquired, the features of that license edition are enabled.
Note After you configure licensing settings in the NVIDIA Control Panel, the settings persist across reboots.
Step 2 Edit the ServerUrl line with the address of your local GRID License Server.
The address can be a domain name or an IP address. See the example file below.
Step 3 Append the port number (default 7070) to the end of the address with a colon. See the example file below.
Step 4 Edit the FeatureType line with the integer for the license type. See the example file below.
• GRID vGPU = 1
• GRID Virtual Workstation = 2
The service automatically acquires the license edition that you specified in the FeatureType line. You can confirm this
in /var/log/messages.
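Because the referenced example file is not reproduced here, the lines below are a minimal sketch of the edited settings as they typically appear in the Linux gridd configuration file (commonly /etc/nvidia/gridd.conf); the address 192.168.0.100 is a placeholder for your License Server.

    # License Server address, with the port number appended after a colon
    ServerUrl=192.168.0.100:7070
    # License edition: 1 = GRID vGPU, 2 = GRID Virtual Workstation
    FeatureType=2

To confirm that the license was acquired, search the system log, for example with grep gridd /var/log/messages.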
Note After you configure licensing settings in the gridd.conf file, the settings persist across reboots.
Using gpumodeswitch
The command line utility gpumodeswitch can be run in the following environments:
• Windows 64-bit command prompt (requires administrator permissions)
• Linux 32/64-bit shell (including Citrix XenServer dom0) (requires root permissions)
Note Consult NVIDIA product release notes for the latest information on compatibility with compute and graphic
modes.
• --gpumode graphics
Switches to graphics mode. Switches the mode of all supported GPUs in the server unless you specify
otherwise when prompted.
• --gpumode compute
Switches to compute mode. Switches mode of all supported GPUs in the server unless you specify
otherwise when prompted.
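For example, a typical session on a Linux host looks like the following sketch; the invocations use only the options described above.

    # Switch all supported GPUs in the server to graphics mode
    gpumodeswitch --gpumode graphics
    # Or switch all supported GPUs to compute mode
    gpumodeswitch --gpumode compute
    # Reboot so the OS or hypervisor re-enumerates the modified GPU resources
    reboot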
Note After you switch GPU mode, reboot the server to ensure that the modified resources of the GPU are correctly
accounted for by any OS or hypervisor running on the server.
Note You must do this procedure before you update the NVIDIA drivers.
Step 1 Install your hypervisor software on a computer. Refer to your hypervisor documentation for the installation instructions.
Step 2 Create a virtual machine in your hypervisor. Refer to your hypervisor documentation for instructions.
Step 3 Install the GPU drivers to the virtual machine. Download the drivers from either:
• NVIDIA Enterprise Portal for GRID hypervisor downloads (requires NVIDIA login):
https://nvidia.flexnetoperations.com/
• NVIDIA public driver area: http://www.nvidia.com/Download/index.aspx
• AMD: http://support.amd.com/en-us/download
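For an NVIDIA GRID driver on a Linux virtual machine, the downloaded package is typically a self-extracting .run file. The sketch below shows a common install sequence; the file name is a placeholder for the version you actually download.

    # Make the installer executable and run it (file name is a placeholder)
    chmod +x NVIDIA-Linux-x86_64-<version>-grid.run
    sudo sh NVIDIA-Linux-x86_64-<version>-grid.run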