CS500 Hardware Guide
(Rev A)
H-6156
Contents
About the CS500 Hardware Guide
CS500 System Description
1211 Service Node
    Drive Bay Options
    Rear View
    Front Controls and I/O Panel Features
    1211 Chassis Components
    1211 Drive Backplanes
    PCIe Riser Card Support
    Power Supplies
    Chassis Cooling
    Internal Cabling
3211 Compute Server
    Drive Bay Options
    Rear View
    Front Control Panel Buttons and LEDs
    3211 Chassis Components
    Compute Node Tray
        M.2 SSD Support
    System Boards and Internal Cabling
CCS Environmental Requirements
S2600WF Motherboard Description
    Component Locations
    Architecture
    Processor Socket Assembly
    Memory Support and Population
S2600BP Motherboard Description
    Component Locations
    Processor Socket Assembly
    Architecture
    Processor Population Rules
    Memory Support and Population Rules
    Configuration and Recovery Jumpers
    BIOS Features
About the CS500 Hardware Guide
Document Versions
H-6156 (Rev A)
October 2017. The initial release of the CS500 Hardware Guide including the 1211 and
3211 server chassis and Intel S2600BP and S2600WF motherboards.
Feedback
Visit the Cray Publications Portal at http://pubs.cray.com. Email your comments and feedback to [email protected].
Your comments are important to us. We will respond within 24 hours.
CS500 System Description
The Cray CS500 system is an x86-64 Linux system that is designed for excellent computational performance. The system can support features such as diskless provisioning of the operating system, virtual cluster provisioning, remote monitoring, and out-of-band management. The system supports InfiniBand, Omni-Path, and Ethernet high-speed networks, an Ethernet network (for provisioning and operations), and a dedicated Ethernet management network.
Figure 1. CS500 System
There are two rackmount server platforms for the Cray
CS500 cluster supercomputer:
● 1211 service node
● 3211 compute server
1211 Service Node
Figure: 1211 chassis (callouts: chassis cover, rail kit standoffs, drive bays, cover removal thumb pads, disk drive cage, and front control panel, whose location differs with drive configuration)
Chassis Type: 19-inch wide, 2U rackmount chassis
Motherboard Options: Intel S2600WF (Wolf Pass)
● S2600WF0 - no onboard LAN
● S2600WFT - dual 10GbE ports (RJ45)
GPU Options: NVIDIA® Tesla® P100 PCIe card (250 W, 12/16 GB)
Power Supplies: One or two 1300W AC power supply modules
Cooling:
● System fan assembly: six managed 60 mm fans
● Built-in air duct to support passively cooled processors
● Passive processor heatsinks
● Two in-line fans in each power supply module
Riser Card Support: Support for three riser cards (PCIe 3.0: 8 GT/s):
● Riser #1 – x24 – up to 3 PCIe slots
● Riser #2 – x24 – up to 3 PCIe slots
● Riser #3 – x16 – up to 2 PCIe slots (optional low profile cards)
With three riser cards installed, up to 8 add-in cards are supported:
● Risers 1 and 2: four full height / half length + two full height / half length add-in cards
● Riser #3: two low profile add-in cards (optional)
Drive Numbering
Drive numbers in the following two figures show typical numbering schemes. However, actual drive numbering depends
on SAS/SATA controller configuration and backplane cabling. Drive backplanes use multi-port, mini-SAS HD
connectors for each set of four SATA/SAS drives. Backplanes that support PCIe NVMe drives also include a
single PCIe OCuLink connector for each supported NVMe drive.
Figure 3. 2.5" Drive Options
8 x 2.5" drives: 2.5" hot swap drives (0-7), bay for additional 2.5" hot swap drives (8-15), video (DB15), front control panel
24 x 2.5" drives: USB 2.0 port, 2.5" hot swap drives (0-23), front control panel
Figure: 3.5" Drive Options
8 x 3.5" drives
12 x 3.5" drives: USB 2.0 port, support for two NVMe drives, front control panel
The activity (green) LED states for PCIe SSDs are the same as those in the table above. The status (amber) LED states are different, as listed in the following table.
Table 3. PCIe SSD Drive Status LED States
Solid on: Fault/fail
Supported SATA SSDs must not exceed the following power and thermal limits:
● One or two SATA SSDs supporting up to 4 W per device with a case temperature rating of 70 °C
● One or two SATA SSDs supporting up to 1.5 W per device with a case temperature rating of 60 °C
Figure 5. 1211 Rear View (callouts: two 2.5" SATA SSDs (optional), riser card 3 bay, riser card 2 bay, serial port B (optional), riser card 1 bay)
System ID button
Toggles the integrated ID LED and the blue motherboard ID LED on and off. The system ID
LED is used to visually identify a specific server installed in the rack or among several racks
of servers. The system ID LED can also be toggled on and off remotely using the IPMI
“chassis identify” command which causes the LED to blink for 15 seconds.
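As a hedged illustration (assuming the standard ipmitool utility and a BMC reachable over LAN; the address, credentials, and interval below are placeholders, not values from this guide), the IPMI chassis identify command mentioned above can also be issued remotely:

    # identify_led.py - minimal sketch: blink the system ID LED through the BMC
    import subprocess

    def blink_id_led(host, user, password, seconds=15):
        # "chassis identify <interval>" blinks the ID LED for <interval> seconds (0 turns it off)
        subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
             "chassis", "identify", str(seconds)],
            check=True,
        )

    if __name__ == "__main__":
        blink_id_led("192.0.2.10", "admin", "password")  # hypothetical BMC address and credentials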
NMI button
When the NMI button is pressed, it puts the server in a halt state and issues a non-
maskable interrupt (NMI). This can be useful when performing diagnostics for a given issue
where a memory dump is necessary to help determine the cause of the problem. To prevent an inadvertent system halt, the NMI button is recessed from the front panel where it is accessible only with a small-tipped tool such as a pin or paper clip.
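A non-maskable interrupt can also be triggered remotely through the BMC rather than the physical button. This is a minimal sketch, assuming ipmitool and out-of-band BMC access; "chassis power diag" pulses a diagnostic interrupt (NMI) to the processors, and the address and credentials are placeholders:

    # send_nmi.py - minimal sketch: pulse a diagnostic interrupt (NMI) through the BMC
    import subprocess

    def send_nmi(host, user, password):
        # Similar in effect to pressing the recessed NMI button; the OS can then capture a memory dump
        subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
             "chassis", "power", "diag"],
            check=True,
        )

    if __name__ == "__main__":
        send_nmi("192.0.2.10", "admin", "password")  # hypothetical BMC address and credentials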
NIC activity LEDs
An activity LED is included for each onboard network interface controller (NIC). When a
network link is detected, the LED turns on solid. The LED blinks consistently while the
network is being used.
System cold reset button
Pressing this button reboots and reinitializes the system.
System Status LED
This LED lights green or amber to indicate the current health of the server. This feature is
also provided by an LED on the back edge of the motherboard. Both LEDs are tied together
and show the same state. The System Status LED states are driven by the on-board
platform management subsystem. A description of each LED state for the server follows.
Green, solid on: OK. Indicates the system is running (in S0 state) and its status is healthy. There are no system errors.
1. The overall power state of the system is described by the system power states. There are a total of six power states, ranging from S0 (the system is completely powered ON and fully operational) to S5 (the system is completely powered OFF); the states S1, S2, S3, and S4 are referred to as sleeping states.
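The current chassis power state (roughly, S0 running versus S5 off) can be checked out of band. A minimal sketch, assuming ipmitool and a reachable BMC; the address and credentials are placeholders:

    # power_state.py - minimal sketch: query the chassis power state through the BMC
    import subprocess

    def chassis_power_status(host, user, password):
        # Prints "Chassis Power is on" (typically S0) or "Chassis Power is off" (typically S5)
        out = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
             "chassis", "power", "status"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    if __name__ == "__main__":
        print(chassis_power_status("192.0.2.10", "admin", "password"))  # hypothetical values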
Figure: front I/O panel (video DB15 connector, USB 3.0 ports)
Video connector
A monitor can be connected to the video connector on the front I/O panel. When BIOS
detects that a monitor is attached to this video connector, it disables video signals routed to
the video connector on the back of the chassis. Video resolution from the front connector
may be lower than from the rear on-board video connector. A short video cable should be
used for best resolution. The front video connector is cabled to a 2x7 header on the server
board labeled “FP Video”.
USB 2.0/3.0 Ports
The front I/O panel includes two USB 2.0/3.0 ports. The USB ports are cabled to a blue 2x5
connector on the server board labeled “FP_USB”.
Due to signal strength limits associated with USB 3.0 ports cabled to a front panel, some
marginally compliant USB 3.0 devices may not be supported from these ports.
Figure: 1211 chassis components (callouts: power supply bays (2x), rail kit standoffs (4x), DDR4 DIMMs, CPU 2, chassis)
Figure: drive backplane, back side (SAS/SATA drives 4-7, SAS/SATA drives 0-3)
Figure: drive backplane connectors (SATA drive 0, I2C)
I2C connector
The backplane includes a 1x5 pin I2C connector. This connector is cabled to a matching
HSBP I2C connector on the motherboard and is used as a communication path to the
onboard BMC.
SGPIO connector
The backplane includes a 1x5 pin Serial General Purpose Input/Output (SGPIO) connector.
When the backplane is cabled to the on-board SATA ports, this connector is cabled to a
matching SGPIO connector on the motherboard, and provides support for drive activity and
fault LEDs.
Riser slot PCIe lane routing:
● Riser Slot 1 – x24: x16 from CPU 1 + x8 from CPU 2
● Riser Slot 2 – x24: x24 from CPU 2
● Riser Slot 3 – x12: x8 from CPU 2 + x4 DMI from CPU 2
Add-in card slot routing by riser option:
Riser #1
● Top slot: CPU 1 – Ports 1A and 1B (x8 elec, x16 mech), or CPU 1 – Ports 1A thru 1D (x16 elec, x16 mech)
● Middle slot: CPU 1 – Ports 1C and 1D (x8 elec, x16 mech), or N/A
● Bottom slot: CPU 2 – Ports 1C and 1D (x8 elec, x8 mech)
Riser #2
● Top slot: CPU 2 – Ports 2A and 2B (x8 elec, x16 mech), or CPU 2 – Ports 2A thru 2D (x16 elec, x16 mech)
● Middle slot: CPU 2 – Ports 2C and 2D (x8 elec, x16 mech), or N/A
● Bottom slot: CPU 2 – Ports 1A and 1B (x8 elec, x8 mech)
Riser #3 (low profile cards only)
● Top slot: CPU 2 – DMI x4 (x4 elec, x8 mech)
● Bottom slot: CPU 2 – Ports 3C and 3D (x8 elec, x8 mech)
No tools are needed to install the riser card assemblies into the chassis. Hooks on the back edge of the riser card
assembly are aligned with slots on the chassis, then each assembly is pushed down into the respective riser card
slots on the motherboard.
Figure 15. Riser Card Assembly Installation (callouts: hooks (2), slots (2))
Power Supplies
The 1211 chassis uses two 1300W power supply modules in a 1+1 redundant power configuration. Each power
supply module has dual inline 40mm cooling fans with one mounted inside the enclosure and the other extending
outside the enclosure. The power supplies are modular and can be inserted and removed from the chassis
without tools. When inserted, the card edge connector of the power supply mates blindly to a matching slot on the
motherboard. In the event a power supply fails, hot-swap replacement is available.
Figure 16. 1300W Power Supply (1300W AC common redundant power supply (CRPS) module, 80+ Titanium efficiency)
AC Input
Input connector: C14
AC input voltage range: 115 VAC to 220 VAC
Redundant 1+1 power is automatically configured depending on the total power draw of the chassis. If total
chassis power draw exceeds the power capacity of a single power supply, then power from the second power
supply module is used. Should this occur, power redundancy is lost.
CAUTION: Power supply units with different wattage ratings. Installing two power supply units with different wattage ratings in a system is not supported. Doing so will not provide power supply redundancy and will result in multiple errors being logged by the system.
The power supply recovers automatically after an AC power failure. AC power failure is defined to be any loss of
AC power that exceeds the dropout criteria.
The power supplies have over-temperature protection (OTP) circuits that protect the power supplies against high
temperature conditions caused by loss of fan cooling or excessive chassis/ambient temperatures. In an OTP
condition, the power supplies will shut down. Power supplies restore automatically when temperatures drop to
specified limits, while the 12 VSB always remains on.
The server has a throttling system to prevent the system from crashing if a power supply module is overloaded or
overheats. If server system power reaches a preprogrammed limit, system memory and/or processors are
throttled back to reduce power. System performance is impacted if this occurs.
The power supply status LED states are as follows:
● Blinking green, 1 Hz: AC present, only 12 VSB on (PS off), or PS in cold redundant state
● Solid amber: AC cord unplugged or AC power lost, with a second power supply in parallel still with AC input power
● Blinking amber, 1 Hz: Power supply warning events where the power supply continues to operate: high temperature, high power, high current, slow fan
● Solid amber: Power supply critical event causing a shutdown: failure, OCP, OVP, fan fail
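Power supply health can also be read from the BMC sensor data records rather than by inspecting the LED. A minimal sketch, assuming ipmitool run locally on the host (it can equally be pointed at the BMC over LAN); exact sensor names vary by platform:

    # psu_sensors.py - minimal sketch: list power-supply sensor readings from the BMC SDR
    import subprocess

    def list_psu_sensors():
        # "sdr type 'Power Supply'" prints each power-supply sensor with its current state
        out = subprocess.run(
            ["ipmitool", "sdr", "type", "Power Supply"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    if __name__ == "__main__":
        print(list_psu_sensors())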
Chassis Cooling
Proper chassis cooling requires an installed air duct, populated drive carriers, and installed CPU heat sinks. Drive carriers can be populated with a storage device (SSD or HDD) or a supplied drive blank. In addition, it may be necessary to have specific DIMM slots populated with DIMMs or supplied DIMM blanks.
The CPU 1 processor and heatsink must be installed first. The CPU 2 heatsink must be installed at all times, with
or without a processor installed.
Figure 17. System Fans (callouts: air duct, air flow, fan module assembly (six fans), individual fan)
With fan redundancy, should a single fan failure occur (system fan or power supply fan), integrated platform
management changes the state of the system status LED to blinking green, reports an error to the system event
log, and automatically adjusts fan speeds as needed to maintain system temperatures below maximum thermal
limits.
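Because each fan reports a tachometer reading to the BMC, a failed or degraded fan can be spotted from the fan sensor readings. A minimal sketch, assuming local ipmitool access; sensor names differ between boards:

    # fan_sensors.py - minimal sketch: read fan tachometer sensors from the BMC
    import subprocess

    def list_fan_sensors():
        # "sdr type Fan" prints each fan sensor with its RPM reading and state
        out = subprocess.run(
            ["ipmitool", "sdr", "type", "Fan"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    if __name__ == "__main__":
        print(list_fan_sensors())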
All fans in the fan assembly and power supplies are controlled independent of each other. The fan control system
may adjust fan speeds for different fans based on increasing/decreasing temperatures in different thermal zones
within the chassis.
If system temperatures continue to increase above thermal limits with system fans operating at their maximum
speed, platform management may begin to throttle bandwidth of either the memory subsystem or processors or
both, to keep components from overheating and keep the system operational. Throttling of these subsystems will
continue until system temperatures are reduced below preprogrammed limits.
The power supply module will shut down if its temperature exceeds an over-temperature protection limit. If system
thermals increase to a point beyond the maximum thermal limits, the server will shut down, the System Status
LED changes to solid Amber, and the event is logged to the system event log. If power supply temperatures
increase beyond maximum thermal limits or if a power supply fan fails, the power supply will shut down.
System Fans
The system is designed for fan redundancy when configured with two power supply modules. Should a single
system fan fail, platform management adjusts air flow of the remaining system fans and manages other platform
features to maintain system thermals. Fan redundancy is lost if more than one system fan is in a failed state.
The fan assembly must be removed when routing cables inside the chassis from back to front, or when
motherboard replacement is necessary.
The system fan assembly is designed for ease of use and supports several features:
● Each individual fan is hot-swappable.
● Each fan is blind mated to a matching 6-pin connector located on the motherboard.
● Each fan is designed for tool-less insertion and extraction from the fan assembly.
● Each fan has a tachometer signal that allows the integrated BMC to monitor its status.
● Fan speed for each fan is controlled by integrated platform management. As system thermals fluctuate high
and low, the integrated BMC firmware increases and decreases the speeds to specific fans within the fan
assembly to regulate system thermals.
● An integrated fault LED is located on the top of each fan. Platform management illuminates the fault LED for
the failed fan.
Figure: air duct and add-in card support brackets (callouts: add-in card support brackets (2), air duct (clear plastic), air duct posts (2), tabs that snap underneath the top edge of the riser card assemblies, alignment tabs (3) that align to matching slots in the fan assembly, dual SATA SSD or RAID mounting location, air duct left side wall (black plastic))
Internal Cabling
The system fan assembly must be removed when routing cables internally from front to back. All cables should be routed
using the cable channels in between the chassis sidewalls and the air duct side walls as shown by the blue
arrows in the following illustration. When routing cables front to back, none should be routed through the center of
the chassis or between system fans or DIMM slots.
Cable routing diagrams for each of the different drive backplane configurations appear on the following pages.
Figure 19. Internal Cable Routing Channels
Figure: drive backplane cable routing (callouts: PCIe SSD 0, PCIe SSD 1, SATA 0-3, SATA 4-7, HSBP power, HSBP I2C, front panel USB 2.0, front panel USB 2.0/3.0, standard video, front panel control; legend: power cable, SAS/SATA cable, I2C cable, front control panel and I/O cable)
Figure: drive backplane cable routing, second configuration (same callouts and cable legend as the previous figure)
Figure: 8 x 3.5" backplane cable routing (same callouts and cable legend as the previous figures)
Figure: 12 x 3.5" backplane cable routing (same callouts and cable legend as the previous figures)
Figure: 2 x 2.5" backplane cable routing (callouts: sSATA 4, sSATA 5, HSBP SGPIO and SATA cable bundle, drive 0, I2C, peripheral power; legend: power cable, SAS/SATA cable, I2C/SGPIO cable)
3211 Compute Server
Figure: 3211 chassis (callouts: compute node trays (4), power distribution module cover)
Chassis Type:
● 19-inch wide, 2U rackmount chassis
● Up to four compute modules/nodes
Power Supplies: Two 2130W power supplies (80 Plus Platinum efficiency)
Cooling:
● Three 40 x 56 mm dual-rotor fans per node, optimized by fan speed control
● One transparent air duct per node
● One passive processor heatsink per node
● One 40 mm fan in each power supply unit
Drive numbering. The following figure shows numbers/groups for drives routed to the same compute node
through the backplane. These numbers/groups are not indicated on the hardware.
● 2.5” drives
○ 4x SATA (6 Gbps) / SAS (12 Gbps)
○ 24x SATA (6 Gbps) / SAS (12 Gbps)/ NVMe (8 total, max. 2 per node)
● 3.5” drives
○ 12x SATA (6 Gbps) / SAS (12 Gbps)
Figure 26. Front Bay Drive Options
4 x 2.5” drives: This drive configuration includes 4x 3.5” drive carriers. However, to maintain the thermal requirements to support 165W TDP processors, only 2.5” drives are supported.
24 x 2.5” drives
12 x 3.5” drives (drive groups are routed to Node 1, Node 2, Node 3, and Node 4; front control panels at both ends)
● For 24 x 2.5” drive configurations, the drive bay supports 12 Gb SAS or 6 Gb SAS drives. The SAS drives are hot-swappable. The front side of the backplane includes 24 drive interface connectors. All 24 connectors can support SAS drives, but only connectors #4 and #5 of each compute module are capable of supporting PCIe SFF devices. Two different drive carriers are included in the drive bay. Drive carriers with a blue latch identify support for PCIe SFF devices or SAS drives. Drive carriers with a green latch identify support for SAS drives only.
● NVMe SSDs have hot swap / hot plug capability. Support and usage models are OS dependent.
● For a given compute node, any combination of NVMe and SAS drives can be supported, as long as the
number of NVMe drives does not exceed two and they are installed only in the last two drive connectors on
the backplane (4 and 5) and the remaining drives are SAS drives (0, 1, 2, 3).
● Mixing of NVMe and SAS drives in an alternating manner is not a recommended configuration.
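The placement rules above can be expressed as a simple check. The helper below is a hypothetical sketch (its names and structure are not from this guide) that validates one compute node's drive layout, assuming backplane connectors 0-5 per node with NVMe allowed only in connectors 4 and 5:

    # drive_layout_check.py - minimal sketch: validate NVMe/SAS placement for one compute node
    # layout maps connector number (0-5) to "SAS", "NVME", or None (empty)

    def validate_node_drives(layout):
        errors = []
        nvme_slots = [slot for slot, kind in layout.items() if kind == "NVME"]
        if len(nvme_slots) > 2:
            errors.append("no more than two NVMe drives per node")
        if any(slot not in (4, 5) for slot in nvme_slots):
            errors.append("NVMe drives are supported only in connectors 4 and 5")
        for slot, kind in layout.items():
            if slot in (0, 1, 2, 3) and kind not in (None, "SAS"):
                errors.append("connectors 0-3 support SAS drives only")
                break
        return errors

    if __name__ == "__main__":
        example = {0: "SAS", 1: "SAS", 2: "SAS", 3: "SAS", 4: "NVME", 5: "NVME"}
        print(validate_node_drives(example) or "layout OK")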
Figure: compute module rear view (callouts: slot 1 riser card add-in card bay, video port (VGA), slot 2 riser card add-in card bay, dedicated RJ45 management port, NIC 1, NIC 2, 2x stacked SFP+, USB 3.0)
Power button with LED. Toggles the node power on and off. Pressing this button sends a signal to the BMC,
which either powers the system on or off. The integrated LED is a single color (green) and is capable of
supporting different indicator states.
The power LED sleep indication is maintained on standby by the chipset. If the compute node is powered down
without going through the BIOS, the LED state in effect at the time of power off is restored when the compute
node is powered on, until the BIOS clears it.
If the compute node is not powered down normally, it is possible the Power LED will blink at the same time the
compute node status LED is off due to a failure or configuration change that prevents the BIOS from running.
ID button with LED. Toggles the integrated ID LED and blue ID LED on the rear of the node motherboard on and
off. The ID LED is used to visually identify a specific compute node in the server chassis or among several
servers in the rack. If the LED is off, pushing the ID button lights the ID LED. Issuing a chassis identify command
causes the LED to blink. The LED remains lit until the button is pushed again or until a chassis identify command
is received.
Network link/activity LED. When a network link from the compute node is detected, the LED turns on solid. The
LED blinks consistently while the network is being used.
Status LED. This is a bicolor LED that is tied directly to the Status LED on the motherboard (if present). This LED
indicates the current health of the compute node.
When the compute node is powered down (transitions to the DC-off state or S5), the BMC is still on standby
power and retains the sensor and front panel Status LED state established before the power-down event.
When AC power is first applied to the compute node, the Status LED turns solid amber and then immediately
changes to blinking green to indicate that the BMC is booting. If the BMC boot process completes with no errors,
the Status LED will change to solid green.
When power is first applied to the compute node and 5V-STBY is present, the BMC controller on the motherboard
requires 15-20 seconds to initialize. During this time, the compute node status LED will be solid on, both amber
and green. Once BMC initialization has completed, the status LED will stay solid green. If the power button is pressed before BMC initialization completes, the compute node will not boot to POST.
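One way to avoid powering on before the BMC is ready is to poll the BMC until it answers a basic query. A minimal sketch, assuming ipmitool and out-of-band access to the node's BMC; the address, credentials, and timing values are placeholders:

    # wait_for_bmc.py - minimal sketch: wait until the BMC responds before issuing power-on
    import subprocess
    import time

    def bmc_ready(host, user, password):
        # "mc info" succeeds only once the BMC is initialized enough to answer Get Device ID
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password, "mc", "info"],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    def wait_and_power_on(host, user, password, timeout=60, interval=5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            if bmc_ready(host, user, password):
                subprocess.run(
                    ["ipmitool", "-I", "lanplus", "-H", host, "-U", user, "-P", password,
                     "chassis", "power", "on"],
                    check=True,
                )
                return True
            time.sleep(interval)
        return False

    if __name__ == "__main__":
        wait_and_power_on("192.0.2.20", "admin", "password")  # hypothetical BMC address and credentials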
Amber, solid on: Critical, non-recoverable. Fatal alarm; the system has failed or shut down (system is halted). Causes include:
● CPU CATERR signal asserted
● MSID mismatch detected (CATERR also
asserts for this case).
● CPU 1 is missing
● CPU Thermal Trip
● No power good – power fault
● DIMM failure when there is only 1 DIMM
present and hence no good memory
present.
● Runtime memory uncorrectable error in
nonredundant mode.
● DIMM Thermal Trip or equivalent
● SSB Thermal Trip or equivalent
● CPU ERR2 signal asserted
● BMC/Video memory test failed. (Chassis ID
shows blue/solid-on for this condition)
● Both uBoot BMC FW images are bad.
(Chassis ID shows blue/solid-on for this
condition)
● 240VA fault
● Fatal Error in processor initialization:
○ Processor family not identical
○ Processor model not identical
○ Processor core/thread counts not
identical
○ Processor cache size not identical
○ Unable to synchronize processor
frequency
○ Unable to synchronize QPI link
frequency
● Uncorrectable memory error in a non-
redundant mode
Figure: compute node drive cage (rotated) with the backplane attached to the drive cage
● Main 12V hot swap connectivity between compute node tray and chassis power distribution boards.
● Current sensing of 12V main power for use with node manager.
● Three 8-pin dual rotor fan connectors.
● Four screws secure the power docking board to the compute node tray.
Figure 31. Power Docking Boards (standard power docking board; power docking board for the 24x drive chassis)
Bridge Board
The bridge board extends motherboard I/O signals by delivering SATA/SAS/NVMe signals, disk backplane
management signals, BMC SMBus signals, control panel signals, and various compute node specific signals. The
bridge board provides hot swap interconnect of all electrical signals to the chassis backplane (except for main
12V power). One bridge board is used on each compute node. The bridge board is secured to the compute node
tray with six screws through the side of the tray. A black, plastic mounting plate at the end of the bridge board
protects and separates the bridge board from the side of the tray.
There are different bridge board options to support the different drive options in the front of the server. Dual
processor system configurations are required to support a bridge board with 12G SAS support. The 12G SAS
bridge boards are not functional in a single processor system configuration.
Figure: bridge board (callout: RAID key connector)
System fans
The three dual-rotor 40 x 40 x 56 mm system-managed fans provide front-to-back airflow through the compute node.
Each fan is mounted within a metal housing on the compute node base. System fans are not held in place using
any type of fastener. They are tightly held in place by friction, using a set of four blue sleeved rubber grommets
that sit within cutouts in the chassis fan bracket.
Each system fan is cabled to separate 8-pin connectors on the power docking board. Fan control signals for each
system fan are then routed to the motherboard through a single 2x7 connector on the power docking board, which
is cabled to a matching fan controller header on the motherboard.
Each fan within the compute node can support variable speeds. Fan speed may change automatically when any
temperature sensor reading changes. Each fan connector within the node supplies a tachometer signal that
allows the baseboard management controller (BMC) to monitor the status of each fan. The fan speed control
algorithm is programmed into the motherboard’s integrated BMC.
Compute nodes do not support fan redundancy. Should a single rotor stop working, the following events will most
likely occur:
● The integrated BMC detects the fan failure.
● The event is logged to the system event log (SEL).
● The System Status LED on the server board and chassis front panel will change to flashing green, indicating that the system is operating in a degraded state and may fail at some point.
● In an effort to keep the compute node at or below pre-programmed maximum thermal limits monitored by the
BMC, the remaining functional system fans will operate at 100%.
Fans are not hot swappable. Should a fan fail, it should be replaced as soon as possible.
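Fan failures reported this way end up in the system event log, which can be reviewed from the OS or out of band. A minimal sketch, assuming local ipmitool access:

    # read_sel.py - minimal sketch: print system event log entries recorded by the BMC
    import subprocess

    def read_sel():
        # "sel elist" prints SEL entries with sensor names resolved (for example, fan failure events)
        out = subprocess.run(
            ["ipmitool", "sel", "elist"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    if __name__ == "__main__":
        print(read_sel())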
Air duct
Each compute node requires the use of a transparent plastic air duct to direct airflow over critical areas within the
node. To maintain the necessary airflow, the air duct must be properly installed and seated before sliding the
compute node into the chassis.
Figure 33. Compute Node Air Duct (shown installed)
In system configurations where CPU 1 is configured with an integrated Intel® Omni-Path Host Fabric Interface, an
additional plastic air baffle is attached to the bottom side of the air duct. The air baffle must be attached to the air
duct to ensure proper airflow to the chip set and the Intel Fabric Through (IFT) carrier when installed.
M.2 SSD Support
Figure: M.2 connector (80 mm)
System Boards and Internal Cabling
Figure 36. System Board and Cabling Connections (8x, 12x drive chassis) (callouts: bridge board connector, PMBus cable, power supply cage, CPU 2, control signal connector, main power output connectors, power distribution board (PSU 2 top, PSU 1 bottom), CPU 1, SATA/PCIe, power supply, M.2, motherboard, node tray)
Figure 37. System Board and Cabling Connections (24x drive chassis)
Front control
Chassis backplane Signal connector Power control 5V power Node 1 main Backplane panel connector
(24x - 2.5” drives) (to BIB) connector (to PIB) power connector interposer board (nodes 1 and 3)
Bridge board
connector
Bridge board
(12 GB SAS/
PCIe SFF
Main power combo)
connectors to bottom
power distribution board 5V
(PSU 1)
CPU 2
Power
supply cage
Control signal
connector
Main power
output connectors
Power
distribution
board
(PSU 2, top)
(PSU 1, bottom)
CPU 1
RAID key
connector
Power supply
units 2x40 pin
edge
PSU 2 (top) connector
PSU 1 (bottom)
Motherboard
Node Tray
CCS Environmental Requirements
S2600WF Motherboard Description
Figure: S2600WF motherboard options (callouts: support for OmniPath carrier card, support for Intel SAS RAID card)
Processor Support: Support for two Intel Xeon Scalable family processors:
● Two LGA 3647 (Socket-P0) processor sockets
● Maximum thermal design power (TDP) of 205 W (board only)
USB Support:
● Three external USB 3.0 ports
● One internal Type-A USB 2.0 port
● One internal 20-pin connector for optional front panel USB 3.0 ports (2x)
● One internal 10-pin connector for optional front panel USB 2.0 ports (2x)
RJ45 Connectors
Jumper Settings
Jumpers can be used to modify the operation of the motherboard. They can be used to configure, protect, or
recover specific features of the motherboard. Jumpers create shorts between two pins to change the function of
the connector. The location of each jumper block is shown in the following figure. Pin 1 of each jumper block is
identified by the arrowhead (▼) silk screened on the board next to the pin.
Figure 41. Jumper Blocks (BIOS Default, BIOS Recovery, Password Clear, ME FW Update, and BMC Force Update jumpers; jumper blocks J2B1, J2B2, J5A3, J5A4, and J1C2; each jumper has a Default and an Enabled position)
BIOS Default
This jumper resets BIOS options, configured using the <F2> BIOS Setup Utility, back to their
original default factory settings. This jumper does not reset Administrator or User
passwords. In order to reset passwords, the Password Clear jumper must be used.
1. Move the “BIOS DFLT” jumper from pins 1 - 2 (default) to pins 2 - 3 (Set BIOS Defaults).
2. Wait 5 seconds then move the jumper back to pins 1 – 2.
3. During POST, access the <F2> BIOS Setup utility to configure and save desired BIOS
options.
The system will automatically power on after AC is applied to the system. The system time
and date may need to be reset. After resetting BIOS options using the BIOS Default jumper,
the Error Manager Screen in the <F2> BIOS Setup Utility will display two errors: 0012-
System RTC date/time not set and 5220-BIOS Settings reset to default settings.
Password Clear
This jumper causes both the User password and the Administrator password to be cleared if
they were set. The operator should be aware that this creates a security gap until
passwords have been installed again through the <F2> BIOS Setup utility. This is the only
method by which the Administrator and User passwords can be cleared unconditionally.
Other than this jumper, passwords can only be set or cleared by changing them explicitly in
BIOS Setup or by similar means. No method of resetting BIOS configuration settings to default values will affect either the Administrator or User passwords.
1. Move the “Password Clear” jumper from pins 1 – 2 (default) to pins 2 – 3 (password
clear position).
2. Power up the server and access the <F2> BIOS Setup utility.
3. Verify the password clear operation was successful by viewing the Error Manager
screen. Two errors should be logged: 5221-Passwords cleared by jumper and 5224-
Password clear jumper is set.
4. Exit the BIOS Setup utility and power down the server. For safety, remove the AC power
cords.
5. Move the “Password Clear” jumper back to pins 1 - 2 (default).
6. Power up the server.
7. Boot into <F2> BIOS Setup immediately, go to the Security tab and set the
Administrator and User passwords if you intend to use BIOS password protection.
BMC Force Update
The BMC Force Update jumper is used to put the BMC in Boot Recovery mode for a low-
level update. It causes the BMC to abort its normal boot process and stay in the boot loader
without executing any Linux code. This jumper should only be used if the BMC firmware has
become corrupted and requires re-installation.
1. Power down the system and remove the AC power cords. If the BMC FRC UPD jumper
is moved with AC power applied to the system, the BMC will not operate properly.
2. Move the “BMC FRC UPD” Jumper from pins 1 - 2 (default) to pins 2 - 3 (Force Update
position).
3. Boot the system into the EFI shell.
S2600WF Architecture
The architecture of the S2600WF motherboard is developed around the integrated features and functions of the
Intel® Xeon® Scalable family, the Intel® C620 Series Chipset family, Intel® Ethernet Controller X557, and the
ASPEED AST2500 Server Board Management Controller. Previous generations of Xeon E5-2600 processors are
not supported.
The following figure provides an overview of the S2600WF architecture, showing the features and interconnects
of each of the major subsystem components.
Figure 42. S2600WF Block Diagram
● <F6> - Pop-up BIOS boot menu. Displays all available boot devices. The boot order in the pop-up menu is not
the same as the boot order in the BIOS setup. The pop-up menu simply lists all of the available devices from
which the system can be booted, and allows a manual selection of the desired boot device.
● <F12> - Network boot
● <Esc> - Switch from logo screen to diagnostic screen
● <Pause> - Stop POST temporarily
Field Replaceable Unit (FRU) and Sensor Data Record (SDR) Data
The server/node chassis and motherboard need accurate FRU and SDR data to ensure the embedded platform
management system is able to monitor the appropriate sensors and operate the chassis/system with optimum
cooling and performance. The BMC automatically updates initial FRU/SDR configuration data after changes are
made to the server hardware configuration when any of the following components are added or removed:
● Processor
● Memory
● OCP Module
● Integrated SAS Raid module
● Power supply
● Fan
● Intel® Xeon Phi™ co-processor PCIe card
● Hot Swap Backplane
● Front Panel
The system may not operate with the best performance or best/appropriate cooling if the proper FRU and SDR
data is not installed.
Important:
● Previous-generation Intel® Xeon® (v3/v4) processors and their supported CPU heatsinks are not supported
on the S2600WF.
● The LGA 3647 socket also supports the Intel Xeon Scalable processors with embedded Omni-Path Host
Fabric Interconnect (HFI).
● The pins inside the processor socket are extremely sensitive. No object except the processor package should
make contact with the pins inside the processor socket. A damaged socket pin may render the socket
inoperable, and will produce erroneous CPU or other system errors.
The parts of the socket assembly are described below and shown in the following figure.
Processor Heat Sink Module (PHM)
The PHM refers to the sub-assembly where the heatsink and processor are fixed together
by the processor package carrier prior to installation on the motherboard. The PHM is
properly installed when it is securely seated over the two Bolster Plate guide pins and it sits
evenly over the processor socket. Once the PHM is properly seated over the processor
socket assembly, the four heatsink Torx screws must be tightened in the order specified on
the label affixed to the top side of the heatsink.
Processor Package Carrier (Clip)
The carrier is an integral part of the PHM. The processor is inserted into the carrier, then the
heatsink with thermal interface material (TIM) is attached. The carrier has keying/
alignment features to align to cutouts on the processor package. These keying features
ensure the processor package snaps into the carrier in only one direction, and the carrier
can only be attached to the heatsink in one orientation.
The processor package snaps into the clips located on the inside ends of the package
carrier. The package carrier with attached processor package is then clipped to the
heatsink. Hook like features on the four corners of the carrier grab onto the heatsink. All
three pieces are secured to the bolster plate with four captive nuts that are part of the
heatsink.
Important: Fabric supported processor models require the use of a Fabric Carrier Clip
which has a different design than the standard clip shown in the figure below. Attempting to
use a standard processor carrier clip with a Fabric supported processor may result in
component damage and result in improper assembly of the PHM.
Bolster Plate
The bolster plate is an integrated subassembly that includes two corner guide posts placed
at opposite corners and two springs that attach to the heatsink via captive screws. Two
Bolster Plate guide pins of different sizes allows the PHM to be installed only one way on
the processor socket assembly.
The springs are pulled upward as the heatsink is lowered and tightened in place, creating a
compressive force between socket and heatsink. The bolster plate provides extra rigidity,
helps maintain flatness to the motherboard, and provides a uniform load distribution across
all contact pins in the socket.
Heatsink
The heatsink is integrated into the PHM which is attached to the bolster plate springs by two
captive nuts on either side of the heatsink. The bolster plate is held in place around the
socket by the backplate. The heatsink's captive shoulder nuts screw onto the corner
standoffs and bolster plate studs. Depending on the manufacturer/model, some heatsinks
may have a label on the top showing the sequence for tightening and loosening the four
nuts.
There are two types of heatsinks, one for each of the processors. These heatsinks are NOT
interchangeable and must be installed on the correct processor/socket, front versus rear.
Figure: processor socket assembly (callouts: heatsink (1U 80 x 107 mm), compression spring, spring stud, small guide post)
● Registered DIMMs (RDIMMs), Load Reduced DIMMs (LRDIMMs), and NVDIMMs (Non-Volatile Dual Inline
Memory Module):
● Only RDIMMs and LRDIMMs with integrated Thermal Sensor On Die (TSOD) are supported
● DIMM sizes of 4 GB, 8 GB, 16 GB, 32 GB, 64 GB and 128 GB depending on ranks and technology
● Maximum supported DIMM speeds will be dependent on the processor SKU installed in the system:
○ Intel® Xeon® Platinum 81xx processor – Max. 2666 MT/s (Mega Transfers / second)
○ Intel® Xeon® Gold 61xx processor – Max. 2666 MT/s
○ Intel® Xeon® Gold 51xx processor – Max. 2400 MT/s
○ Intel® Xeon® Silver processor – Max. 2400 MT/s
○ Intel® Xeon® Bronze processor – Max. 2133 MT/s
● DIMMs organized as Single Rank (SR), Dual Rank (DR), or Quad Rank (QR):
○ RDIMMS – Registered DIMMS – SR/DR/QR, ECC only
○ LRDIMMs – Load Reduced DIMMs – QR only, ECC only
○ Maximum of 8 logical ranks per channel
○ Maximum of 10 physical ranks loaded on a channel
Supported Memory
Figure 46. DDR4 RDIMM and LRDIMM Support
Although mixed DIMM configurations may be functional, Cray only supports and performs platform validation on
systems that are configured with identical DIMMs installed.
Figure 47. Memory Slot Layout
CPU 1 CPU 2
D1
D2
C2
C1
B2
B1
A2
A1
E1
E2
F1
F2
D1
D2
C2
C1
B2
B1
A2
A1
E1
E2
F1
F2
● Each installed processor provides six channels of memory. Memory channels from each processor are
identified as Channels A – F.
● Each memory channel supports two DIMM slots, identified as slots 1 and 2.
○ Each DIMM slot is labeled by CPU #, memory channel, and slot # as shown in the following examples:
CPU1_DIMM_A2; CPU2_DIMM_A2
● DIMM population rules require that DIMMs within a channel be populated starting with the BLUE DIMM slot or
DIMM farthest from the processor in a “fill-farthest” approach.
● When only one DIMM is used for a given memory channel, it must be populated in the BLUE DIMM slot
(furthest from the CPU).
● Mixing of DDR4 DIMM Types (RDIMM, LRDIMM, 3DS RDIMM, 3DS LRDIMM, NVDIMM) within a channel
socket or across sockets produces a Fatal Error Halt during Memory Initialization.
● Mixing DIMMs of different frequencies and latencies is not supported within or across processor sockets. If a
mixed configuration is encountered, the BIOS will attempt to operate at the highest common frequency and
the lowest latency possible.
● When populating a Quad-rank DIMM with a Single- or Dual-rank DIMM in the same channel, the Quad-rank
DIMM must be populated farthest from the processor. Intel MRC will check for correct DIMM placement. A
maximum of 8 logical ranks can be used on any one channel, as well as a maximum of 10 physical ranks
loaded on a channel.
● In order to install 3 QR LRDIMMs on the same channel, they must be operated with Rank Multiplication as RM = 2; this makes each LRDIMM appear as a DR DIMM with ranks twice as large.
● The memory slots associated with a given processor are unavailable if the corresponding processor socket is
not populated.
● A processor may be installed without populating the associated memory slots, provided a second processor is
installed with associated memory. In this case, the memory is shared by the processors. However, the
platform suffers performance degradation and latency due to the remote memory.
● Processor sockets are self-contained and autonomous. However, all memory subsystem support (such as
Memory RAS and Error Management) in the BIOS setup is applied commonly across processor sockets.
● For multiple DIMMs (RDIMM, LRDIMM, 3DS RDIMM, 3DS LRDIMM) per channel, always populate DIMMs
with higher electrical loading in slot 1, followed by slot 2.
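As an illustration only, the slot-ordering rules above can be captured in a small check. The helper below is hypothetical (slot labels follow the CPU#_DIMM_<channel><slot> convention described earlier); which physical slot number is the blue, farthest-from-CPU slot is passed in as a parameter because it depends on the board layout:

    # dimm_population_check.py - minimal sketch: verify the fill-farthest DIMM population rule per channel
    from collections import defaultdict

    def check_population(populated_slots, blue_slot=1):
        # populated_slots: iterable of labels such as "CPU1_DIMM_A1", "CPU2_DIMM_A1"
        # blue_slot: slot number of the blue (farthest-from-CPU) slot in each channel (board dependent)
        channels = defaultdict(set)
        for label in populated_slots:
            cpu, _, name = label.partition("_DIMM_")
            channel, slot = name[0], int(name[1])
            channels[(cpu, channel)].add(slot)
        other_slot = 2 if blue_slot == 1 else 1
        errors = []
        for (cpu, channel), slots in channels.items():
            if other_slot in slots and blue_slot not in slots:
                errors.append(f"{cpu} channel {channel}: blue slot {blue_slot} must be populated first")
        return errors

    if __name__ == "__main__":
        ok = ["CPU1_DIMM_A1", "CPU1_DIMM_A2", "CPU2_DIMM_A1"]
        bad = ["CPU1_DIMM_B2"]
        print(check_population(ok) or "population OK")
        print(check_population(bad))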
S2600BP Motherboard Description
Processor Support: Support for two Intel Xeon Scalable processors:
● Two LGA 3647 (Socket-P0) processor sockets
● Maximum thermal design power (TDP) of 165 W
● 40 lanes of integrated PCIe 3.0 low-latency I/O
Internal I/O Connectors:
● Bridge slot to extend board I/O
● One 1x12 internal video header
● One 1x4 IPMB header
● One internal USB 2.0 connector
● One 1x12 pin control panel header
● One DH-10 serial port connector
● One 2x4 pin header for Intel® RMM4 Lite
● One 1x4 pin header for Storage Upgrade Key
● Two 2x12 pin headers for Fabric Sideband CPU1/CPU2
Riser Card Support:
● One bridge board slot for board I/O expansion
● Riser Slot 1 (rear right side); a VGA bracket is installed on Riser Slot 1 as standard
● Riser Slot 2 (rear left side) providing x24 PCIe 3.0 lanes: CPU1
● Riser Slot 3 (front left side) providing x24 PCIe 3.0 lanes: CPU2
● Riser Slot 4 (middle left side) providing x16 PCIe 3.0 lanes: CPU2
Onboard Storage Controllers and Options:
● One M.2 SATA/PCIe connector (42 mm drive support only)
● Four SATA 6 Gbps ports via Mini-SAS HD (SFF-8643) connector
Support for Intel Intelligent Power Node Manager (requires a PMBus-compliant power supply)
Figure: S2600BP component locations (callouts: RAID key, fan connectors 1-3, riser slot 3, riser slot 4, front panel, bridge board, riser slot 2, POST code LEDs, beep LED, ID LED, status LED, DIMM slots A1-F1 and A2-F2 for each CPU, M.2 SATA/PCIe, system fan connectors, NIC 2, dedicated management port, USB 2.0, IPMB header, serial port, external cooling connectors 6 and 7, jumper block J6B3)
Jumpers. The motherboard includes several jumper blocks that can be used to configure, protect, or recover
specific features of the motherboard. These jumper blocks are shown in the default position in the above figure.
Refer to S2600BP Configuration and Recovery Jumpers on page 78 for details.
POST code LEDs. There are several diagnostic (POST code and beep) LEDs to assist in troubleshooting
motherboard level issues.
Figure 50. S2600BP Rear Connectors (callouts: status LED, chassis ID LED, beep LED)
Dedicated management port. This port has a separate IP address used to access the BMC. It provides a port for monitoring, logging, recovery, and other maintenance functions independent of the main CPU, BIOS, and OS. The
management port is active with or without the RMM4 Lite key installed. The dedicated management port and the
two onboard NICs support a BMC embedded web server and GUI.
Dedicated management port/NIC LEDs. The link/activity LED (at the right of the connector) indicates network
connection when on, and transmit/receive activity when blinking. The speed LED (at the left of the connector)
indicates 10-Gbps operation when green, 1-Gbps operation when amber, and 100-Mbps when off. Figure 58
provides an overview of the LEDs.
Status LED. This bicolor LED lights green (status) or amber (fault) to indicate the current health of the server.
Green indicates normal or degraded operation. Amber indicates the hardware state and overrides the green
status. The state detected by the BMC and other controllers is included in the Status LED state. The
Status LED on the chassis front panel and this motherboard Status LED are tied together and show the same
state. When the server is powered down (transitions to the DC-off state or S5), the Integrated BMC is still on
standby power and retains the sensor and front panel status LED state established prior to the power-down event.
The Status LED displays a steady Amber color for all Fatal Errors that are detected during processor initialization.
A steady Amber LED indicates that an unrecoverable system failure condition has occurred.
A description of the Status LED states follows.
Green, solid on: OK. Indicates the system is running (in S0 state) and status is healthy. There are no system errors.
Amber, solid on: Critical, non-recoverable; the system is halted. Fatal alarm: the system has failed or shut down.
Amber, blinking (~1 Hz): Non-critical; the system is operating in a degraded state with an impending failure warning, although still functioning. Non-fatal alarm: system failure likely. Causes include:
● Critical threshold crossed (temperature, voltage, power)
● VRD Hot asserted
● Minimum number of fans to cool the system not present or failed
● Hard drive fault
● Insufficient power from PSUs
1. The overall power state of the system is described by the system power states. There are a total of six power states, ranging from S0 (the system is completely powered ON and fully operational) to S5 (the system is completely powered OFF); the states S1, S2, S3, and S4 are referred to as sleeping states.
Chassis ID LED. This blue LED is used to visually identify a specific motherboard/server installed in the rack or
among several racks of servers. The ID button on front of the server/node toggles the state of the chassis ID LED.
There is no precedence or lock-out mechanism for the control sources. When a new request arrives, all previous
requests are terminated. For example, if the chassis ID LED is blinking and the ID button is pressed, then the ID
LED changes to solid on. If the button is pressed again with no intervening commands, the ID LED turns off.
BMC Boot/Reset Status LED Indicators. During the BMC boot or BMC reset process, the System Status and
Chassis ID LEDs are used to indicate BMC boot process transitions and states. A BMC boot occurs when AC
power is first applied to the system. A BMC reset occurs after a BMC firmware update, after receiving a BMC cold
reset command, and upon a BMC watchdog initiated reset. These two LEDs define states during the BMC boot/
reset process.
Beep LED. The S2600BP does not have an audible beep code component. Instead, it uses a beep code LED that
translates audible beep codes into visual light sequences. Prior to system video initialization, the BIOS uses these
Beep_LED codes to inform users on error conditions. A user-visible beep code is followed by the POST Progress
LEDs.
The Integrated BMC may generate beep codes upon detection of failure conditions. Beep codes are translated
into visual LED sequences each time the problem is discovered, such as on each power-up attempt, but are not lit
continuously. Codes that are common across all Intel server boards and systems that use the same generation of
chipset are listed in the following table. Each digit in the code is represented by a sequence of LED flashes whose count is equal to the digit.
● 1-5-2-4 (MSID Mismatch): MSID mismatch occurs if a processor is installed into a system board that has incompatible power capabilities.
● 1-5-4-2 (Power fault): DC power unexpectedly lost (power good dropout); power unit sensors report power unit failure offset.
● 1-5-4-4 (Power control fault, power good assertion timeout): Power good assertion timeout; power unit sensors report soft power control failure offset.
● 1-5-1-2 (VR Watchdog Timer sensor assertion): VR controller DC power-on sequence was not completed in time.
● 1-5-1-4 (Power Supply Status): The system does not power on or unexpectedly powers off, and a Power Supply Unit (PSU) is present that is an incompatible model with one or more other PSUs in the system.
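Since each digit of a beep code is shown as a burst of LED flashes, the LED sequence for a given code can be worked out ahead of time. A minimal, purely illustrative sketch (not firmware behavior) that expands a code such as 1-5-2-4 into per-digit flash counts:

    # beep_led.py - minimal sketch: expand a beep code into its visual flash sequence
    def flash_sequence(code):
        # "1-5-2-4" -> [1, 5, 2, 4]: one flash, a pause, five flashes, a pause, and so on
        return [int(digit) for digit in code.split("-")]

    def describe(code):
        counts = flash_sequence(code)
        return ", then ".join(f"{n} flash{'es' if n != 1 else ''}" for n in counts)

    if __name__ == "__main__":
        print(describe("1-5-2-4"))  # 1 flash, then 5 flashes, then 2 flashes, then 4 flashes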
There are two types of heatsinks, one for each of the processors. These heatsinks are NOT
interchangeable and must be installed on the correct processor/socket, front versus rear.
Figure 51. Processor Socket Assembly (callouts: heatsink (1U 80 x 107 mm), compression spring, spring stud, bolster plate, processor socket)
S2600BP Architecture
The architecture of Intel® Server Board S2600BP is developed around the integrated features and functions of
the Intel® Xeon® Scalable processor family, the Intel® C621 Series Chipset family, Intel® Ethernet Controller
X550, and the ASPEED* AST2500* Server Board Management Controller.
The following figure provides an overview of the S2600BP architecture, showing the features and interconnects of
each of the major subsystem components.
Figure: S2600BP block diagram (labels include: two Intel Xeon Scalable processors linked by UPI at 10.4 GT/s, DDR4 channels per processor, 10 GbE, M.2 SATA/PCIe, sSATA 6 Gb/s ports 0 and 2, PCIe 2.0 x1, eSPI, GPIOs, reset/power-good, BMC, BMC firmware flash (32 MB, SPI 50 MHz), PLD, SGPIO, and miscellaneous control signals)
Processors that have different Intel® UltraPath (UPI) Link Frequencies may operate together if they are otherwise
compatible and if a common link frequency can be selected. The common link frequency would be the highest link
frequency that all installed processors can achieve.
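In other words, the operating UPI frequency is the fastest rate that every installed processor supports, which is the minimum of the per-processor maximums. A trivial sketch with made-up example values:

    # common_upi.py - minimal sketch: pick the common UPI link frequency
    def common_link_frequency(max_freqs_gts):
        # The highest frequency all processors can achieve is the smallest per-processor maximum.
        return min(max_freqs_gts)

    if __name__ == "__main__":
        print(common_link_frequency([10.4, 9.6]))  # hypothetical mix -> 9.6 GT/s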
Processor stepping within a common processor family can be mixed as long as it is listed in the processor
specification updates published by Intel Corporation.
● Processor sockets are self-contained and autonomous. However, all memory subsystem support (such as
Memory RAS and Error Management) in the BIOS setup is applied commonly across processor sockets.
● Mixing DIMMs of different frequencies and latencies is not supported within or across processor sockets.
● A maximum of 8 logical ranks can be used on any one channel, as well as a maximum of 10 physical ranks
loaded on a channel.
● DIMM slot 1, closest to the processor socket, must be populated first in a channel with 2 slots. Only remove factory-installed DIMM blanks when populating the slot with memory. Intel MRC will check for correct DIMM placement.
S2600BP Configuration and Recovery Jumpers
Force ME Update. This jumper is used when the standard firmware update process fails. The jumper should remain in the default/disabled position when the server is running normally.
To perform a Force ME Update, follow these steps:
1. Move the jumper (J4B1) from the default operating position (covering pins 1 and 2) to the enabled position
(covering pins 2 and 3).
2. Power on the server by pressing the power button on the front panel.
3. Perform the ME firmware update procedure as documented in the Release Notes file that is included in the
given system update package.
4. Power down the server.
5. Move the jumper from the enabled position (covering pins 2 and 3) to the disabled position (covering pins 1
and 2).
6. Power up the server.
The BIOS introduces three mechanisms to start the BIOS recovery process, which is called Recovery Mode:
● The Recovery Mode Jumper causes the BIOS to boot in Recovery Mode.
● The Boot Block detects partial BIOS update and automatically boots in Recovery Mode.
● The BMC asserts Recovery Mode GPIO in case of partial BIOS update and FRB2 time-out.
The BIOS recovery takes place without any external media or mass storage device because it uses the backup BIOS inside the BIOS flash in Recovery Mode. The recovery procedure is included here for general reference. However, if in conflict, the instructions in the BIOS Release Notes are the definitive version.
When the Recovery Mode jumper is set, the BIOS begins with a “Recovery Start” event logged to the SEL, then loads and boots the backup BIOS image inside the BIOS flash itself. This process takes place before any video or console is available. The system boots directly into the Shell while a “Recovery Complete” SEL event is logged. External media is required to store the BIOS update package, and the steps are the same as the normal BIOS update procedures. After the update is complete, a message stating “BIOS has been updated successfully” indicates that the BIOS update process is finished. The user should then switch the recovery jumper back to normal operation and restart the system by performing a power cycle.
If the BIOS detects partial BIOS update or the BMC asserts Recovery Mode GPIO, the BIOS will boot up with
Recovery Mode. The difference is that the BIOS boots to the Error Manager page in the BIOS Setup utility. In the BIOS Setup utility, a boot device (Shell or Linux, for example) can be selected to perform the BIOS update procedure in a Shell or OS environment.
Again, before starting to perform a Recovery Boot, be sure to check the BIOS Release Notes and verify the
Recovery procedure shown in the Release Notes.
The following steps demonstrate this recovery process:
1. Move the jumper (J4B3) from the default operating position (covering pins 1 and 2) to the BIOS Recovery
position (covering pins 2 and 3).
2. Power on the server.
3. The BIOS will load and boot with the backup BIOS image without any video or display.
4. When the compute module boots into the EFI shell directly, the BIOS recovery is successful.
5. Power off the server.
6. Move the jumper (J4B3) back to the normal position (covering pins 1 and 2).
7. Put the server back into the rack. A normal BIOS update can be performed if needed.
3. Boot the system into Setup. Check the Error Manager tab, and you should see POST Error Codes:
● 0012 System RTC date/time not set
● 5220 BIOS Settings reset to default settings
4. Go to the Setup Main tab, and set the System Date and System Time to the correct current settings. Make
any other changes that are required in Setup – for example, Boot Order.
Security
Password Setup
The BIOS uses passwords to prevent unauthorized access to the server. Passwords can restrict entry to the BIOS
Setup utility, restrict use of the Boot Device pop-up menu during POST, suppress automatic USB device
reordering, and prevent unauthorized system power on. It is strongly recommended that an Administrator
Password be set. A system with no Administrator password set allows anyone who has access to the server to
change BIOS settings.
An Administrator password must be set in order to set the User password.
The maximum length of a password is 14 characters and can be made up of a combination of alphanumeric (a-z,
A-Z, 0-9) characters and any of the following special characters:
!@#$%^&*()-_+=?
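The length and character-set rules above are easy to pre-check before attempting to set a password in BIOS Setup. A minimal sketch; the rule encoding is an interpretation of the text above, not a vendor-supplied validator:

    # bios_password_check.py - minimal sketch: check a candidate BIOS password against the stated rules
    import re

    # up to 14 characters: letters, digits, and the listed special characters (hyphen included)
    PASSWORD_RE = re.compile(r"^[A-Za-z0-9!@#$%^&*()\-_+=?]{1,14}$")

    def is_valid_bios_password(candidate):
        return bool(PASSWORD_RE.fullmatch(candidate))

    if __name__ == "__main__":
        print(is_valid_bios_password("Adm1n!_2024"))                 # True
        print(is_valid_bios_password("way-too-long-password-123"))   # False (longer than 14 characters)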