
CS500™ Hardware Guide

(Rev A)

H-6156
Contents
About the CS500 Hardware Guide............................................................................................................................3
CS500 System Description........................................................................................................................................4
1211 Service Node....................................................................................................................................................5
Drive Bay Options............................................................................................................................................7
Rear View........................................................................................................................................................9
Front Controls and I/O Panel Features..........................................................................................................10
1211 Chassis Components...........................................................................................................................14
1211 Drive Backplanes..................................................................................................................................15
PCIe Riser Card Support...............................................................................................................................18
Power Supplies..............................................................................................................................................21
Chassis Cooling.............................................................................................................................................22
Internal Cabling.............................................................................................................................................25
3211 Compute Server..............................................................................................................................................31
Drive Bay Options..........................................................................................................................................32
Rear View......................................................................................................................................................34
Front Control Panel Buttons and LEDs.........................................................................................................35
3211 Chassis Components...........................................................................................................................40
Compute Node Tray......................................................................................................................................40
M.2 SSD Support................................................................................................................................45
System Boards and Internal Cabling.............................................................................................................46
CCS Environmental Requirements..........................................................................................................................49
S2600WF Motherboard Description........................................................................................................................50
Component Locations....................................................................................................................................52
Architecture...................................................................................................................................................57
Processor Socket Assembly..........................................................................................................................59
Memory Support and Population...................................................................................................................62
S2600BP Motherboard Description.........................................................................................................................66
Component Locations....................................................................................................................................68
Processor Socket Assembly..........................................................................................................................73
Architecture...................................................................................................................................................75
Processor Population Rules..........................................................................................................................76
Memory Support and Population Rules.........................................................................................................77
Configuration and Recovery Jumpers...........................................................................................................78
BIOS Features...............................................................................................................................................81

About the CS500 Hardware Guide


The Cray® CS500™ Hardware Guide describes system-level and server components and features including
chassis layout, system boards, cabling, and power and cooling subsystems.

Document Versions
H-6156 (Rev A)
October 2017. The initial release of the CS500 Hardware Guide including the 1211 and
3211 server chassis and Intel S2600BP and S2600WF motherboards.

Scope and Audience


This document provides information about the CS500 system. Installation and service information is provided for
users who have experience maintaining high performance computing (HPC) equipment. Installation and
maintenance tasks should be performed by experienced technicians in accordance with the service agreement.
The information is presented in topic-based format and does not include chapters, appendices, or section
numbering.

Feedback
Visit the Cray Publications Portal at http://pubs.cray.com. Email your comments and feedback to [email protected].
Your comments are important to us. We will respond within 24 hours.


CS500 System Description


The Cray® CS500™ system is a scalable, flexible system that consists of optimized, industry‑standard building
block server platforms unified into a fully integrated system. Cray CS500 systems provide superior price/
performance, energy efficiency, and configuration flexibility. The CS500 is an air-cooled cluster supercomputer
that uses standard 42U or 48U 19-inch rack cabinets.

Figure 1. CS500 System

The Cray CS500 system is an x86-64 Linux system that is designed for excellent computational performance.
The system can support features such as diskless provisioning of the operating system, virtual cluster
provisioning, remote monitoring, and out-of-band management. The system supports InfiniBand, Omni-Path, and
Ethernet high-speed networks, an Ethernet network (for provisioning and operations), and a dedicated Ethernet
management network.
There are two rackmount server platforms for the Cray
CS500 cluster supercomputer:
● 1211 service node
● 3211 compute server

1211 Service Node


The 1211 provides maximum configuration capabilities and is used for I/O intensive service
nodes that provide login and system management functions or as compute nodes that
require very large memories.
The 1211 uses an Intel S2600WF motherboard with 24 DIMM slots, no onboard LAN
(baseline) or optional dual 10-GbE LAN ports, and a 1 GbE port for dedicated management.
3211 Compute Server
The 3211 is a high-density server that delivers high performance and system
responsiveness with outstanding memory bandwidth for data-intensive applications.
The 3211 contains four nodes, each with its own Intel S2600BP motherboard with two Intel®
Xeon® Scalable family processors.


1211 Service Node


The Cray 1211 rackmounted service node is used in Cray CS500 systems. The 1211 is commonly used to
provide login and system management functions or as a high-memory compute or data storage node.
Figure 2. 1211 Server Chassis (callouts: chassis cover, cover removal captive thumbscrews, cover removal thumb pads, rail kit standoffs, drive bays, disk drive cage, front control panel; the control panel location differs with the drive configuration)

Table 1. 1211 Server Features

Feature Description
Chassis Type 19-in wide, 2U rackmounted chassis
Motherboard Options Intel S2600WF (Wolf Pass)
● S2600WF0 - no onboard LAN
● S2600WFT - dual 10GbE ports (RJ45)

Memory 24 DIMM slots

Up to 1536 GB (24 x 64 GB)

DDR4 2666 MT/s

GPU Options NVIDIA® Tesla® P100 PCIe card (250 W, 12/16 GB)

Power Supplies One or two 1300W AC power supply modules
Cooling ● System fan assembly: Six managed 60 mm fans
● Built-in air duct to support passively cooled processors
● Passive processor heatsinks
● Two in-line fans in each power supply module

Storage Options 2.5" hot swap drives


● 8x SATA/SAS/NVMe
● 12x SATA/SAS/up to 2 NVME
● 24x SATA/SAS/NVMe
3.5" hot swap drives
● 8x SATA/SAS
● 12x SATA/SAS/up to 2 NVME
2x internal fixed mount 2.5” SSDs
External I/O ● DB-15 video connectors: front and back (non-storage system configurations only)
● RJ-45 serial port A connector
● Dual RJ-45 network interface connectors (S2600WFT-based systems only)
● Dedicated RJ-45 server management NIC
● (3) USB 3.0 connectors on back panel
● (2) USB 3.0 connectors on front panel (non-storage system configurations only)
● (1) USB 2.0 connector on rack handle (storage configurations only)

Riser Card Support Support for three riser cards [PCIe 3.0: 8 GT/s]:
● Riser #1 – x24 – up to 3 PCIe slots
● Riser #2 – x24 – up to 3 PCIe slots
● Riser #3 – x16 – up to 2 PCIe slots (optional low profile cards)
With three riser cards installed, up to 8 add-in cards are supported:
● Risers 1 and 2: Four full height / full length + two full height / half length add-in cards
● Riser #3: Two low profile add-in cards (optional)


1211 Drive Bay Options


The CS500 1211 server supports a variety of different storage configurations. Options vary depending on the
server model and available accessory options installed. This section provides an overview of each available
option.
● 2.5” drives
○ 8x SATA/SAS/NVMe
○ 24x SATA/SAS/NVMe
● 3.5” drives
○ 8x SATA/SAS
○ 12x SATA/SAS/up to 2 NVME

Drive Numbering
Drive numbers in the following two figures show typical numbering schemes. However, actual drive numbering depends
on SAS/SATA controller configuration and backplane cabling. Drive backplanes use multi-port, mini-SAS HD
connectors for each set of four SATA/SAS drives. Backplanes that support PCIe NVMe drives also include a
single PCIe OCuLink connector for each supported NVMe drive.
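
Because the drive numbering seen by the operating system depends on the controller configuration and backplane cabling, it can be useful to confirm which physical port a block device is attached to before servicing a bay. The following is a minimal sketch for a Linux host, assuming udev populates the standard /dev/disk/by-path directory; it is illustrative only and not part of this guide's procedures.

    #!/usr/bin/env python3
    # Minimal sketch: list block devices by controller path so OS device names
    # (sda, sdb, nvme0n1, ...) can be matched to physical SAS/SATA/NVMe ports.
    import os

    BY_PATH = "/dev/disk/by-path"

    def port_to_device():
        mapping = {}
        for entry in sorted(os.listdir(BY_PATH)):
            if "-part" in entry:          # skip partition symlinks
                continue
            target = os.path.realpath(os.path.join(BY_PATH, entry))
            mapping[entry] = target
        return mapping

    if __name__ == "__main__":
        for path, dev in port_to_device().items():
            print(f"{path} -> {dev}")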
Figure 3. 2.5" Drive Options: the 8-drive configuration (hot swap drives 0-7, bay for additional hot swap drives 8-15, video DB15 connector, USB 3.0 ports, system label pull-out tab, front control panel) and the 24-drive configuration (hot swap drives 0-23, USB 2.0 port, front control panel)


Figure 4. 3.5" Drive Options: the 8-drive configuration (hot swap drives 0-7, system label pull-out tab, video DB15 connector, USB 2.0/3.0 ports, front control panel) and the 12-drive configuration (hot swap drives 0-11, support for two NVMe drives, USB 2.0 port, front control panel). Each set of four drives, shown with device numbers, uses a common cable connector on the backside of the backplane.

Drive LED Indicators


Drive trays for both 2.5" and 3.5" drives include separate amber and green LED indicators. Light pipes in the trays
direct light from LEDs on the backplane to the front of the drive tray, making them visible from the front of the
chassis.
The drive activity LED is driven by signals coming from the drive itself. Drive vendors may choose to operate the
activity LED differently from what is described in the table below. Should the activity LED on a given drive type
behave differently than what is described, customers should reference the drive vendor specifications for the
specific drive model to determine the expected drive activity LED operation.


Table 2. SAS/SATA/NVMe LED States

Amber status LED states:
● Off: no access and no fault
● Solid on: hard drive fault occurred
● Blink (1 Hz): RAID rebuild in progress
● Blink (2 Hz): locate (identify)

Green activity LED states:

Condition                         Drive Type   Behavior
Power on with no drive activity   SAS/NVMe     LED stays on
                                  SATA         LED stays off
Power on with drive activity      SAS/NVMe     LED blinks off when processing a command
                                  SATA         LED blinks on when processing a command
Power on and drive spun down      SAS/NVMe     LED stays off
                                  SATA         LED stays off
Power on and drive spinning up    SAS/NVMe     LED blinks
                                  SATA         LED stays off

The activity/green LED states for PCIe SSDs are the same as in the table above. The status/amber LED states are
different, as listed in the following table.
Table 3. PCIe* SSD drive status LED state

Color LED State Drive Status

Amber Off No fault, OK

Solid on Fault/fail

Blinking (1 Hz) Rebuild

Blinking (4 Hz) Locate (identify)
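
The locate (identify) blink pattern described above can typically also be driven from software on a Linux host that has enclosure management cabled as described in this guide, for example with the ledctl utility from the ledmon package. A minimal sketch follows; the device path is a placeholder, ledctl must be installed and run as root, and whether the pattern works depends on the controller and backplane.

    #!/usr/bin/env python3
    # Minimal sketch: turn the drive "Locate (identify)" blink pattern on, wait,
    # then turn it off, using ledctl from the ledmon package (run as root).
    import subprocess
    import time

    def locate(device, seconds=30):
        """Blink the locate LED for `device` (for example /dev/sda), then clear it."""
        subprocess.run(["ledctl", f"locate={device}"], check=True)
        time.sleep(seconds)
        subprocess.run(["ledctl", f"locate_off={device}"], check=True)

    if __name__ == "__main__":
        locate("/dev/sda")  # placeholder device name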

1211 Rear View


The 1211 server supports two 6 Gb/sec hot swap SATA SSDs installed in a modular drive bay in the back of the
system. Because of thermal limits in this area of the chassis, the rear hot swap drive bay cannot support hard disk
drives. The SSDs are installed in 2.5" drive carriers that connect to a backplane mounted in the rear of the module
bay.


Supported SATA SSDs must not exceed the following power and thermal limits:
● One or two SATA SSDs supporting up to 4 W per device with a case temperature rating of 70 °C
● One or two SATA SSDs supporting up to 1.5 W per device with a case temperature rating of 60 °C
Figure 5. 1211 Rear View (callouts: two 2.5" SATA SSDs (optional), riser card 3 bay, riser card 2 bay, serial port B (optional), riser card 1 bay, power supplies 1 and 2, NIC 1 and NIC 2 (RJ45), 3x stacked USB 2.0/3.0 ports, bay for optional OCP network expansion module, video (DB15), serial port A (RJ45), remote management port (RJ45))

1211 Front Control and I/O Panel Features


The 1211 server includes a control panel that provides push-button system controls and LED indicators for
several system features. Depending on the drive configuration, the front control panel may come in either of two
formats; however, both provide the same functionality.
Figure 6. 1211 Front Panel Controls and Indicators. Both control panel formats include: system cold reset button (tool required), drive activity LED, power button with integrated LED, system status LED, system ID button with integrated LED, NIC activity LEDs, and NMI button (tool required).

System ID button
Toggles the integrated ID LED and the blue motherboard ID LED on and off. The system ID
LED is used to visually identify a specific server installed in the rack or among several racks
of servers. The system ID LED can also be toggled on and off remotely using the IPMI
“chassis identify” command which causes the LED to blink for 15 seconds.
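
The same chassis identify behavior can be scripted from a management host. The following minimal sketch assumes the standard ipmitool utility is installed; the BMC address, user name, and password shown are placeholders, not values from this guide.

    #!/usr/bin/env python3
    # Minimal sketch: blink the system ID LED of one server through its BMC.
    import subprocess

    def blink_id_led(bmc_host, user, password, seconds=15):
        """Ask the BMC to blink the chassis/system ID LED for the given interval."""
        cmd = [
            "ipmitool", "-I", "lanplus",
            "-H", bmc_host, "-U", user, "-P", password,
            "chassis", "identify", str(seconds),
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        blink_id_led("192.0.2.10", "admin", "password")  # placeholder credentials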


NMI button
When the NMI button is pressed, it puts the server in a halt state and issues a non-
maskable interrupt (NMI). This can be useful when performing diagnostics for a given issue
where a memory download is necessary to help determine the cause of the problem. To
prevent an inadvertent system halt, the actual NMI button is recessed from the front panel
where it is accessible only with a small tipped tool like a pin or paper clip.
NIC activity LEDs
An activity LED is included for each onboard network interface controller (NIC). When a
network link is detected, the LED turns on solid. The LED blinks consistently while the
network is being used.
System cold reset button
Pressing this button reboots and reinitializes the system.
System Status LED
This LED lights green or amber to indicate the current health of the server. This feature is
also provided by an LED on the back edge of the motherboard. Both LEDs are tied together
and show the same state. The System Status LED states are driven by the on-board
platform management subsystem. A description of each LED state for the server follows.

Off. The system is not operating (not ready):
● System is powered off (AC and/or DC)
● System is in Energy-using Product (EuP) Lot6 Off mode/regulation¹
● System is in S5 Soft-off state

Green, solid on. OK. Indicates the system is running (in S0 state) and its status is healthy; there are no system
errors. AC power is present and the BMC has booted and management is up and running. After a BMC reset, with
the chassis ID solid on, the BMC is booting Linux; control has been passed from BMC uBoot to BMC Linux. It will
be in this state for ~10-20 seconds.

Green, blinking (~1 Hz). Degraded: the system is operating in a degraded state although still functional, or is
operating in a redundant state but with an impending failure warning. System degraded:
● Power supply/fan redundancy loss
● Fan warning or failure
● Non-critical threshold crossed (temperature, voltage, power)
● Power supply failure
● Unable to use all installed memory
● Correctable memory errors beyond threshold
● Battery failure
● Error during BMC operation

Amber, solid on. Critical, non-recoverable: the system is halted. Fatal alarm; the system has failed or shut down.

Amber, blinking (~1 Hz). Non-critical: the system is operating in a degraded state with an impending failure
warning, although still functioning. Non-fatal alarm; the system will likely fail:
● Critical threshold crossed (temperature, voltage, power)
● Hard drive fault
● Insufficient power from PSUs
● Insufficient cooling fans

1. System power consumption is described in terms of System Power States. There are six power states, ranging
from S0 (the system is completely powered on and fully operational) to S5 (the system is completely powered
off); the intermediate states (S1, S2, S3, and S4) are referred to as sleeping states.
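
As a quick cross-check of these S-states from a running Linux node, the kernel reports which sleep targets the platform and operating system expose. The following minimal sketch reads a standard Linux sysfs path; it is not specific to this server.

    #!/usr/bin/env python3
    # Minimal sketch: show which ACPI sleep targets the running kernel exposes.
    # /sys/power/state lists values such as "freeze mem disk".
    def supported_sleep_states(path="/sys/power/state"):
        try:
            with open(path) as f:
                return f.read().split()
        except FileNotFoundError:
            return []

    if __name__ == "__main__":
        states = supported_sleep_states()
        print("Supported sleep targets:", ", ".join(states) or "none reported")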

Drive activity LED


This LED indicates drive activity from the on-board storage controllers. The motherboard
also provides a header to give access to this LED for add-on controllers.
Power/sleep button
This button toggles the system power on and off. Pressing this button sends a signal to the
BMC, which either powers the system on or off. This button also functions as a sleep button
if enabled by an ACPI compliant operating system. The integrated LED is a single color
(Green) and is capable of supporting different indicator states as defined in the following
table.

State Power Mode LED Description


Power-off Non-ACPI Off System power is off, and the BIOS has not initialized the
chipset.
Power-on Non-ACPI On System power is on.
S5 ACPI Off Mechanical is off and the operating system has not saved any
context to the hard disk.
S0 ACPI On System and the operating system are up and running.


I/O Panel Features


Systems configured with eight 3.5” hard drive bays or up to sixteen 2.5” hard drive bays will also include an I/O
Panel providing additional system I/O features.
Figure 7. Front I/O Panels (callouts: video DB15 connector with USB 3.0 ports; video DB15 connector with USB 2.0/3.0 ports)

Video connector
A monitor can be connected to the video connector on the front I/O panel. When BIOS
detects that a monitor is attached to this video connector, it disables video signals routed to
the video connector on the back of the chassis. Video resolution from the front connector
may be lower than from the rear on-board video connector. A short video cable should be
used for best resolution. The front video connector is cabled to a 2x7 header on the server
board labeled “FP Video”.
USB 2.0/3.0 Ports
The front I/O panel includes two USB 2.0/3.0 ports. The USB ports are cabled to a blue 2x5
connector on the server board labeled “FP_USB”.
Due to signal strength limits associated with USB 3.0 ports cabled to a front panel, some
marginally compliant USB 3.0 devices may not be supported from these ports.


1211 Chassis Components


The following illustration provides a general overview of the major components of the server. The major
components are described in greater detail in the following subsections.
Figure 8. 1211 Chassis Components (callouts: bay for OCP network adapter module, PCIe riser bays 1-3, riser card bracket for bay 1, riser card support bracket for bays 2 and 3, integrated RAID module, motherboard, CPU 1, CPU 2, DDR4 DIMMs, hot swap system fans, 2.5" SSD bay (2x), power supply bays (2x), rail kit standoffs (4x), chassis, hot swap drives, front control panel (location differs by model), air duct)


1211 Drive Backplanes


The 1211 chassis uses different backplanes to support different drive configurations.
For 2.5” drives, backplane options:
● 8 x 2.5” drive combo backplane with support for SAS/SATA/NVMe
● 24 x 2.5" chassis uses three 8 x 2.5" backplanes with jumper cables attaching the I2C connectors
● 2 x 2.5” drive rear mount SATA SSD backplane
For 3.5” drives, backplane options:
● 8 x 3.5” drive backplane with support for SAS/SATA
● 12 x 3.5" drive backplane

8 x 2.5” Drive SATA/SAS/NVMe Combo Backplane


Chassis configurations supporting 2.5” drives include one or more eight drive backplanes capable of supporting
12 Gb/s SAS, 6 Gb/s SATA drives, and PCIe NVMe drives.
The front side of the backplane includes 68-pin SFF-8639 drive interface connectors, each capable of supporting
SAS, SATA, or NVMe drives. The connectors are numbered 0 through 7.
The backside of the backplane includes two multi-port mini-SAS HD connectors labeled SAS/SATA_0-3 and SAS/
SATA_4-7, and eight PCIe OCuLink connectors, each labeled PCIeSSD#, where # = 0-7, one connector for each
installed NVMe drive.
Figure 9. 8 x 2.5" SATA/SAS/NVMe Backplane (front side: drive connectors 0-7; back side: mini-SAS HD connectors for SAS/SATA drives 0-3 and 4-7, I2C in and I2C out connectors, power connector, and PCIe OCuLink connectors PCIeSSD0-7)

I2C cable connectors


The backplane includes two 5-pin cable connectors (labeled I2C_IN and I2C_OUT) used as
a management interface between the motherboard and the installed backplanes. In systems
configured with multiple backplanes (24 drives), a short jumper cable is attached between
backplanes, with connector A used on the first board and connector D used on the second
board, extending the SMBus to each installed backplane.


Multi-port mini-SAS HD cable connectors


The backplane includes two multi-port mini-SAS HD cable connectors (labeled PORT 0-3
and PORT 4-7), each providing SGPIO and I/O signals for up to four SAS/SATA devices
installed in the hot swap drive bay. Input cables can be routed from matching connectors on
the motherboard (on-board SATA only), from installed add-in SAS/SATA RAID cards, or
from an optionally installed SAS expander card for drive configurations of greater than eight
hard drives.
Power harness connector
The backplane includes a 2x2 connector supplying power to the backplane. Power is routed
to each installed backplane via a multi-connector power cable harness from the
motherboard.
PCIe OCuLink Connectors
The backplane has support for up to eight (8) NVMe SFF SSDs. The backside of the
backplane includes eight OCuLink* cable connectors, one for each drive connector on the
front side of the backplane. Each installed NVMe drive must have PCIe signals cabled to
the appropriate backplane OCuLink connector from onboard OCuLink connectors or PCIe
add-in cards.
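
Once NVMe drives are cabled to the backplane OCuLink connectors and visible to the node, they can be enumerated from Linux through sysfs (or with the nvme-cli tool). The following minimal sketch reads standard sysfs attributes and is not specific to this backplane.

    #!/usr/bin/env python3
    # Minimal sketch: list NVMe controllers the kernel has enumerated, with model
    # and serial number read from standard sysfs attributes.
    import os

    NVME_CLASS = "/sys/class/nvme"

    def list_nvme():
        drives = []
        if not os.path.isdir(NVME_CLASS):
            return drives
        for ctrl in sorted(os.listdir(NVME_CLASS)):
            base = os.path.join(NVME_CLASS, ctrl)
            def read(attr):
                try:
                    with open(os.path.join(base, attr)) as f:
                        return f.read().strip()
                except OSError:
                    return "unknown"
            drives.append((ctrl, read("model"), read("serial")))
        return drives

    if __name__ == "__main__":
        for name, model, serial in list_nvme():
            print(f"{name}: {model} (SN {serial})")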

2 x 2.5" Rear Accessory Drive Backplane


The 1211 service node supports an optional two 6-Gb/s hot swap SATA SSD drive bay accessory kit. The drive
bay is mounted in the back of the chassis. Because of thermal limits in this area of the chassis, this drive bay
option does not support hard disk drives.
By lowering the maximum supported ambient air temperature to 27°C, and limiting the system configuration to
support 8 or 16 devices up front and no storage devices configured on the air duct, supported SATA SSDs must
not exceed the following power and thermal limits:
● 1 or 2 SATA SSDs supporting up to 6.4W per device with a case temperature rating of 70°C
● 1 or 2 SATA SSDs supporting up to 3.6W per device with a case temperature rating of 60°C
The backplane includes several connectors and a jumper block, as defined in the following figure. Refer to 2 x
2.5" Rear Accessory Drive Cabling on page 30 for a cabling diagram.
Figure 10. 2 x 2.5" Backplane (back side: SGPIO connector, SATA drive 0 and SATA drive 1 connectors, power connector, I2C connector; front side: SATA drive connectors)


I2C connector
The backplane includes a 1x5 pin I2C connector. This connector is cabled to a matching
HSBP I2C connector on the motherboard and is used as a communication path to the
onboard BMC.
SGPIO connector
The backplane includes a 1x5 pin Serial General Purpose Input/Output (SGPIO) connector.
When the backplane is cabled to the on-board SATA ports, this connector is cabled to a
matching SGPIO connector on the motherboard, and provides support for drive activity and
fault LEDs.

8 x 3.5” Drive Backplane


This backplane supports up to 6 Gb/s SATA drives or up to 12 Gb/s SAS drives. Mounted on the front side of the
backplane are eight 29-pin (SFF-8482) drive connectors. On the backside of the backplane are connectors for
power, I2C management, and SAS/SATA data.
Figure 11. 8 x 3.5" Backplane (back side: power connector, mini-SAS HD connectors for drives 4-7 and drives 0-3, I2C connector; front side: drive connectors)

Power harness connector


The backplane includes a 2x2 connector supplying power to the backplane. Power is routed
to the backplane through a power cable harness from the motherboard.
Mini-SAS HD cable connectors
The backplane includes two multi-port mini-SAS cable connectors, each providing SGPIO
and I/O signals for four SAS/SATA hard drives on the backplane. Cables can be routed from
matching connectors on the motherboard, from add-in SAS/SATA RAID cards, or from an
optionally installed SAS expander card. Each mini-SAS HD connector includes a silk-screen
identifying which drives the connector supports: drives 0-3 and drives 4-7.
I2C cable connector
The backplane includes a 5-pin cable connector used as a management interface to the
motherboard.

H-6156 (Rev A) 17
1211 Service Node

12 x 3.5” Drive Backplane


This backplane supports 6 Gb/s SATA, 12 Gb/s SAS, and up to two PCIe NVMe drives. Mounted on the front side
of the backplane are ten 29-pin (SFF-8482) drive connectors supporting SAS/SATA drives only, and two 68-pin
SFF-8639 drive connectors supporting SAS/SATA/NVMe drives. On the backside of the backplane are connectors
for power, I2C management, SAS/SATA data, and PCIe NVMe.
The backplane includes two PCIe OCuLink cable connectors (labeled PCIe SSD0 and PCIe SSD1), each
providing support for one NVMe drive.
Figure 12. 12 x 3.5" Drive Backplane (back side: PCIe SSD0 and PCIe SSD1 OCuLink connectors, power connectors, mini-SAS HD connectors for drives 0-3, 4-7, and 8-11, I2C connector; front side: drive connectors)

PCIe Riser Card Support


The motherboard provides three riser card slots. Based on the PCIe specification, each riser card slot supports a
maximum 75W of power. The PCIe* bus lanes for each riser card slot are supported by each of the two installed
processors.
The riser card slots are specifically designed to support riser cards only. Attempting to install a PCIe* add-in card
directly into a riser card slot may damage the motherboard, the add-in card, or both.
A dual processor configuration is required when using slot 2 and slot 3, as well as the bottom add-in card slot for
2U riser cards installed in slot 1. Slot 3 does not support SMBus device aliasing. SMBus aliasing prevents devices
with common address requirements from conflicting with each other. Any PCIe add-in card requiring SMBus
support should be installed into an available add-in card slot in riser 1 or 2.


Figure 13. PCIe Add-in Card Support. The figure shows the low profile riser card in bay 3 and the 3-slot or 2-slot riser cards in bays 1 and 2, with each add-in card slot controlled by either CPU 1 or CPU 2. Numbers in circles indicate the PCIe slot enumeration order managed by the operating system.

Add-in card sizes:
● Bays 1 and 2: Top and middle slots - full height, full length; bottom slot - full height, half length
● Bay 3: Both slots - low profile


Figure 14. PCIe Port Mapping
● Riser slot 1: x24 (x16 from CPU 1 + x8 from CPU 2)
● Riser slot 2: x24 (x24 from CPU 2)
● Riser slot 3: x12 (x8 from CPU 2 + x4 DMI from CPU 2)

Riser slot 1 root port mapping

Slot     3-Slot Riser Card                              2-Slot Riser Card
Top      CPU 1 – Ports 1A and 1B (x8 elec, x16 mech)    CPU 1 – Ports 1A thru 1D (x16 elec, x16 mech)
Middle   CPU 1 – Ports 1C and 1D (x8 elec, x16 mech)    N/A
Bottom   CPU 2 – Ports 1C and 1D (x8 elec, x8 mech)     CPU 2 – Ports 1C and 1D (x8 elec, x8 mech)

Riser slot 2 root port mapping

Slot     3-Slot Riser Card                              2-Slot Riser Card
Top      CPU 2 – Ports 2A and 2B (x8 elec, x16 mech)    CPU 2 – Ports 2A thru 2D (x16 elec, x16 mech)
Middle   CPU 2 – Ports 2C and 2D (x8 elec, x16 mech)    N/A
Bottom   CPU 2 – Ports 1A and 1B (x8 elec, x8 mech)     CPU 2 – Ports 1A and 1B (x8 elec, x8 mech)

Riser slot 3 root port mapping (low profile riser card; low profile cards only)

Slot     Low Profile Riser Card
Top      CPU 2 – DMI x4 (x4 elec, x8 mech)
Bottom   CPU 2 – Ports 3C and 3D (x8 elec, x8 mech)
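
To confirm which root port (and therefore which processor) an installed add-in card enumerated under, the PCIe topology reported by the operating system can be compared against the mapping above. The following minimal sketch assumes a Linux host with the pciutils lspci tool installed.

    #!/usr/bin/env python3
    # Minimal sketch: dump the PCIe device tree so an add-in card's bus address can
    # be traced back to its root port and compared with the riser port mapping.
    import subprocess

    def pcie_tree():
        """Return lspci's tree view of the PCIe hierarchy as text."""
        result = subprocess.run(["lspci", "-tv"], capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        print(pcie_tree())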

Riser Card Bracket Assemblies


The system includes two different riser card assemblies, one supporting riser slot 1 and one supporting both riser
slots 2 and 3 in a back-to-back butterfly configuration. Two guide brackets on the air duct provide support for full
height/full length add-in cards when installed in either the middle or top add-in card slots of each installed riser
card assembly.
When installed, riser slot 3 supports up to two low profile add-in cards. To avoid possible add-in card bracket
interference when installing add-in cards into both riser card 2 and 3, add-in cards in riser 2 should be installed
before those to be installed in riser 3.


No tools are needed to install the riser card assemblies into the chassis. Hooks on the back edge of the riser card
assembly are aligned with slots on the chassis, then each assembly is pushed down into the respective riser card
slots on the motherboard.
Figure 15. Riser Card Assembly Installation (callouts: hooks (2) on the riser card assembly, slots (2) on the chassis)

Power Supplies
The 1211 chassis uses two 1300W power supply modules in a 1+1 redundant power configuration. Each power
supply module has dual inline 40mm cooling fans with one mounted inside the enclosure and the other extending
outside the enclosure. The power supplies are modular and can be inserted and removed from the chassis
without tools. When inserted, the card edge connector of the power supply mates blindly to a matching slot on the
motherboard. In the event a power supply fails, hot-swap replacement is available.
Figure 16. 1300W Power Supply (1300W AC common redundant power supply (CRPS) module, 80+ Titanium efficiency)
AC input connector: C14
AC input voltage range: 115 VAC to 220 VAC


Redundant 1+1 power is automatically configured depending on the total power draw of the chassis. If total
chassis power draw exceeds the power capacity of a single power supply, then power from the second power
supply module is used. Should this occur, power redundancy is lost.
CAUTION: Power supply units with different wattage ratings. Installing two power supply units with different
wattage ratings in a system is not supported. Doing so will not provide power supply redundancy and will result in
multiple errors being logged by the system.

The power supply recovers automatically after an AC power failure. AC power failure is defined to be any loss of
AC power that exceeds the dropout criteria.
The power supplies have over-temperature protection (OTP) circuits that protect the power supplies against high
temperature conditions caused by loss of fan cooling or excessive chassis/ambient temperatures. In an OTP
condition, the power supplies will shut down. Power supplies restore automatically when temperatures drop to
specified limits, while the 12 VSB always remains on.
The server has a throttling system to prevent the system from crashing if a power supply module is overloaded or
overheats. If server system power reaches a preprogrammed limit, system memory and/or processors are
throttled back to reduce power. System performance is impacted if this occurs.

Power Supply Status LED


The 1300W power supply modules have a single two-color LED to indicate power supply status:

LED State             Power Supply Condition
Off                   No AC power to all power supplies
Solid green           Power on and OK
Blinking green, 1 Hz  AC present, only 12 VSB on (PS off), or PS in cold redundant state
Blinking green, 2 Hz  Power supply firmware updating
Solid amber           AC cord unplugged or AC power lost, with a second power supply in parallel still with AC input power
Blinking amber, 1 Hz  Power supply warning events where the power supply continues to operate: high temp, high power, high current, slow fan
Solid amber           Power supply critical event causing a shutdown: failure, OCP, OVP, fan fail
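
Beyond the physical LED, power supply status is also exposed through the BMC's sensor data records and can be polled remotely. The following minimal sketch assumes ipmitool is installed; the BMC address and credentials are placeholders.

    #!/usr/bin/env python3
    # Minimal sketch: read power-supply sensor records from the BMC so PSU health
    # can be checked without looking at the physical status LED.
    import subprocess

    def psu_sensors(bmc_host, user, password):
        cmd = [
            "ipmitool", "-I", "lanplus",
            "-H", bmc_host, "-U", user, "-P", password,
            "sdr", "type", "Power Supply",
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        print(psu_sensors("192.0.2.10", "admin", "password"))  # placeholder credentials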

Power Supply Fans


Each installed power supply module includes embedded (non-removable) 40-mm fans. They are responsible for
airflow through the power supply module. These fans are managed by the fan control system. Should a fan fail,
the power supply shuts down.

1211 Chassis Cooling


Several components within the 1211 chassis are used to dissipate heat from within the chassis. Components
include six system fans (fan assembly), a fan integrated into each installed power supply module, an air duct,
populated drive carriers, and installed CPU heat sinks. Drive carriers can be populated with a storage device
(SSD or HDD) or a supplied drive blank. In addition, it may be necessary to have specific DIMM slots populated
with DIMMs or supplied DIMM blanks.
The CPU 1 processor and heatsink must be installed first. The CPU 2 heatsink must be installed at all times, with
or without a processor installed.
Figure 17. System Fans (callouts: air duct, air flow direction, fan module assembly with six fans, individual fan)

With fan redundancy, should a single fan failure occur (system fan or power supply fan), integrated platform
management changes the state of the system status LED to blinking green, reports an error to the system event
log, and automatically adjusts fan speeds as needed to maintain system temperatures below maximum thermal
limits.
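
Because platform management records these events in the system event log, a failed fan can be confirmed without opening the chassis. The following minimal sketch reads the SEL through the local in-band BMC interface; it assumes ipmitool and an IPMI kernel driver are available and that it is run as root.

    #!/usr/bin/env python3
    # Minimal sketch: scan the BMC system event log (SEL) for fan-related entries.
    import subprocess

    def fan_events():
        """Return SEL lines that mention fans (in-band access; requires root)."""
        result = subprocess.run(["ipmitool", "sel", "elist"],
                                capture_output=True, text=True, check=True)
        return [line for line in result.stdout.splitlines() if "fan" in line.lower()]

    if __name__ == "__main__":
        for event in fan_events():
            print(event)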
All fans in the fan assembly and power supplies are controlled independent of each other. The fan control system
may adjust fan speeds for different fans based on increasing/decreasing temperatures in different thermal zones
within the chassis.
If system temperatures continue to increase above thermal limits with system fans operating at their maximum
speed, platform management may begin to throttle bandwidth of either the memory subsystem or processors or
both, to keep components from overheating and keep the system operational. Throttling of these subsystems will
continue until system temperatures are reduced below preprogrammed limits.
The power supply module will shut down if its temperature exceeds an over-temperature protection limit. If system
thermals increase beyond the maximum thermal limits, the server will shut down, the System Status LED changes
to solid amber, and the event is logged to the system event log. If power supply temperatures increase beyond
maximum thermal limits, or if a power supply fan fails, the power supply will shut down.

System Fans
The system is designed for fan redundancy when configured with two power supply modules. Should a single
system fan fail, platform management adjusts air flow of the remaining system fans and manages other platform
features to maintain system thermals. Fan redundancy is lost if more than one system fan is in a failed state.
The fan assembly must be removed when routing cables inside the chassis from back to front, or when
motherboard replacement is necessary.
The system fan assembly is designed for ease of use and supports several features:
● Each individual fan is hot-swappable.
● Each fan is blind mated to a matching 6-pin connector located on the motherboard.
● Each fan is designed for tool-less insertion and extraction from the fan assembly.
● Each fan has a tachometer signal that allows the integrated BMC to monitor its status.
● Fan speed for each fan is controlled by integrated platform management. As system thermals fluctuate high
and low, the integrated BMC firmware increases and decreases the speeds to specific fans within the fan
assembly to regulate system thermals.
● An integrated fault LED is located on the top of each fan. Platform management illuminates the fault LED for
the failed fan.

Chassis Air Duct


An air duct provides proper air flow for add-on cards that use a passive heatsink. The air duct is mounted behind
the fan assembly. Cables in the chassis that run from front to back are routed in cable channels between the
chassis sidewall and sidewalls of the air duct. Cables should not be run through the center of the chassis or
between the system fans and DIMM slots.
Figure 18. Chassis Air Duct (callouts: clear plastic add-in card support brackets (2), air duct posts (2), tabs that snap underneath the top edge of the riser card assemblies, alignment tabs (3) that align to matching slots in the fan assembly, dual SATA SSD or RAID mounting location, black plastic air duct left side wall, air duct right side wall; the side walls screw to the motherboard)

Always operate rackmount servers with the air duct in place. The air duct is required for proper airflow within the
server chassis. Air ducts can vary between different rackmount server models.


Internal Cabling
The system fan must be removed when routing cables internally from front to back. All cables should be routed
using the cable channels in between the chassis sidewalls and the air duct side walls as shown by the blue
arrows in the following illustration. When routing cables front to back, none should be routed through the center of
the chassis or between system fans or DIMM slots.
Cable routing diagrams for each of the different drive backplane configurations appear on the following pages.
Figure 19. Internal Cable Routing Channels


Figure 20. 8 x 2.5" Cabling. Motherboard connectors shown: PCIe SSD 0 and PCIe SSD 1, SATA 0-3, SATA 4-7, front panel USB 2.0, front panel USB 2.0/3.0, front panel video, standard front panel control, HSBP power, HSBP I2C, and fans 1-6. Cable legend: power cable, SAS/SATA cable, I2C cable, front control panel and I/O cable.


Figure 21. 24 x 2.5" Cabling. The same motherboard connectors shown in Figure 20 are cabled to three 8 x 2.5” backplanes. Cable legend: power cable, SAS/SATA cable, I2C cable, front control panel and I/O cable.


Figure 22. 8 x 3.5" Cabling. The same motherboard connectors shown in Figure 20 are cabled to one 8 x 3.5” backplane. Cable legend: power cable, SAS/SATA cable, I2C cable, front control panel and I/O cable.


Figure 23. 12 x 3.5" Cabling. The same motherboard connectors shown in Figure 20 are cabled to one 12 x 3.5” backplane. Cable legend: power cable, SAS/SATA cable, I2C cable, front control panel and I/O cable.


Figure 24. 2 x 2.5" Rear Accessory Drive Cabling. The motherboard sSATA 4 and sSATA 5 ports, the HSBP SGPIO and SATA cable bundle, the I2C connector, and the peripheral power connector are cabled to the 2 x 2.5” backplane. Cable legend: power cable, SAS/SATA cable, I2C/SGPIO cable.


3211 Compute Server


The 3211 rackmount compute server is used in Cray CS500 systems. The 3211 is designed to deliver high
performance and system responsiveness with outstanding memory bandwidth for data-intensive applications.
The 3211 is a 2U rackmount server that contains four compute modules/nodes, each with an Intel S2600BP
(Buchanan Pass) motherboard. The S2600BP contains two Intel® Xeon® Scalable family processors.
Figure 25. CS500 3211 Server Chassis (callouts: power supply units (2), compute node trays (4), power distribution module cover, disk drive cage, drive bays, front control panels (one on each side))

Table 4. 3211 Features

Feature Description
Chassis Type ● 19-inch wide, 2U rackmount chassis
● Up to four compute modules/nodes

Motherboard Intel® S2600BP (Buchanan Pass)


● S2600BPB – Dual 10GbE RJ45 NIC ports
● S2600BPS – Dual 10GbE SFP+ NIC ports

Processors ● Up to two Intel® Xeon® Scalable family processors


● Support for Intel Xeon Scalable family with Intel Omni-path Integrated fabric
connectors – One 100 Gb/s port per processor

Memory 16 DIMM slots per node


Up to 1024 GB per node

DDR4 (2400|2666 MT/s)

PCIe Network Cards ● Ethernet 1G/10G/40G/100G


● InfiniBand EDR and HDR (Q1-2018)
● Omni-Path fabric

SATA Support Onboard each node:


● Four SATA 6 Gbps ports via Mini-SAS HD (SFF-8643) connector

M.2 Support Onboard each node:


● One 42 mm M.2 SATA/PCIe (x4)
● One 80 mm M.2 PCIe (x4) on back of riser card 2

Power supplies Two 2130W power supplies (80 Plus Platinum efficiency)
Cooling ● Three 40 x 56 mm dual-rotor fans per node optimized by fan speed control
● One transparent air duct per node
● One passive processor heatsink per node
● One 40 mm fan in each power supply unit

Storage options 2.5" hot swap drives


● 4x SATA/SAS
● 24x SATA/SAS/up to 8 NVMe
3.5" hot swap drives
● 12x SATA/SAS

Riser support (per node) ● Slot 1: One PCIe 3.0 x16


○ Supports a low-profile adapter
● Slot 2: One PCIe 3.0 x24
○ x16 low-profile adapter
○ x4 low-profile adapter when fabric is used

3211 Drive Bay Options


The 3211 server supports a variety of different storage configurations. Options vary depending on the server
model and available accessory options installed.
The drives may be electrically hot swapped while chassis power is applied. However, use caution when hot
swapping a drive while the compute module is running under operating system/application control, or data may be
lost. Replace a faulty drive only with one from the same manufacturer with the same model and capacity.
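
As a practical precaution before pulling a drive from a running compute module, the device can first be quiesced from the operating system. The following minimal sketch applies to a Linux node; the device name is a placeholder, all filesystems on the drive are assumed to be unmounted and idle, root privileges are required, and the exact procedure depends on the controller and operating system.

    #!/usr/bin/env python3
    # Minimal sketch: gracefully offline a SATA/SAS disk before physically removing
    # it. Assumes filesystems are already unmounted and I/O has stopped (run as root).
    import subprocess

    def offline_disk(dev="sdb"):  # placeholder device name
        subprocess.run(["sync"], check=True)           # flush dirty buffers to disk
        delete_path = f"/sys/block/{dev}/device/delete"
        with open(delete_path, "w") as f:              # detach the device from the SCSI layer
            f.write("1\n")
        print(f"/dev/{dev} is offline; the drive can now be removed from the bay.")

    if __name__ == "__main__":
        offline_disk("sdb")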


Drive numbering. The following figure shows numbers/groups for drives routed to the same compute node
through the backplane. These numbers/groups are not indicated on the hardware.
● 2.5” drives
○ 4x SATA (6 Gbps) / SAS (12 Gbps)
○ 24x SATA (6 Gbps) / SAS (12 Gbps)/ NVMe (8 total, max. 2 per node)
● 3.5” drives
○ 12x SATA (6 Gbps) / SAS (12 Gbps)
Figure 26. Front Bay Drive Options. The 4 x 2.5” configuration includes 4x 3.5” drive carriers; however, to maintain the thermal requirements to support 165 W TDP processors, only 2.5” drives are supported. In the 24 x 2.5” and 12 x 3.5” configurations, the drives are grouped by the compute node they are routed to (nodes 1-4), with a front control panel on each side of the chassis.


● For 24 x 2.5” drive configurations, the drive bay supports 12 Gb/s SAS or 6 Gb/s SATA drives. The SAS drives
are hot‑swappable. The front side of the backplane includes 24 drive interface connectors. All 24 connectors
can support SAS drives, but only connectors #4 and #5 of each compute module are capable of supporting
PCIe SFF devices. Two different drive carriers are included in the drive bay. Drive carriers with a blue latch
identify support for PCIe SFF devices or SAS drives; drive carriers with a green latch identify support for SAS
drives only.
● NVMe SSDs have hot swap / hot plug capability. Support and usage models are OS dependent.
● For a given compute node, any combination of NVMe and SAS drives can be supported, as long as the
number of NVMe drives does not exceed two and they are installed only in the last two drive connectors on
the backplane (4 and 5) and the remaining drives are SAS drives (0, 1, 2, 3).
● Mixing of NVMe and SAS drives in an alternating manner is not a recommended configuration.

3211 Rear View


The 3211 chassis supports four 1U compute modules, each designed to operate as a single system node within
the four node chassis. Each module supports two PCIe 3.0 low-profile add-in cards, slot 1 (x16) and slot 2 (x24).
Each compute module bay in the chassis requires either an installed and powered-up compute module or a
dummy tray cover to maintain a proper thermal environment for the other running compute modules in the same
chassis. If a module fails, remove the failed module and replace it with a dummy tray cover until a replacement
module is installed.


Figure 27. CS500 3211 Rear View. Module bays: module 4, power supply 2, and module 3 on top; module 2, power supply 1, and module 1 on the bottom. Each compute module provides: slot 1 riser add-in card bay, video port (VGA)¹, slot 2 riser add-in card bay, dedicated management port (RJ45), NIC 1 and NIC 2², and 2x stacked USB 3.0 ports.

1. The VGA port is provided as a default if no slot 1 add-in card is ordered.
2. The onboard NIC ports vary (RJ45 | SFP+) with different motherboard models.

Front Control Panel Buttons and LEDs


The Cray 3211 server chassis contains a set of control panels in the left and right rack handles. Each control
panel contains two sets of control buttons and LEDs, one for each compute node. Each control panel assembly is
pre-assembled and fixed within the rack handle.
The control panel houses two independent LEDs and two buttons with integrated LEDs for each node, which
display the node's operating status. The system BIOS and the integrated BMC provide the functions for the
control panel buttons and LEDs.


Figure 28. Front Panel Controls and Indicators (callouts: power button with LED, ID button with LED, network link/activity LED, status LED)

Power button with LED. Toggles the node power on and off. Pressing this button sends a signal to the BMC,
which either powers the system on or off. The integrated LED is a single color (green) and is capable of
supporting different indicator states.
The power LED sleep indication is maintained on standby by the chipset. If the compute node is powered down
without going through the BIOS, the LED state in effect at the time of power off is restored when the compute
node is powered on until the BIOS clear it.
If the compute node is not powered down normally, it is possible the Power LED will blink at the same time the
compute node status LED is off due to a failure or configuration change that prevents the BIOS from running.

State Power Mode LED Description


Power-off Non-ACPI Off Node power is off, and the BIOS has not initialized the
chipset.
Power-on Non-ACPI On Node power is on, and the BIOS has not initialized the
chipset.
S5 ACPI Off Mechanical is off and the operating system has not saved any
context to the drive.
S1 ACPI Blink DC power is still on. The operating system has saved context
and changed to a low-power state. (Blink rate is ~ 1Hz at 50%
duty cycle.)

S0 ACPI On Node and operating system are up and running.

ID button with LED. Toggles the integrated ID LED and blue ID LED on the rear of the node motherboard on and
off. The ID LED is used to visually identify a specific compute node in the server chassis or among several
servers in the rack. If the LED is off, pushing the ID button lights the ID LED. Issuing a chassis identify command
causes the LED to blink. The LED remains lit until the button is pushed again or until a chassis identify command
is received.
Network link/Activity LED. When a network link from the compute node is detected, the LED turns on solid. The
LED blinks consistently while the network is being used.
Status LED. This is a bicolor LED that is tied directly to the Status LED on the motherboard (if present). This LED
indicates the current health of the compute node.

Color   Condition   What it describes
Green   Off         Power off: compute node unplugged. Power on: compute node powered off and in standby,
                    no prior degraded/non-critical/critical state.
Green   On          Compute node ready; no alarm.
Green   Blinking    Compute node ready but degraded: redundancy lost, such as a power supply or fan failure;
                    non-critical temperature/voltage threshold; battery failure; or predictive power supply failure.
Amber   On          Critical alarm: critical power node failure, critical fan failure, voltage (power supply),
                    critical temperature and voltage.
Amber   Blinking    Non-critical alarm: redundant fan failure, redundant power node failure, non-critical
                    temperature and voltage.

When the compute node is powered down (transitions to the DC-off state or S5), the BMC is still on standby
power and retains the sensor and front panel Status LED state established before the power-down event.
When AC power is first applied to the compute node, the Status LED turns solid amber and then immediately
changes to blinking green to indicate that the BMC is booting. If the BMC boot process completes with no errors,
the Status LED will change to solid green.
When power is first applied to the compute node and 5V-STBY is present, the BMC controller on the motherboard
requires 15-20 seconds to initialize. During this time, the compute node status LED will be solid on, both amber
and green. Once BMC initialization has completed, the status LED will stay green solid on. If power button is
pressed before BMC initialization completes, the compute node will not boot to POST.

Off. The system is not operating (not ready):
● System is powered off (AC and/or DC).
● System is in EuP Lot6 Off Mode.
● System is in S5 Soft-Off State.

Green, solid on. Okay. Indicates that the system is running (in S0 state) and its status is 'Healthy'. The system is
not exhibiting any errors. AC power is present and the BMC has booted and manageability functionality is up and
running. After a BMC reset, and in conjunction with the chassis ID solid on, the BMC is booting Linux; control has
been passed from BMC uBoot to BMC Linux itself. It will be in this state for ~10-20 seconds.

Green, blinking ~1 Hz. Degraded: the system is operating in a degraded state although still functional, or is
operating in a redundant state but with an impending failure warning. System degraded:
● Redundancy loss such as power supply or fan. Applies only if the associated platform sub-system has
redundancy capabilities.


● Fan warning or failure when the number of


fully operational fans is less than minimum
number needed to cool the system.
● Non-critical threshold crossed –
Temperature (including HSBP temp),
voltage, input power to power supply, output
current for main power rail from power
supply and Processor Thermal Control
(Therm Ctrl) sensors.
● Power supply predictive failure occurred
while redundant power supply configuration
was present.
● Unable to use all of the installed memory
(more than 1 DIMM installed).
● Correctable Errors over a threshold and
migrating to a spare DIMM (memory
sparing). This indicates that the system no
longer has spared DIMMs (a redundancy
lost condition). Corresponding DIMM LED
lit.
● In mirrored configuration, when memory
mirroring takes place and system loses
memory redundancy.
● Battery failure.
● BMC executing in uBoot. (Indicated by
Chassis ID blinking at 3Hz). System in
degraded state (no manageability). BMC
uBoot is running but has not transferred
control to BMC Linux*. Server will be in this
state 6-8 seconds after BMC reset while it
pulls the Linux* image into flash.
● BMC Watchdog has reset the BMC.
● Power Unit sensor offset for configuration
error is asserted.
● HDD HSC is off-line or degraded.

Amber, blinking ~1 Hz. Non-critical: the system is operating in a degraded state with an impending failure
warning, although still functioning. Non-fatal alarm; the system is likely to fail:
● Critical threshold crossed: voltage, temperature (including HSBP temp), input power to power supply, output
current for main power rail from power supply, and PROCHOT (Therm Ctrl) sensors.
● VRD Hot asserted.
● Minimum number of fans to cool the system not present or failed.


● Hard drive fault.


● Power Unit Redundancy sensor –
Insufficient resources offset (indicates not
enough power supplies present).
● In non-sparing and non-mirroring mode if
the threshold of correctable errors is
crossed within the window.

Amber, solid on. Critical, non-recoverable: the system is halted. Fatal alarm; the system has failed or shut down:
● CPU CATERR signal asserted
● MSID mismatch detected (CATERR also
asserts for this case).
● CPU 1 is missing
● CPU Thermal Trip
● No power good – power fault
● DIMM failure when there is only 1 DIMM
present and hence no good memory
present.
● Runtime memory uncorrectable error in
nonredundant mode.
● DIMM Thermal Trip or equivalent
● SSB Thermal Trip or equivalent
● CPU ERR2 signal asserted
● BMC/Video memory test failed. (Chassis ID
shows blue/solid-on for this condition)
● Both uBoot BMC FW images are bad.
(Chassis ID shows blue/solid-on for this
condition)
● 240VA fault
● Fatal Error in processor initialization:
○ Processor family not identical
○ Processor model not identical
○ Processor core/thread counts not
identical
○ Processor cache size not identical
○ Unable to synchronize processor
frequency
○ Unable to synchronize QPI link
frequency
● Uncorrectable memory error in a non-
redundant mode


3211 Chassis Components


The 3211 compute server supports four compute modules/nodes, which are installed in the rear of the chassis
along with two 2130 W power supplies. Each compute node bay requires either an installed and powered-up
compute node tray or a dummy tray cover to maintain a proper thermal environment for the other running
compute nodes. If a compute node fails, remove the failed node and replace it with a dummy tray cover until the
new compute node is installed.
Figure 29. 3211 Chassis Components (callouts: compute node trays (node 3 top/node 1 bottom and node 4 top/node 2 bottom), power supply units (PSU 2 top, PSU 1 bottom), power bay enclosure, power distribution module (includes 2 boards, cables (not shown), a bus pair, and a support bracket for the power supplies), drive cage (shown rotated), backplane (attached to the drive cage))

Compute Node Tray


Each 1U compute node tray is designed to operate as a single system node within the 3211 chassis. Each
compute node plugs into the backplane in the 3211 chassis. Parts and components of the compute node are
shown in 3211 Compute Node Tray on page 41.
When adding or removing components from the compute node, make sure cables are routed correctly before
plugging the node back into the chassis. Use caution to make sure no cables or wires are pinched and that the
airflow from the fans is not blocked.


Figure 30. 3211 Compute Node Tray

Power docking board


The power docking board provides hot swap docking of 12V main power between the compute node motherboard
and the power supplies. The power docking board from each compute node plugs into the backplane. The bridge
board on each compute node also plugs into the HDD backplane.
Depending on the compute node model, one of the following power docking boards is used to enable hot swap
support of the compute node into or out of the 3211 chassis:
● Standard power docking board
● SAS/NVMe combination power docking board
The power docking board implements the following features:


● Main 12V hot swap connectivity between compute node tray and chassis power distribution boards.
● Current sensing of 12V main power for use with node manager.
● Three 8-pin dual rotor fan connectors.
● Four screws secure the power docking board to the compute node tray.
Figure 31. Power Docking Boards

Standard Power Docking Board callouts: fan control (2x7 pin); signal connector (to bridge board, 40 pin); fan 1 (8 pin); main power output (2x6 pin); main power input (12 pin); fan 2 (8 pin); fan 3 (8 pin).
Power Docking Board for 24x Drive Chassis callouts: fan control (2x7 pin); signal connector (to backplane, 40 pin); fan 1 (8 pin); main power output (2x6 pin); power blade connector (to backplane); fan 2 (8 pin); fan 3 (8 pin).

Bridge Board
The bridge board extends motherboard I/O signals by delivering SATA/SAS/NVMe signals, disk backplane
management signals, BMC SMBus signals, control panel signals, and various compute node specific signals. The
bridge board provides hot swap interconnect of all electrical signals to the chassis backplane (except for main
12V power). One bridge board is used on each compute node. The bridge board is secured to the compute node
tray with six screws through the side of the tray. A black, plastic mounting plate at the end of the bridge board
protects and separates the bridge board from the side of the tray.
There are different bridge board options to support the different drive options in the front of the server. Dual
processor system configurations are required to support a bridge board with 12G SAS support. The 12G SAS
bridge boards are not functional in a single processor system configuration.


Figure 32. Bridge Boards

6G SATA Bridge Board PN: 101805900
Provides data lanes for up to four SATA ports to the backplane. Used in 4x-2.5” and 12x-3.5” chassis.
Callouts: 2x 40-pin connector (to backplane); 2x 40-pin connector (to motherboard bridge board).

12G SAS/PCIe NVMe Combo Bridge Board PN: 101848000
Includes one embedded SAS controller to support up to six 12 Gb/s SAS ports and two x4 PCIe 3.0 lanes to support up to two PCIe SFF devices. This bridge board includes support for RAID levels 0, 1, and 10. RAID 5 can be supported with the addition of an optional RAID 5 key. Used in 24x-2.5” chassis.
Callouts: RAID key connector; 2x 40-pin connector (to backplane); 40-pin connector (to power docking board); 200-pin connector (to motherboard slot 3); 80-pin connector (to motherboard bridge board).

12G SAS Pass-through Bridge Board PN: 101848100
Provides I/O connectivity to support up to six 12 Gb/s SAS ports between an add-in Host Bus Adapter (HBA) card and the backplane. Used in 24x-2.5” chassis.
Callouts: cable cover; Slimline SAS cable connectors (to the RAID card); Slimline SAS connector; 40-pin connector (to power docking board); 2x 40-pin connector (to backplane); 200-pin connector (to motherboard slot 3); 80-pin connector (to motherboard bridge board).


System fans
The three dual rotor 40 x 40 x 56 mm system managed fans provide front-to-back airflow through the compute node.
Each fan is mounted within a metal housing on the compute node base. System fans are not held in place using
any type of fastener. They are tightly held in place by friction, using a set of four blue sleeved rubber grommets
that sit within cutouts in the chassis fan bracket.
Each system fan is cabled to separate 8-pin connectors on the power docking board. Fan control signals for each
system fan are then routed to the motherboard through a single 2x7 connector on the power docking board, which
is cabled to a matching fan controller header on the motherboard.
Each fan within the compute node can support variable speeds. Fan speed may change automatically when any
temperature sensor reading changes. Each fan connector within the node supplies a tachometer signal that
allows the baseboard management controller (BMC) to monitor the status of each fan. The fan speed control
algorithm is programmed into the motherboard’s integrated BMC.
Compute nodes do not support fan redundancy. Should a single rotor stop working, the following events will most
likely occur:
● The integrated BMC detects the fan failure.
● The event is logged to the system event log (SEL).
● The System Status LED on the server board and chassis front panel will turn flashing green, indicating a
system is operating at a degraded state and may fail at some point.
● In an effort to keep the compute node at or below pre-programmed maximum thermal limits monitored by the
BMC, the remaining functional system fans will operate at 100%.
Fans are not hot swappable. Should a fan fail, it should be replaced as soon as possible.
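
Because the BMC exposes each fan rotor's tachometer reading as a sensor, a failing fan can be detected out of band before servicing the node. The following minimal Python sketch assumes the standard ipmitool utility is available and that the BMC is reachable over its LAN interface; the host address and credentials are placeholders, and the output parsing follows ipmitool's usual "sdr type Fan" format rather than anything specific to this product.

# Minimal sketch: query fan sensors from the BMC with ipmitool and flag any
# fan sensor that does not report an "ok" status. Host/credentials are
# placeholders; adjust for the target system.
import subprocess

def check_fans(bmc_host: str, user: str, password: str) -> list[str]:
    """Return the names of fan sensors that do not report 'ok'."""
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", bmc_host, "-U", user, "-P", password,
        "sdr", "type", "Fan",
    ]
    output = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    failed = []
    for line in output.splitlines():
        # Typical line format: "Fan 1A | 30h | ok | 29.1 | 8000 RPM"
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 3 and fields[2].lower() != "ok":
            failed.append(fields[0])
    return failed

print(check_fans("10.0.0.10", "admin", "password"))  # placeholder BMC address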
Air duct
Each compute node requires the use of a transparent plastic air duct to direct airflow over critical areas within the
node. To maintain the necessary airflow, the air duct must be properly installed and seated before sliding the
compute node into the chassis.
Figure 33. Compute Node Air Duct

Air duct
(installed)

In system configurations where CPU 1 is configured with an integrated Intel® Omni-Path Host Fabric Interface, an
additional plastic air baffle is attached to the bottom side of the air duct. The air baffle must be attached to the air
duct to ensure proper airflow to the chipset and the Intel Fabric Through (IFT) carrier when installed.


Figure 34. Air Baffle Addition

Omni-Path air baffle

M.2 SSD Support


The 3211 compute node supports two M.2 SSD drives: one through an onboard connector on the motherboard and one located on the back side of riser card 2.
The motherboard includes an M.2 SSD connector capable of supporting a PCIe or SATA SSD up to 42 mm in
length. The sSATA controller embedded within the chipset provides this connector with the SATA port. PCIe x4
lanes are routed from the chipset and can be supported in single processor configurations. Circuitry within the
motherboard automatically detects the type of device plugged into the M.2 connector.
The M.2 connector located on riser card 2 is capable of supporting PCIe M.2 SSDs that conform to the 80 mm
length. PCIe x4 lanes to the M.2 connector are routed through the PCI Riser slot and are supported from CPU 1.
Figure 35. M.2 Connectors

Callouts: riser card 2 slot 2; M.2 connector on the back side of riser card 2 (80 mm); onboard M.2 connector (42 mm).


System Boards and Internal Cabling


Each node tray includes a series of boards that are described below. When adding or removing components from
the node tray, make sure cables are routed correctly before plugging the node tray back into the chassis. Use
caution to make sure no cables or wires are pinched and that the airflow from the fans is not blocked.
Refer to these two figures: System Board and Cabling Connections (8x, 12x drive chassis) on page 47 and
System Board and Cabling Connections (24x drive chassis) on page 48.
Power Distribution Module
The power distribution module is at the middle of the chassis and consists of two Power
Distribution Boards (PDBs) to support Common Redundant Power Supplies (CRPS). The
PDB provides +12V and +12VSTB output to the server chassis. Each PDB has two 2x9
power output cables that connect to the backplane and one 2x8 signal control cable for
power management that also connects to the backplane.
Backplane Interposer Board (BIB)
The backplane interposer board (BIB) is only used in 24 x 2.5” drive chassis as the
interposer between the backplane and the power docking board to connect the power and
miscellaneous signals from the backplane to the compute modules. Two backplane
interposer boards are pre-assembled with the 24 x 2.5” drive backplane in the server
chassis to support four compute nodes.
The BIB is a completely passive board, which contains connectors on both sides of the
board to connect to the backplane on the front side and the power docking board on the
back side. Two front panel connectors with the same signals routed are placed on the BIB
for easy of cabling to the front panel on each side of the chassis. Each connector integrates
the control signals of two compute nodes.
Power Interposer Board (PIB)
The power interposer board is only used in 24 x 2.5” drive chassis as an electrical interface
between the power distribution board and the backplane.
Backplane
Drives interface with the passive backplane through a blind mate connection when drives
are installed into a drive bay using hot-swap drive carriers.
Each compute module has a dedicated Hot Swap Controller (HSC) to manage three or four drives. There are a total of four sets of independent Programmable System On Chip (PSOC) devices on the backplane that function as HSCs for the four compute modules.
Backplane LED Support
Each drive tray includes separate LED indicators for drive Activity and drive Status. Light
pipes integrated into the drive tray assembly direct light emitted from LEDs mounted next to
each drive connector on the backplane to the drive tray faceplate, making them visible from
the front of the system.


Figure 36. System Board and Cabling Connections (8x, 12x drive chassis)

Callouts: chassis backplane (3.5” shown, drive connectors on other side); power supply input connectors; power control connector; node 1 main power connector; front control panel connector (nodes 1 and 3); front control panel connector (nodes 2 and 4); bridge board connector; 2x40 pin edge connector; power docking board; fan control connector; fan connectors; main power connectors to bottom power distribution board (PSU 1); main power to motherboard; fan module; bridge board (6G SATA); PMBus* cable; power supply cage; CPU 2; control signal connector; main power output connectors; power distribution board (PSU 2 top, PSU 1 bottom); CPU 1; SATA/PCIe M.2; power supply units (PSU 2 top, PSU 1 bottom); 2x40 pin edge connector; motherboard; node tray.


Figure 37. System Board and Cabling Connections (24x drive chassis)

Callouts: chassis backplane (24x 2.5” drives); signal connector (to BIB); power control connector (to PIB); 5V power; node 1 main power connector; backplane interposer board; front control panel connector (nodes 1 and 3); bridge board connector; 100 pin edge connector; power mate pins (4x to BIB); fan control connector; power interposer board (PIB); bridge board (12G SAS/PCIe SFF combo); main power connectors to bottom power distribution board (PSU 1); 5V; CPU 2; power supply cage; control signal connector; main power output connectors; power distribution board (PSU 2 top, PSU 1 bottom); CPU 1; RAID key connector; power supply units (PSU 2 top, PSU 1 bottom); 2x40 pin edge connector; motherboard; node tray.


CCS Environmental Requirements


The following table lists shipping, operating and storage environment requirements for Cray Cluster Systems.
Table 5. CCS Environmental Requirements

Environmental Factor Requirement


Operating
Operating temperature 41° to 95° F (5° to 35° C) [up to 5,000 ft (1,500 m)]
● Derate the maximum temperature (95° F [35° C]) by 1.8° F (1° C) per 1,000 ft (305 m) of altitude above 5,000 ft (1,525 m); see the sketch following this table.
● Temperature rate of change must not exceed 18° F (10° C) per hour

Operating humidity 8% to 80% non-condensing


Humidity rate of change must not exceed 10% relative humidity per hour

Operating altitude Up to 10,000 ft. (up to 3,050 m)


Shipping
Shipping temperature -40° to 140° F (-40° to 60° C)
Temperature rate of change must not exceed 36° F (20° C) per hour

Shipping humidity 10% to 95% non-condensing


Shipping altitude Up to 40,000 ft (up to 12,200 m)
Storage
Storage temperature 41° to 113° F (5° to 45° C)
Temperature rate of change must not exceed 36° F (20° C) per hour

Storage humidity 8% to 80% non-condensing


Storage altitude: Up to 40,000 ft (12,200 m)
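
The altitude derating rule in the operating temperature row reduces to a simple calculation. The Python sketch below is illustrative only (the function name and constants layout are not from this guide); it applies the 1° C per 1,000 ft (305 m) derate above 5,000 ft (1,525 m) stated above.

# Minimal sketch: derated maximum operating (inlet) temperature per the table
# above. Base limit is 35 C up to ~1,525 m; derate 1 C per 305 m above that,
# up to the 3,050 m operating altitude limit.
BASE_MAX_C = 35.0
BASE_ALTITUDE_M = 1525.0
DERATE_C_PER_M = 1.0 / 305.0
MAX_ALTITUDE_M = 3050.0

def max_operating_temp_c(altitude_m: float) -> float:
    """Return the derated maximum inlet temperature (deg C) at a given altitude."""
    if altitude_m > MAX_ALTITUDE_M:
        raise ValueError("above the supported operating altitude")
    excess_m = max(0.0, altitude_m - BASE_ALTITUDE_M)
    return BASE_MAX_C - excess_m * DERATE_C_PER_M

print(max_operating_temp_c(3050))  # 30.0 C at the 10,000 ft limit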


S2600WF Motherboard Description


The S2600WF motherboard is used in Cray CS500 3211 management servers. This section provides a high-level
overview of the features, functions, architecture, and support specifications of the S2600WF motherboard.
The S2600WF (Wildcat Pass) motherboard is offered with two different networking configurations:
● S2600WF0: Base S2600WF - no onboard LAN
● S2600WFT: Base S2600WF - dual 10GbE RJ45 ports
Figure 38. S2600WF Motherboard
Callouts: PCIe / SATA; support for OmniPath carrier card; support for dual hot swap power supply modules (1+0, 1+1, 2+0 configurations); support for Intel® SAS RAID card.

Table 6. S2600WF Motherboard Specifications

Feature Description
Processor support Support for two Intel Xeon Scalable family processors:
● Two LGA3647, (Socket-P0) processor sockets
● Maximum thermal design power (TDP) of 205 W (board only)

Memory ● 24 DIMM slots - 6 memory channels per processor / 2 DIMMs per channel
● Memory capacity: up to 1.5 TB
● Registered DDR4 (RDIMM) and load reduced DDR4 (LRDIMM)
● Memory data transfer rates:
○ Up to 2666 MT/s @ 1 DPC and 2 DPC (DIMMs per channel)
● DDR4 standard I/O voltage of 1.2V

Chipset Intel C624


PCIe support PCIe 3.0 (2.5, 5, 8 GT/s) backward compatible with PCIe Gen 1 and
Gen 2 devices
Riser card support Concurrent support for up to three PCIe 3.0 riser cards
● Riser 1 – x24 (CPU1 x16, CPU2 x8) – 2 and 3 slot riser card
options available
● Riser 2 – x24 (CPU2 x24) – 2 and 3 slot riser card options
available
● Riser 3 (2U systems only) – x12 (CPU 2) – 2 slot riser card
available

Onboard PCIe NVMe support Four PCIe OCuLink connectors


Onboard SATA support ● 12x SATA 6Gbps ports (6Gb/s, 3 Gb/s and 1.5Gb/s transfer rates
are supported)
○ Two single port 7-pin SATA connectors
○ Two M.2 connectors – SATA / PCIe*
○ Two 4-port mini-SAS HD (SFF-8643) connectors
● Embedded SATA Software RAID
○ Intel Rapid Storage RAID Technology (RSTe) 5.0
○ Intel Embedded Server RAID Technology 2 (ESRT2) 1.60 with
optional RAID 5 key support
NOTE: ESRT2 is only supported on S2600WFT and S2600WF0
boards
System Fan Support ● Six 10-pin managed system fan connectors
● Fans integrated in power supply modules


Feature Description
USB Support ● Three external USB 3.0 ports
● One internal Type-A USB 2.0 port
● One internal 20-pin connector for optional front panel USB 3.0
ports (2x)
● One Internal 10-pin connector for optional front panel USB 2.0
ports (2x)

Video ● Integrated 2D video controller


● 16MB of DDR4 video memory
● One DB-15 external connector
● One 14-Pin Internal connector for optional Front Panel Video
support

Serial Port Support ● One external RJ-45 Serial-A port connector


● One internal DH-10 Serial-B port header for optional front or rear
serial port

Security Intel Trusted Platform Module 2.0 (TPM) accessory option


Server Management ● Integrated BMC, IPMI 2.0 compliant
● Support for Intel server management software
● On-board dedicated RJ45 management port
● Support for Advanced Server Management features via a Remote
Management Module 4 Lite accessory option

S2600WF Component Locations


The S2600WFT board is shown below. Some features may not be present on the S2600WF0 and S2600WFQ
boards.


Figure 39. S2600WF Motherboard

External I/O Connectors


Figure 40. S2600WF External I/O Connectors

Callouts: NIC 1 (RJ45); NIC 2 (RJ45); video (DB15); serial A (RJ45); stacked 3-port USB 2.0/3.0; dedicated management port (RJ45).


RJ45 Connectors

LED Color LED State NIC State

Left Green Off LAN link not established

On LAN link is established

Blinking LAN transmit and receive activity

Right Amber On 1 Gbit/sec data rate

Green On 10 Gbit/sec data rate

Dedicated management port/NIC LEDs


This port is typically configured with a separate IP address to access the BMC. It provides a
port for monitoring, logging, recovery, and other maintenance functions independent of the
main CPU, BIOS, and OS. The management port is active with or without the RMM4 Lite
key installed. The dedicated management port and the two onboard NICs support a BMC
embedded web server and GUI.
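
Because the dedicated management port exposes the BMC's IPMI 2.0 interface over the LAN, basic health and power information can be collected out of band with a standard IPMI client such as ipmitool. The sketch below is one hedged Python example; the BMC address and account are placeholders, and the exact sensors returned depend on the installed FRU/SDR data.

# Minimal sketch: out-of-band queries against the BMC dedicated management
# port using ipmitool over the lanplus (IPMI 2.0) interface.
# BMC address and credentials below are placeholders.
import subprocess

BMC = {"host": "192.168.1.100", "user": "admin", "password": "password"}

def ipmi(*args: str) -> str:
    """Run an ipmitool command against the BMC and return its output."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC["host"], "-U", BMC["user"], "-P", BMC["password"], *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "status"))  # power state and fault indications
print(ipmi("sdr", "elist"))       # sensor readings (temperatures, fans, voltages)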

Jumper Settings
Jumpers can be used to modify the operation of the motherboard. They can be used to configure, protect, or
recover specific features of the motherboard. Jumpers create shorts between two pins to change the function of
the connector. The location of each jumper block is shown in the following figure. Pin 1 of each jumper block is
identified by the arrowhead (▼) silk screened on the board next to the pin.
Figure 41. Jumper Blocks

Callouts (each jumper block has a Default and an Enabled position): BIOS Default (J2B1); Password Clear (J2B2); BMC Force Update (J1C2); BIOS Recovery (J5A3); ME FW Update (J5A4).


BIOS Default
This jumper resets BIOS options, configured using the <F2> BIOS Setup Utility, back to their
original default factory settings. This jumper does not reset Administrator or User
passwords. In order to reset passwords, the Password Clear jumper must be used.
1. Move the “BIOS DFLT” jumper from pins 1 - 2 (default) to pins 2 - 3 (Set BIOS Defaults).
2. Wait 5 seconds then move the jumper back to pins 1 – 2.
3. During POST, access the <F2> BIOS Setup utility to configure and save desired BIOS
options.
The system will automatically power on after AC is applied to the system. The system time
and date may need to be reset. After resetting BIOS options using the BIOS Default jumper,
the Error Manager Screen in the <F2> BIOS Setup Utility will display two errors: 0012-
System RTC date/time not set and 5220-BIOS Settings reset to default settings.
Password Clear
This jumper causes both the User password and the Administrator password to be cleared if
they were set. The operator should be aware that this creates a security gap until
passwords have been installed again through the <F2> BIOS Setup utility. This is the only
method by which the Administrator and User passwords can be cleared unconditionally.
Other than this jumper, passwords can only be set or cleared by changing them explicitly in
BIOS Setup or by similar means. No method of resetting BIOS configuration settings to default values will affect either the Administrator or User passwords.
1. Move the “Password Clear” jumper from pins 1 – 2 (default) to pins 2 – 3 (password
clear position).
2. Power up the server and access the <F2> BIOS Setup utility.
3. Verify the password clear operation was successful by viewing the Error Manager
screen. Two errors should be logged: 5221-Passwords cleared by jumper and 5224-
Password clear jumper is set.
4. Exit the BIOS Setup utility and power down the server. For safety, remove the AC power
cords.
5. Move the “Password Clear” jumper back to pins 1 - 2 (default).
6. Power up the server.
7. Boot into <F2> BIOS Setup immediately, go to the Security tab and set the
Administrator and User passwords if you intend to use BIOS password protection.
BMC Force Update
The BMC Force Update jumper is used to put the BMC in Boot Recovery mode for a low-
level update. It causes the BMC to abort its normal boot process and stay in the boot loader
without executing any Linux code. This jumper should only be used if the BMC firmware has
become corrupted and requires re-installation.
1. Power down the system and remove the AC power cords. If the BMC FRC UPD jumper
is moved with AC power applied to the system, the BMC will not operate properly.
2. Move the “BMC FRC UPD” Jumper from pins 1 - 2 (default) to pins 2 - 3 (Force Update
position).
3. Boot the system into the EFI shell.


4. Change directories to the folder containing the update files.


5. Update the BMC firmware using the command: FWPIAUPD -u -bin -ni -b -o -pia -if=usb <file name.BIN>
6. After the update completes successfully, power down the system and remove the AC
power cords.
7. Move the “BMC FRC UPD” jumper back to pins 1-2 (default).
8. Boot the system into the EFI shell.
9. Change directories to the folder containing the update files.
10. Re-install the board/system SDR data by running the FRUSDR utility.
11. After the SDRs have been loaded, reboot the system.
BIOS Recovery
When the BIOS Recovery jumper block is moved from its default pin position (pins 1–2), the
system will boot using a backup BIOS image to the uEFI shell, where a standard BIOS
update can be performed. See the BIOS update instructions that are included with System
Update Package (SUP). This jumper is used when the system BIOS has become corrupted
and is non-functional, requiring a new BIOS image to be loaded on to the motherboard.
Important: The BIOS Recovery jumper is ONLY used to reinstall a BIOS image in the event
the BIOS has become corrupted. This jumper is NOT used when the BIOS is operating
normally and you need to upgrade the BIOS from one version to another.
1. Move the “BIOS Recovery” jumper from pins 1 – 2 (default) to pins 2 – 3 (BIOS
Recovery position).
2. Power down the system.
3. The system will automatically boot to the EFI shell. Update the BIOS using the standard
BIOS update instructions provided with the system update package.
4. After the BIOS update has successfully completed, power off the system.
5. Move the BIOS Recovery jumper back to pins 1 – 2 (default).
6. Power on the system and access the <F2> BIOS Setup utility.
7. Configure desired BIOS settings.
8. Press the <F10> key to save and exit the utility.
Management Engine (ME) Firmware Force Update
When the ME Firmware Force Update jumper is moved from its default position, the ME is
forced to operate in a reduced minimal operating capacity. This jumper should only be used
if the ME firmware has gotten corrupted and requires re-installation.
1. Power down the system and remove the AC power cords. If the ME FRC UPD jumper is
moved with AC power applied to the system, the ME will not operate properly.
2. Move the “ME FRC UPD” Jumper from pins 1 – 2 (default) to pins 2 – 3 (Force Update
position).
3. Boot to the EFI shell.
4. Change directories to the folder containing the update files.


5. Update the ME firmware using the command: iflash32 /u /ni <version#>_ME.cap
6. When the update has completed successfully, power off the system.
7. Move the “ME FRC UPD” jumper back to pins 1-2 (default).
8. Power on the system.

S2600WF Architecture
The architecture of the S2600WF motherboard is developed around the integrated features and functions of the
Intel® Xeon® Scalable family, the Intel® C620 Series Chipset family, Intel® Ethernet Controller X557, and the
ASPEED AST2500 Server Board Management Controller. Previous generations of Xeon E5-2600 processors are
not supported.
The following figure provides an overview of the S2600WF architecture, showing the features and interconnects
of each of the major subsystem components.
Figure 42. S2600WF Block Diagram


BIOS/Firmware Software Stack


The motherboard includes a system software stack that consists of the following components. Together, they
configure and manage features and functions of the server system.
● Motherboard BIOS
● Manageability Engine (ME)
● Baseboard Management Controller (BMC)
● Sensor Data Record (SDR/FRUSDR)
Caution: Motherboard BIOS packages are released as a combined System Firmware Update Package. Cray
does not support mixing-and-matching package components from different releases. Doing so could damage the
motherboard. Customers should NOT obtain BIOS packages directly from Intel, unless specifically instructed to
do so by Cray. Cray Engineering validates and supplies appropriate BIOS upgrade packages for download
through the CrayPort system.
Features and Functions. Many features and functions of the server system are managed jointly by the System
BIOS and the BMC firmware, including:
● IPMI Watchdog timer.
● Messaging support, including command bridging and user/session support.
● BIOS boot flags support.
● Event receiver device: The BMC receives and processes events from the BIOS.
● Serial-over-LAN (SOL).
● ACPI state synchronization: The BMC tracks ACPI state changes that are provided by the BIOS.
● Fault resilient booting (FRB): FRB2 is supported by the watchdog timer functionality.
● Front panel management: The BMC controls the system status LED and chassis ID LED. It supports secure lockout of certain front panel functionality and monitors button presses. The chassis ID LED is turned on using
a front panel button or a command.
● DIMM temperature monitoring: New sensors and improved acoustic management using closed-loop fan
control algorithm taking into account DIMM temperature readings.
● Integrated KVM.
● Integrated Remote Media Redirection.
● Sensor and SEL logging additions/enhancements (e.g., additional thermal monitoring capability).
● Embedded platform debug feature, which allows capture of detailed data for later engineering analysis.

Hot Keys Supported During POST


Certain "Hot Keys" are recognized during Power-On Self-Test (POST). A Hot Key is a key or key combination that
is recognized as an unprompted command input. The Hot Key is normally recognized even while other processing
is in progress. The BIOS recognizes a number of Hot Keys during POST. After the OS is booted, BIOS supported
Hot Keys are no longer recognized.
● <F2> - Enter the BIOS setup utility. To enter this utility, press the <F2> key during boot time when the logo
screen or POST diagnostic screen is displayed. Wait until the BIOS recognizes and activates the keyboard
before entering key strokes.


● <F6> - Pop-up BIOS boot menu. Displays all available boot devices. The boot order in the pop-up menu is not
the same as the boot order in the BIOS setup. The pop-up menu simply lists all of the available devices from
which the system can be booted, and allows a manual selection of the desired boot device.
● <F12> - Network boot
● <Esc> - Switch from logo screen to diagnostic screen
● <Pause> - Stop POST temporarily

Field Replaceable Unit (FRU) and Sensor Data Record (SDR) Data
The server/node chassis and motherboard need accurate FRU and SDR data to ensure the embedded platform
management system is able to monitor the appropriate sensors and operate the chassis/system with optimum
cooling and performance. The BMC automatically updates initial FRU/SDR configuration data after changes are
made to the server hardware configuration when any of the following components are added or removed:
● Processor
● Memory
● OCP Module
● Integrated SAS Raid module
● Power supply
● Fan
● Intel® Xeon Phi™ co-processor PCIe card
● Hot Swap Backplane
● Front Panel
The system may not operate with the best performance or best/appropriate cooling if the proper FRU and SDR
data is not installed.

Processor Socket Assembly


Unlike previous Intel® Xeon® E5-2600 (v3/v4) processors that use an integrated loading mechanism (ILM), the
S2600WF motherboard includes two Socket-P0 (LGA 3647) processor sockets that support Intel Xeon Scalable family processors. The socket has a rectangular shape, unlike the square shape of previous sockets. The LGA 3647 socket contains 3647 pins and provides I/O, power, and ground connections from the motherboard to the processor package. Because of the large size of the socket, it is made of two C-shaped halves. The two halves are not interchangeable and are distinguishable from one another by the keying and pin 1 insert colors: the left half keying insert is the same color as the body, and the right half inserts are white.

Important:
● Previous-generation Intel® Xeon® (v3/v4) processors and their supported CPU heatsinks are not supported
on the S2600WF.
● The LGA 3647 socket also supports the Intel Xeon Scalable processors with embedded Omni-Path Host
Fabric Interconnect (HFI).
● The pins inside the processor socket are extremely sensitive. No object except the processor package should
make contact with the pins inside the processor socket. A damaged socket pin may render the socket
inoperable, and will produce erroneous CPU or other system errors.


The parts of the socket assembly are described below and shown in the following figure.
Processor Heat Sink Module (PHM)
The PHM refers to the sub-assembly where the heatsink and processor are fixed together
by the processor package carrier prior to installation on the motherboard. The PHM is
properly installed when it is securely seated over the two Bolster Plate guide pins and it sits
evenly over the processor socket. Once the PHM is properly seated over the processor
socket assembly, the four heatsink Torx screws must be tightened in the order specified on
the label affixed to the top side of the heatsink.
Processor Package Carrier (Clip)
The carrier is an integral part of the PHM. The processor is inserted into the carrier, then the
heatsink with thermal interface material (TIM) are attached. The carrier has keying/
alignment features to align to cutouts on the processor package. These keying features
ensure the processor package snaps into the carrier in only one direction, and the carrier
can only be attached to the heatsink in one orientation.
The processor package snaps into the clips located on the inside ends of the package
carrier. The package carrier with attached processor package is then clipped to the
heatsink. Hook-like features on the four corners of the carrier grab onto the heatsink. All
three pieces are secured to the bolster plate with four captive nuts that are part of the
heatsink.
Important: Fabric supported processor models require the use of a Fabric Carrier Clip
which has a different design than the standard clip shown in the figure below. Attempting to
use a standard processor carrier clip with a Fabric supported processor may result in
component damage and result in improper assembly of the PHM.
Bolster Plate
The bolster plate is an integrated subassembly that includes two corner guide posts placed
at opposite corners and two springs that attach to the heatsink via captive screws. Two
Bolster Plate guide pins of different sizes allow the PHM to be installed only one way on
the processor socket assembly.
The springs are pulled upward as the heatsink is lowered and tightened in place, creating a
compressive force between socket and heatsink. The bolster plate provides extra rigidity,
helps maintain flatness to the motherboard, and provides a uniform load distribution across
all contact pins in the socket.
Heatsink
The heatsink is integrated into the PHM which is attached to the bolster plate springs by two
captive nuts on either side of the heatsink. The bolster plate is held in place around the
socket by the backplate. The heatsink's captive shoulder nuts screw onto the corner
standoffs and bolster plate studs. Depending on the manufacturer/model, some heatsinks
may have a label on the top showing the sequence for tightening and loosening the four
nuts.
There are two types of heatsinks, one for each of the processors. These heatsinks are NOT
interchangeable and must be installed on the correct processor/socket, front versus rear.


Figure 43. Processor Socket Assembly

Callouts: heatsink (1U, 80 x 107 mm); captive shoulder nuts (T-20 Torx); heatsink align and attach clips; Processor Heat Sink Module (PHM) assembly; alignment keys; processor package clip; package carrier (clip); key notches; Pin A1 alignment marks; processor package; integrated heat spreader (IHS); small guide post; Socket-P (LGA 3647); bolster plate assembly; compression spring; spring stud; large guide post; corner standoff (heatsink leveling stud).

Processor Socket Cover


To protect the pins within a processor socket from being damaged, a motherboard with no processor or heatsink
installed must have a plastic cover installed over each processor socket, as shown below.
Processor socket covers must be removed before processor installation.


Figure 44. Protective Processor Socket Cover

Processor socket cover

(Save for future use.)

Make sure socket cover snaps into place.

Small guide
post

Large guide post

S2600WF Memory Support and Population


The S2600WF has 24 DDR4 DIMM slots with 12 DIMMs per processor. Each installed processor supports 6
memory channels via two Integrated Memory Controllers (IMC). Memory channels are assigned an identifier letter
A thru F. On the S2600WF, each memory channel includes two DIMM slots.
Figure 45. Memory Architecture

The diagram shows CPU 1 and CPU 2, each with six DDR4 memory channels (channels 0 through 5), connected by two UPI links at 10.4 GT/s.

The server board supports the following:


● Only DDR4 DIMMs are supported
● Only Error Correction Code (ECC) enabled RDIMMs or LRDIMMs are supported


● Registered DIMMs (RDIMMs), Load Reduced DIMMs (LRDIMMs), and NVDIMMs (Non-Volatile Dual Inline
Memory Module):
● Only RDIMMs and LRDIMMs with integrated Thermal Sensor On Die (TSOD) are supported
● DIMM sizes of 4 GB, 8 GB, 16 GB, 32 GB, 64 GB and 128 GB depending on ranks and technology
● Maximum supported DIMM speeds will be dependent on the processor SKU installed in the system:
○ Intel® Xeon® Platinum 81xx processor – Max. 2666 MT/s (Mega Transfers / second)
○ Intel® Xeon® Gold 61xx processor – Max. 2666 MT/s
○ Intel® Xeon® Gold 51xx processor – Max. 2400 MT/s
○ Intel® Xeon® Silver processor – Max. 2400 MT/s
○ Intel® Xeon® Bronze processor – Max. 2133 MT/s
● DIMMs organized as Single Rank (SR), Dual Rank (DR), or Quad Rank (QR):
○ RDIMMS – Registered DIMMS – SR/DR/QR, ECC only
○ LRDIMMs – Load Reduced DIMMs – QR only, ECC only
○ Maximum of 8 logical ranks per channel
○ Maximum of 10 physical ranks loaded on a channel

Supported Memory
Figure 46. DDR4 RDIMM and LRDIMM Support

Speed (MT/s), voltage (V), slots per channel (SPC), and DIMMs per channel (DPC) – 2 slots per channel, all entries at 1.2 V.
Type / ranks per DIMM and data width / capacity at 4 Gb density / capacity at 8 Gb density / speed at 1 DPC / speed at 2 DPC:
RDIMM SRx4: 8 GB / 16 GB; 2666 / 2666
RDIMM SRx8: 4 GB / 8 GB; 2666 / 2666
RDIMM DRx8: 8 GB / 16 GB; 2666 / 2666
RDIMM DRx4: 16 GB / 32 GB; 2666 / 2666
RDIMM QRx4: N/A / 2H-64 GB; 2666 / 2666
RDIMM 3DS 8Rx4: N/A / 4H-128 GB; 2666 / 2666
LRDIMM QRx4: 32 GB / 64 GB; 2666 / 2666
LRDIMM QRx4: N/A / 2H-64 GB
LRDIMM 3DS 8Rx4: N/A / 4H-128 GB

Memory Slot Identification and Population Rules


A total of 24 DIMM slots are provided – 2 CPUs, 6 memory channels/CPU, 2 DIMMs per channel. The following
figure identifies all DIMM slots on the motherboard.


Although mixed DIMM configurations may be functional, Cray only supports and performs platform validation on
systems that are configured with identical DIMMs installed.
Figure 47. Memory Slot Layout

Callouts: CPU 1 DIMM slots and CPU 2 DIMM slots. Each CPU has twelve slots, labeled A1/A2 through F1/F2 (two slots per channel A–F).
● Each installed processor provides six channels of memory. Memory channels from each processor are
identified as Channels A – F.
● Each memory channel supports two DIMM slots, identified as slots 1 and 2.
○ Each DIMM slot is labeled by CPU #, memory channel, and slot # as shown in the following examples:
CPU1_DIMM_A2; CPU2_DIMM_A2
● DIMM population rules require that DIMMs within a channel be populated starting with the BLUE DIMM slot or
DIMM farthest from the processor in a “fill-farthest” approach.
● When only one DIMM is used for a given memory channel, it must be populated in the BLUE DIMM slot
(furthest from the CPU).
● Mixing of DDR4 DIMM Types (RDIMM, LRDIMM, 3DS RDIMM, 3DS LRDIMM, NVDIMM) within a channel
socket or across sockets produces a Fatal Error Halt during Memory Initialization.
● Mixing DIMMs of different frequencies and latencies is not supported within or across processor sockets. If a
mixed configuration is encountered, the BIOS will attempt to operate at the highest common frequency and
the lowest latency possible.
● When populating a Quad-rank DIMM with a Single- or Dual-rank DIMM in the same channel, the Quad-rank
DIMM must be populated farthest from the processor. Intel MRC will check for correct DIMM placement. A
maximum of 8 logical ranks can be used on any one channel, as well as a maximum of 10 physical ranks
loaded on a channel.
● In order to install 3 QR LRDIMMs on the same channel, they must be operated with Rank Multiplication as RM = 2; this makes each LRDIMM appear as a DR DIMM with ranks twice as large.
● The memory slots associated with a given processor are unavailable if the corresponding processor socket is
not populated.
● A processor may be installed without populating the associated memory slots, provided a second processor is
installed with associated memory. In this case, the memory is shared by the processors. However, the
platform suffers performance degradation and latency due to the remote memory.


● Processor sockets are self-contained and autonomous. However, all memory subsystem support (such as memory RAS and error management) in the BIOS setup is applied commonly across processor sockets.
● For multiple DIMMs (RDIMM, LRDIMM, 3DS RDIMM, 3DS LRDIMM) per channel, always populate DIMMs with higher electrical loading in slot 1, followed by slot 2.
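
The population rules above lend themselves to a simple pre-installation check. The following Python sketch is illustrative only and covers just a subset of the rules; the layout format, the assumption that slot 1 is the blue slot populated first, and the helper names are assumptions made for this example, not part of the BIOS or MRC.

# Minimal sketch: validate a proposed DIMM layout against a subset of the
# population rules above. Layout format and checks are illustrative only.
# layout = {(cpu, channel): [dimm_in_slot1, dimm_in_slot2]}, where each DIMM
# is a dict such as {"type": "RDIMM", "size_gb": 32} and None means empty.
# Slot 1 is taken here as the blue slot that must be populated first.
CHANNELS = "ABCDEF"

def validate_layout(layout: dict) -> list[str]:
    errors = []
    for (cpu, chan), slots in layout.items():
        if chan not in CHANNELS or len(slots) > 2:
            errors.append(f"CPU{cpu} channel {chan}: invalid channel or too many DIMMs")
            continue
        slot1, slot2 = (list(slots) + [None, None])[:2]
        # Fill-farthest rule: a lone DIMM must sit in the first-populated (blue) slot.
        if slot1 is None and slot2 is not None:
            errors.append(f"CPU{cpu} channel {chan}: populate the blue (farthest) slot first")
        # Mixing DIMM types within a channel is not allowed.
        types = {d["type"] for d in slots if d}
        if len(types) > 1:
            errors.append(f"CPU{cpu} channel {chan}: mixed DIMM types {sorted(types)}")
    return errors

example = {(1, "A"): [{"type": "RDIMM", "size_gb": 32}, None],
           (1, "B"): [None, {"type": "RDIMM", "size_gb": 32}]}
print(validate_layout(example))  # flags channel B for skipping the blue slot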


S2600BP Motherboard Description


The Intel® S2600BP (Buchanan Pass) motherboard is designed to support the Intel® Xeon® Scalable processor
family, previously codenamed “Skylake". Previous generation Xeon processors are not supported.
Figure 48. S2600BP Motherboard

Table 7. S2600BP Motherboard Specifications

Feature Description
Processor Support Support for two Intel Xeon Scalable processors:
● Two LGA 3647, (Socket-P0) processor sockets
● Maximum thermal design power (TDP) of 165W
● 40 lanes of Integrated PCIe® 3.0 low-latency I/O

Memory ● 16 DIMM slots in total across six memory channels


● Support for DDR4 DIMMs only. DDR3 DIMMs are not supported
● Registered DDR4 (RDIMM), Load Reduced DDR4 (LRDIMM)
● Memory DDR4 data transfer rates of 1600/1866/2133/2400/2666 MT/s
● Up to two DIMM slots per channel
● 1,536 GB memory (maximum)

Chipset Intel C621 Platform Controller Hub (PCH)


External I/O Connections ● One dedicated management port. RJ45, 100Mb/1Gb/10Gb, for remote server
management (Embedded dedicated NIC module from BMC)
● Stacked dual port USB 3.0 (port 0/1) connector
● Two RJ-45 10 GbE network interface controller (NIC) ports


Feature Description
Internal I/O Connectors ● Bridge slot to extend board I/O
● One 1x12 internal Video header
● One 1x4 IPMB header
● One internal USB 2.0 connector
● One 1x12 pin control panel header
● One DH-10 serial Port connector
● One 2x4 pin header for Intel® RMM4 Lite
● One 1x4 pin header for Storage Upgrade Key
● Two 2x12 pin header for Fabric Sideband CPU1/CPU2

PCIe Support PCIe 3.0 (2.5, 5, 8 GT/s)


Power Connections ● Two sets of 2x3 pin connectors (main power 1/2)
● One backup power 1x8 connector

System Fan Support ● One 2x7 pin fan control connector


● Three 1x8 pin system fan connectors

Riser Card Support ● One bridge board slot for board I/O expansion
● Riser Slot 1 (Rear Right Side)
● VGA Bracket is installed on Riser slot 1 as a standard
● Riser Slot 2 (Rear Left Side) providing x24 PCIe 3.0 lanes: CPU1
● Riser Slot 3 (Front Left Side) providing x24 PCIe 3.0 lanes: CPU2
● Riser Slot 4 (Middle Left Side) providing x16 PCIe 3.0 lanes: CPU2

Video ● Integrated on ASPEED AST2500 BMC


● 16MB of DDR4 video memory (512 total for BMC)

Onboard Storage ● One M.2 SATA/PCIe connector (42 mm drive support only)
Controllers and Options
● Four SATA 6 Gbps ports via Mini-SAS HD (SFF-8643) connector

Fabric Dual port Intel Omni-Path fabric via processor

or

Single port Omni-Path fabric via x16 Gen 3 PCIe adapter

Server Management ● Onboard ASPEED AST2500 BMC controller


● Support for optional Intel Remote Management Module 4 Lite (RMM4)
● Intel Light-Guided Diagnostics on field replaceable units
● Support for Intel System Management Software


Feature Description

● Support for Intel Intelligent Power Node Manager (requires a PMBus-compliant power supply)

RAID Support ● Intel Rapid Storage RAID Technology (RSTe) 5.0


● Intel Embedded Server RAID Technology 2 (ESRT2) with optional Intel RAID
C600 Upgrade Key to enable SATA RAID 5

S2600BP Component Locations


S2600BP component locations and connector types are shown in the following figure. The motherboard includes
a status and ID LED for identifying system status. These LEDs and the rear ports are described below.
Figure 49. S2600BP Component Locations

Callouts: RAID key; fan connectors 1, 2, and 3; system fan connector; riser slot 2, riser slot 3, and riser slot 4; front panel header; bridge board; POST code LEDs; beep LED; ID LED; status LED; M.2 SATA/PCIe; NIC 1 and NIC 2; DIMM slots (A1, A2, B1, C1, D1, D2, E1, F1 for each CPU); CPU 1 and CPU 2 Socket-P (LGA 3647) sockets; main power 1 and main power 2; fabric sideband CPU1 (J4B4) and CPU2 (J4B1); USB 3.0; USB 2.0; dedicated management port; external cooling 6 and 7; IPMB (J6B3); serial port; Mini-SAS-HD; backup power; battery; riser slot 1; VGA (12-pin ribbon cable).

Jumpers. The motherboard includes several jumper blocks that can be used to configure, protect, or recover
specific features of the motherboard. These jumper blocks are shown in the default position in the above figure.
Refer to S2600BP Configuration and Recovery Jumpers on page 78 for details.
POST code LEDs. There are several diagnostic (POST code and beep) LEDs to assist in troubleshooting
motherboard level issues.
Figure 50. S2600BP Rear Connectors

Callouts: status LED; chassis ID LED; beep LED; dedicated management port (IPMI/RJ45); USB 3.0 ports; NIC 1 (RJ45); NIC 2 (RJ45); POST code LEDs (8).

Dedicated management port. This port is typically configured with a separate IP address to access the BMC. It provides a port for monitoring, logging, recovery, and other maintenance functions independent of the main CPU, BIOS, and OS. The


management port is active with or without the RMM4 Lite key installed. The dedicated management port and the
two onboard NICs support a BMC embedded web server and GUI.
Dedicated management port/NIC LEDs. The link/activity LED (at the right of the connector) indicates network
connection when on, and transmit/receive activity when blinking. The speed LED (at the left of the connector)
indicates 10-Gbps operation when green, 1-Gbps operation when amber, and 100-Mbps when off. Figure 58
provides an overview of the LEDs.

LED Color LED State NIC State


Left Green Off LAN link not established
On LAN link is established
Blinking LAN transmit and receive activity
Right -- Off 100 Mbit/sec data rate is selected
Amber On 1 Gbit/sec data rate is selected.
Green On 10 Gbit/sec data rate is selected

Status LED. This bicolor LED lights green (status) or amber (fault) to indicate the current health of the server.
Green indicates normal or degraded operation. Amber indicates the hardware state and overrides the green
status. The states detected by the BMC and other controllers are included in the Status LED state. The
Status LED on the chassis front panel and this motherboard Status LED are tied together and show the same
state. When the server is powered down (transitions to the DC-off state or S5), the Integrated BMC is still on
standby power and retains the sensor and front panel status LED state established prior to the power-down event.
The Status LED displays a steady Amber color for all Fatal Errors that are detected during processor initialization.
A steady Amber LED indicates that an unrecoverable system failure condition has occurred.
A description of the Status LED states follows.

Color State Criticality Description

Off – System is not operating – Not ready:
● System is powered off (AC and/or DC)
● System is in Energy-using Product (EuP) Lot6 Off mode/regulation (see note 1)
● System is in S5 Soft-off state

Green – Solid on – OK: Indicates the system is running (in S0 state) and status is healthy. There are no system errors. AC power is present, the BMC has booted, and management is up and running. After a BMC reset with a chassis ID solid on, the BMC is booting Linux; control has been passed from BMC uBoot to BMC Linux, and the system remains in this state for approximately 10-20 seconds.

Green – Blinking (~1 Hz) – Degraded: System is operating in a degraded state although still functional, or system is operating in a redundant state but with an impending failure warning. System degraded:
● Power supply/fan redundancy loss
● Fan warning or failure when the number of fully operational fans is less than minimum number needed to cool the system
● Non-critical threshold crossed (temperature, voltage, power)
● Power supply failure
● Unable to use all installed memory
● Correctable memory errors beyond threshold
● Battery failure
● Error during BMC operation
● BMC watchdog has reset the BMC
● Power Unit sensor offset for configuration error is asserted
● HDD HSC is off-line or degraded

Amber – Solid on – Critical, non-recoverable – system is halted. Fatal alarm: System has failed or shut down.

Amber – Blinking (~1 Hz) – Non-critical: System is operating in a degraded state with an impending failure warning, although still functioning. Non-fatal alarm: System failure likely:
● Critical threshold crossed (temperature, voltage, power)
● VRD Hot asserted
● Minimum number of fans to cool the system not present or failed
● Hard drive fault
● Insufficient power from PSUs

1. The overall power consumption of the system is referred to as System Power States. There are a total of six different power states ranging from S0 (the system is completely powered ON and fully operational) to S5 (the system is completely powered OFF), with the states S1, S2, S3, and S4 referred to as sleeping states.

Chassis ID LED. This blue LED is used to visually identify a specific motherboard/server installed in the rack or
among several racks of servers. The ID button on front of the server/node toggles the state of the chassis ID LED.
There is no precedence or lock-out mechanism for the control sources. When a new request arrives, all previous
requests are terminated. For example, if the chassis ID LED is blinking and the ID button is pressed, then the ID
LED changes to solid on. If the button is pressed again with no intervening commands, the ID LED turns off.

LED State Definition


On (steady) The LED has a solid On state when it is activated through the ID button. It remains lit until
the button is pushed again or until an ipmitool chassis identify command is
received to change the state of the LED.
Blink (~1 Hz) The LED blinks after it is activated through a command.
Off Off. Pushing the ID button lights the ID LED.
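
As the table notes, the chassis ID LED can be driven either from the front panel ID button or remotely with an ipmitool chassis identify command. The short Python sketch below shows a typical remote invocation; the BMC address, credentials, and 30-second interval are placeholders for this example.

# Minimal sketch: light the chassis ID LED remotely through the BMC so a
# specific node can be located in the rack. Connection details are placeholders.
import subprocess

def identify_chassis(bmc_host: str, user: str, password: str, seconds: int = 30) -> None:
    """Turn on the chassis ID LED for the given number of seconds."""
    subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host, "-U", user, "-P", password,
         "chassis", "identify", str(seconds)],
        check=True,
    )

identify_chassis("192.168.1.100", "admin", "password", seconds=30)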

BMC Boot/Reset Status LED Indicators. During the BMC boot or BMC reset process, the System Status and
Chassis ID LEDs are used to indicate BMC boot process transitions and states. A BMC boot occurs when AC


power is first applied to the system. A BMC reset occurs after a BMC firmware update, after receiving a BMC cold
reset command, and upon a BMC watchdog initiated reset. These two LEDs define states during the BMC boot/
reset process.

BMC Boot/Reset State Chassis ID LED Status LED Condition


BMC/Video memory test Solid blue Solid amber Non-recoverable condition. Contact
failed Cray service for information on
replacing the motherboard.
Both universal bootloader Blink blue (6 Hz) Solid amber Non-recoverable condition. Contact
(u-Boot) images bad Cray service for information on
replacing the motherboard.
BMC in u-Boot Blink blue (3 Hz) Blink green (1 Hz) Blinking green indicates degraded
state (no manageability), blinking blue
indicates u-Boot is running but has not
transferred control to BMC Linux.
System remains in this state 6-8
seconds after BMC reset while it pulls
the Linux image into Flash.
BMC booting Linux Solid blue Solid green Solid green with solid blue after an AC
cycle/BMC reset, indicates control
passed from u-Boot to BMC Linux.
Remains in this state for ~10-20
seconds.
End of BMC boot/reset Off Solid green Indicates BMC Linux has booted and
process. Normal system manageability functionality is up and
operation running. Fault/Status LEDs operate
normally.

Beep LED. The S2600BP does not have an audible beep code component. Instead, it uses a beep code LED that
translates audible beep codes into visual light sequences. Prior to system video initialization, the BIOS uses these
Beep_LED codes to inform users of error conditions. A user-visible beep code is followed by the POST Progress
LEDs.

Beep_LED Error Message POST Progress Code Description


Sequence
1 blink USB device action N/A Short LED blink whenever USB device is
discovered in POST, or inserted or removed
during runtime.
1 long blink Intel® TXT security 0xAE, 0xAF System halted because Intel® Trusted
violation Execution Technology detected a potential
violation of system security.
3 blinks Memory error Multiple System halted because a fatal error related to
the memory was detected.
3 long blinks followed by 1 CPU mismatch error 0xE5, 0xE6 System halted because a fatal error related to the CPU family/core/cache mismatch was detected.


The following “Beep_LED” Codes are lighted during BIOS Recovery.
2 blinks Recovery started N/A Recovery boot has been initiated.
4 blinks Recovery failed N/A Recovery has failed. This typically happens so
quickly after recovery is initiated that it lights
like a 2-4 LED code.

The Integrated BMC may generate beep codes upon detection of failure conditions. Beep codes are translated
into visual LED sequences each time the problem is discovered, such as on each power-up attempt, but are not lit
continuously. Codes that are common across all Intel server boards and systems that use the same generation of
chipset are listed in the following table. Each digit in the code is represented by an LED lit/off sequence whose blink count is equal to the digit (see the sketch after the following table).

Code Associated Sensors Reason for Beep LED lit


1-5-2-1 No CPUs installed or first CPU CPU1 socket is empty, or sockets are populated incorrectly.
socket is empty. CPU1 must be populated before CPU2.

1-5-2-4 MSID Mismatch MSID mismatch occurs if a processor is installed into a system
board that has incompatible power capabilities.
1-5-4-2 Power fault DC power unexpectedly lost (power good dropout) – Power
unit sensors report power unit failure offset
1-5-4-4 Power control fault (power Power good assertion timeout – Power unit sensors report soft
good assertion timeout). power control failure offset
1-5-1-2 VR Watchdog Timer sensor VR controller DC power on sequence was not completed in
assertion time.
1-5-1-4 Power Supply Status The system does not power on or unexpectedly powers off
and a Power Supply Unit (PSU) is present that is an
incompatible model with one or more other PSUs in the
system.
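
Because each digit of a code such as 1-5-2-1 is shown as a burst of LED blinks equal to that digit, the visual sequence can be generated or recognized programmatically. The Python sketch below is illustrative only; the blink and pause durations are assumptions for the example, not values taken from the BMC firmware.

# Minimal sketch: expand a Beep_LED code such as "1-5-2-1" into the blink
# pattern described above (each digit = that many on/off blinks, with a pause
# between digits). Timing values are illustrative assumptions.
def beep_code_pattern(code: str, blink_s: float = 0.25, digit_gap_s: float = 1.0):
    """Yield (led_on, duration_s) steps for a dash-separated Beep_LED code."""
    digits = [int(d) for d in code.split("-")]
    for i, digit in enumerate(digits):
        for _ in range(digit):
            yield (True, blink_s)    # LED lit
            yield (False, blink_s)   # LED off
        if i < len(digits) - 1:
            yield (False, digit_gap_s)  # pause between digits

for step in beep_code_pattern("1-5-2-1"):  # "no CPUs installed" code from the table
    print(step)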

POST Code Diagnostic LEDs


There are two rows of four POST code diagnostic LEDs (eight total) on the back edge of the motherboard. These
LEDs are difficult to view through the back of the server/node chassis. During the system boot process, the BIOS
executes a number of platform configuration processes, each of which is assigned a specific hex POST code
number. As each configuration routine is started, the BIOS displays the given POST code to the POST code
LEDs. To assist in troubleshooting a system hang during the POST process, the LEDs display the last POST
event run before the hang.
During early POST, before system memory is available, serious errors that would prevent a system boot with data
integrity cause a System Halt with a beep code and a memory error code to be displayed through the POST Code
LEDs. Less fatal errors cause a POST Error Code to be generated as a major error. POST Error Codes are
displayed in the BIOS Setup error manager screen and are logged in the system event log (SEL), which can be
viewed with the selview utility. The BMC deactivates POST Code LEDs after POST is completed.


S2600BP Processor Socket Assembly


Unlike previous Intel® Xeon® E5-2600 (v3/v4) processors that use an integrated loading mechanism (ILM), the
S2600BP motherboard includes two Socket-P0 (LGA 3647) processor sockets that support Intel Xeon Scalable processors. The socket has a rectangular shape, unlike the square shape of previous sockets. The LGA 3647 socket contains 3647 pins and provides I/O, power, and ground connections from the motherboard to the processor package. Because of the large size of the socket, it is made of two C-shaped halves. The two halves are not interchangeable and are distinguishable from one another by the keying and pin 1 insert colors: the left half keying insert is the same color as the body, and the right half inserts are white.
The LGA 3647 socket also supports both the Intel Xeon Scalable processors with embedded Intel Omni-Path
Host Fabric Interconnect (HFI).
The parts of the socket assembly are described below and shown in the following figure.
Important: The pins inside the processor socket are extremely sensitive. No object except the processor package
should make contact with the pins inside the processor socket. A damaged socket pin may render the socket
inoperable, and will produce erroneous CPU or other system errors.
Processor Heat Sink Module (PHM)
The PHM refers to the sub-assembly where the heatsink and processor are fixed together
by the processor package carrier prior to installation on the motherboard.
Processor Package Carrier (Clip)
The carrier is an integral part of the PHM. The processor is inserted into the carrier, then the
heatsink with thermal interface material (TIM) is attached. The carrier has keying/
alignment features to align to cutouts on the processor package. These keying features
ensure the processor package snaps into the carrier in only one direction, and the carrier
can only be attached to the heatsink in one orientation.
The processor package snaps into the clips located on the inside ends of the package
carrier. The package carrier with attached processor package is then clipped to the
heatsink. Hook-like features on the four corners of the carrier grab onto the heatsink. All
three pieces are secured to the bolster plate with four captive nuts that are part of the
heatsink.
Important: Fabric supported processor models require the use of a Fabric Carrier Clip
which has a different design than the standard clip shown in the figure below. Attempting to
use a standard processor carrier clip with a Fabric supported processor may result in
component damage and result in improper assembly of the PHM.
Bolster Plate
The bolster plate is an integrated subassembly that includes two corner guide posts placed
at opposite corners and two springs that attach to the heatsink via captive screws. The
springs are pulled upward as the heatsink is lowered and tightened in place, creating a
compressive force between socket and heatsink. The bolster plate provides extra rigidity,
helps maintain flatness to the motherboard, and provides a uniform load distribution across
all contact pins in the socket.
Heatsink
The heatsink is integrated into the PHM which is attached to the bolster plate springs by two
captive nuts on either side of the heatsink. The bolster plate is held in place around the
socket by the backplate. The heatsink's captive shoulder nuts screw onto the corner
standoffs and bolster plate studs. Depending on the manufacturer/model, some heatsinks
may have a label on the top showing the sequence for tightening and loosening the four
nuts.


There are two types of heatsinks, one for each of the processors. These heatsinks are NOT
interchangeable and must be installed on the correct processor/socket, front versus rear.
Figure 51. Processor Socket Assembly
(The figure shows the heatsink (1U, 80 x 107 mm) with T-20 Torx captive shoulder nuts and align-and-attach clips; the processor package carrier (clip) with alignment keys and key notches; the processor package with its pin A1 alignment marks and integrated heat spreader (IHS); and the Socket-P (LGA 3647) bolster plate assembly with compression springs, spring studs, small and large guide posts, corner standoffs (heatsink leveling studs), and the bolster plate insulator.)


The 3211 compute nodes include a bolster insulator plate on CPU 1 to prevent potential
contact between a PCIe add-in card (when installed) and the metal bolster plate assembly.
The insulator should be removed only when installing a processor SKU that supports the
Intel® Omni-Path host fabric interconnect (HFI), and it must be reinstalled after the setup is
completed. Do not operate the compute node without the insulator when it is configured
with a PCIe add-in card; doing so may critically damage the PCIe add-in card, the
motherboard, or both.


Figure 52. Bolster Insulator Plate for CPU1
(The figure shows the bolster plate insulator fitted over the bolster plate and processor socket.)

S2600BP Architecture
The architecture of Intel® Server Board S2600BP is developed around the integrated features and functions of
the Intel® Xeon® Scalable processor family, the Intel® C621 Series Chipset family, Intel® Ethernet Controller
X550, and the ASPEED* AST2500* Server Board Management Controller.
The following figure provides an overview of the S2600BP architecture, showing the features and interconnects of
each of the major subsystem components.


Figure 53. S2600BP Block Diagram
(The block diagram shows two Intel Xeon Scalable processors connected by two UPI links at 10.4 GT/s, each with six DDR4 memory channels (A through F). Processor PCIe 3.0 x16 (32 GB/s) and x8 (16 GB/s) links feed riser slots 1 through 4, and a PCIe 3.0 x4 (8 GB/s) link feeds the two Intel X550 10 GbE NICs, which also provide NC-SI (RMII) sideband ports to the BMC. A DMI 3.0 x4 (16 GB/s) link connects the processor to the Intel C621 chipset, which provides SATA 6 Gb/s ports 0-3 on a Mini-SAS HD connector, SATA ports 4-7, sSATA to the M.2 SATA/PCIe connector, and USB 2.0/3.0 ports. The ASPEED AST2500 BMC connects to the chipset through PCIe 2.0 x1, USB 2.0, and eSPI links, drives the dedicated maintenance port through a 1 GbE PHY (RGMII Port A), provides video, an internal 2x5 serial port, fan PWM/tach control, SGPIO through a PLD, and BMC SMBuses, and uses 512 MB of DDR4 memory plus a 32 MB BMC firmware flash on a 50 MHz SPI bus.)

S2600BP Processor Population Rules


Although the S2600BP motherboard supports using different processors on each socket, Cray performs platform
validation only on systems that are configured with identical processors. For optimal system performance and
reliability, install identical processors.
If needed, the S2600BP may operate with one processor in the CPU 1 socket. However, some board features
may not be functional if a second processor is not installed. Riser slots 3 and 4 can be used only in dual
processor configurations.
When two processors are installed, both processors must:
● be of the same processor family,
● have the same number of cores,
● and have the same cache sizes for all levels of processor cache memory.
Processors with different core frequencies can be mixed in a system, provided the preceding rules are met. If this
condition is detected, all processor core frequencies are set to the lowest common denominator (the highest
speed common to all installed processors) and an error is reported.


Processors that have different Intel® UltraPath Interconnect (UPI) link frequencies may operate together if they are
otherwise compatible and a common link frequency can be selected. The common link frequency is the highest
link frequency that all installed processors can achieve.
Processor steppings within a common processor family can be mixed as long as the combination is listed in the
processor specification updates published by Intel Corporation.
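The frequency-matching behavior described above can be illustrated with a short sketch. This is not board firmware; it is a minimal, hypothetical model that assumes each processor is described only by its rated core frequency and the set of UPI link frequencies it supports.

# Hypothetical illustration of the frequency-matching rules above;
# not actual BIOS or firmware logic.

def common_core_frequency(core_freqs_mhz):
    """All cores run at the highest speed common to every installed
    processor, i.e. the lowest of the rated core frequencies."""
    return min(core_freqs_mhz)

def common_upi_frequency(supported_upi_gts):
    """Pick the highest UPI link frequency supported by every
    installed processor; return None if no common frequency exists."""
    common = set.intersection(*(set(s) for s in supported_upi_gts))
    return max(common) if common else None

# Example: two processors with different ratings.
print(common_core_frequency([2100, 2400]))          # -> 2100
print(common_upi_frequency([[9.6, 10.4], [10.4]]))  # -> 10.4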

S2600BP Memory Support and Population Rules


Each processor includes two integrated memory controllers (IMC), each capable of supporting three DDR4
memory channels. Each memory channel is capable of supporting two DIMM slots. On the S2600BP, channels A
and D support two memory slots each, and channels B, C, E, and F support one memory slot each, for a possible
total of 16 DIMMs.
The processor IMC supports the following:
● For maximum memory performance, 12 DIMMs (one DIMM per channel) are recommended
● Registered DIMMs (RDIMMs), Load Reduced DIMMs (LRDIMMs) and LRDIMM 3DS are supported
● DIMMs of different types (RDIMM, LRDIMM) may not be mixed; doing so results in a Fatal Error during memory
initialization at the beginning of POST
● DIMMs using x4 or x8 DRAM technology, organized as Single Rank (SR), Dual Rank (DR), or Quad Rank (QR)
● Maximum of 8 logical ranks per channel
● Maximum of 10 physical ranks loaded on a channel
● DIMM sizes of 4 GB, 8 GB, 16 GB, 32 GB, 64 GB and 128 GB depending on ranks and technology
● DIMM speeds of 1600/1866/2133/2400/2666 MT/s
● Only Error Correction Code (ECC) enabled RDIMMs or LRDIMMs are supported
● Only RDIMMs and LRDIMMs with integrated Thermal Sensor On Die (TSOD) are supported

Memory Population Rules


Although mixed DIMM configurations are supported, Cray performs platform validation only on systems that are
configured with identical DIMMs installed. Each memory slot should be populated with identical DDR4 DIMMs.
● The memory channels for processor socket 1 and processor socket 2 are identified as the processor number
(CPU1 or CPU2) plus the channel letter A, B, C, D, E, or F.
● The memory slots associated with a given processor are unavailable if the corresponding processor socket is
not populated.
● The silk screened DIMM slot identifiers on the board provide information about the channel, and therefore the
processor to which they belong. For example, CPU1_DIMM_A1 is the first slot on Channel A on processor 1;
CPU2_DIMM_A1 is the first DIMM socket on Channel A on processor 2.
● A processor may be installed without populating its associated memory slots if a second processor is installed
along with its associated memory. In this case, the memory is shared by the processors; however, the platform
suffers performance degradation and added latency due to remote memory accesses.
● The S2600BP uses a “2-1-1” channel configuration: on a channel with two slots, populate the slot closest to the
processor first.
● Processor sockets are self-contained and autonomous. However, all memory subsystem support (such as
Memory RAS and Error Management) in the BIOS setup is applied commonly across processor sockets.
● Mixing DIMMs of different frequencies and latencies is not supported within or across processor sockets.
● A maximum of 8 logical ranks can be used on any one channel, as well as a maximum of 10 physical ranks
loaded on a channel.
● DIMM slot 1, the slot closest to the processor socket, must be populated first on a channel with two slots.
Remove factory-installed DIMM blanks only when populating the slot with memory. Intel MRC checks for correct
DIMM placement (a population-check sketch follows this list).
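The population rules above lend themselves to a simple illustration. The following is a hypothetical sketch, not a Cray- or Intel-provided tool; it assumes slot names follow the silk-screen convention (for example, CPU1_DIMM_A1) and checks only the slot-count and fill-order rules.

# Hypothetical DIMM population check for the rules above.
# Channels A and D have two slots; B, C, E, and F have one.
SLOTS_PER_CHANNEL = {"A": 2, "B": 1, "C": 1, "D": 2, "E": 1, "F": 1}

def check_population(populated_slots):
    """populated_slots: set of slot names such as 'CPU1_DIMM_A1'.
    Returns a list of rule violations; an empty list means the layout
    follows the population rules described in this section."""
    errors = []
    for slot in populated_slots:
        cpu, _, chan_slot = slot.partition("_DIMM_")
        channel, index = chan_slot[0], int(chan_slot[1:])
        if index > SLOTS_PER_CHANNEL[channel]:
            errors.append(f"{slot}: channel {channel} has no slot {index}")
        # On a two-slot channel, slot 1 (closest to the processor)
        # must be populated before slot 2.
        if index == 2 and f"{cpu}_DIMM_{channel}1" not in populated_slots:
            errors.append(f"{slot}: populate {channel}1 before {channel}2")
    return errors

# Example: filling A2 without A1 is flagged as a violation.
print(check_population({"CPU1_DIMM_A2", "CPU1_DIMM_B1"}))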

S2600BP Configuration and Recovery Jumpers


The motherboard has several 3-pin jumper blocks that can be used to configure, protect, or recover specific
features of the motherboard. Refer to S2600BP Component Locations on page 68 to locate each jumper block on
the motherboard. Pin 1 of each jumper block can be identified by the arrowhead (▼) silk screened next to the pin.
The default position for each jumper block is pins 1 and 2.

BMC Force Update (J6B3)


When performing a standard BMC firmware update procedure, the update utility places the BMC into an update
mode, allowing the firmware to load safely onto the flash device. In the unlikely event the BMC firmware update
process fails due to the BMC not being in the proper update state, the server board provides a BMC Force
Update jumper (J6B3) which will force the BMC into the proper update state. The following procedure should be
followed in the event the standard BMC firmware update process fails.
Normal BMC functionality is disabled when the BMC Force Update jumper is set to the enabled position. Never run
the server with the BMC Force Update jumper in this position. Use this jumper setting only when the standard
firmware update process fails. This jumper should remain in the default (disabled) position when the server is
running normally.
To perform a Force BMC Update, follow these steps:
1. Move the jumper (J6B3) from the default operating position (covering pins 1 and 2) to the enabled position
(covering pins 2 and 3).
2. Power on the server by pressing the power button on the front panel.
3. Perform the BMC firmware update procedure as documented in the Release Notes included in the given BMC
firmware update package. After successful completion of the firmware update process, the firmware update
utility may generate an error stating the BMC is still in update mode.
4. Power down the server.
5. Move the jumper from the enabled position (covering pins 2 and 3) to the disabled position (covering pins 1
and 2).
6. Power up the server.

ME Force Update (J4B1)


When this 3-pin jumper is set, it manually puts the ME firmware in update mode, which enables the user to update
ME firmware code when necessary.
Normal ME functionality is disabled when the Force ME Update jumper is set to the enabled position. Never run
the server with the ME Force Update jumper in this position. Use this jumper setting only
when the standard firmware update process fails. This jumper should remain in the default (disabled) position when
the server is running normally.
To perform a Force ME Update, follow these steps:
1. Move the jumper (J4B1) from the default operating position (covering pins 1 and 2) to the enabled position
(covering pins 2 and 3).
2. Power on the server by pressing the power button on the front panel.
3. Perform the ME firmware update procedure as documented in the Release Notes file that is included in the
given system update package.
4. Power down the server.
5. Move the jumper from the enabled position (covering pins 2 and 3) to the disabled position (covering pins 1
and 2).
6. Power up the server.

Password Clear (J4B2)


Set this 3-pin jumper to clear the passwords. The jumper clears both the User password and the Administrator
password if they are set. Be aware that this creates a security gap until the passwords are set again.
No method of resetting the BIOS configuration settings to their default values affects either the Administrator or
User password.
This is the only method by which the Administrator and User passwords can be cleared unconditionally. Other
than this jumper, passwords can only be set or cleared by changing them explicitly in BIOS Setup or by similar
means.
The recommended steps for clearing the User and Administrator passwords are:
1. Move the jumper (J4B2) from the default operating position (covering pins 1 and 2) to the enabled position
(covering pins 2 and 3).
2. Power on the server by pressing the power button on the front panel.
3. Boot into the BIOS Setup. Check the Error Manager tab for POST Error Codes:
● 5221 Passwords cleared by jumper
● 5224 Password clear jumper is set
4. Power down the server.
5. Move the jumper from the enabled position (covering pins 2 and 3) to the disabled position (covering pins 1
and 2).
6. Power up the server.
7. Strongly recommended: Boot into the BIOS Setup immediately, go to the Security tab and set the
Administrator and User passwords if you intend to use BIOS password protection.

BIOS Recovery Mode (J4B3)


If a system is completely unable to boot successfully to an OS, hangs during POST, or even hangs and fails to
start executing POST, it may be necessary to perform a BIOS Recovery procedure, which can replace a defective
copy of the Primary BIOS.


The BIOS introduces three mechanisms to start the BIOS recovery process, which is called Recovery Mode:
● The Recovery Mode Jumper causes the BIOS to boot in Recovery Mode.
● The Boot Block detects partial BIOS update and automatically boots in Recovery Mode.
● The BMC asserts Recovery Mode GPIO in case of partial BIOS update and FRB2 time-out.
The BIOS recovery takes place without any external media or mass storage device because, in Recovery Mode, it
uses the backup BIOS stored in the BIOS flash. The recovery procedure is included here for general reference;
however, if there is a conflict, the instructions in the BIOS Release Notes are the definitive version.
When the Recovery Mode jumper is set, the BIOS logs a “Recovery Start” event to the SEL, then loads and boots
the backup BIOS image from the BIOS flash itself. This process takes place before any video or console is
available. The system boots directly into the Shell, and a “Recovery Complete” event is logged to the SEL.
External media is required to store the BIOS update package, and the steps are the same as the normal BIOS
update procedure. After the update is complete, a message stating “BIOS has been updated successfully” is
displayed, indicating that the BIOS update process is finished. Switch the recovery jumper back to the normal
position and restart the system by performing a power cycle.
If the BIOS detects a partial BIOS update, or the BMC asserts the Recovery Mode GPIO, the BIOS boots in
Recovery Mode. The difference is that the BIOS boots to the Error Manager page of the BIOS Setup utility. From
the BIOS Setup utility, a boot device (Shell or Linux, for example) can be selected to perform the BIOS update
procedure in the Shell or OS environment.
Again, before starting to perform a Recovery Boot, be sure to check the BIOS Release Notes and verify the
Recovery procedure shown in the Release Notes.
The following steps demonstrate this recovery process:
1. Move the jumper (J4B3) from the default operating position (covering pins 1 and 2) to the BIOS Recovery
position (covering pins 2 and 3).
2. Power on the server.
3. The BIOS will load and boot with the backup BIOS image without any video or display.
4. When the compute module boots into the EFI shell directly, the BIOS recovery is successful.
5. Power off the server.
6. Move the jumper (J4B3) back to the normal position (covering pins 1 and 2).
7. Put the server back into the rack. A normal BIOS update can be performed if needed.

BIOS Default (J4B4)


This jumper causes the BIOS Setup settings to be reset to their default values. On previous generations of server
boards, this jumper has been referred to as “Clear CMOS”, or “Clear NVRAM”. Setting this jumper according to
the procedure below will clear all current contents of NVRAM variable storage, and then load the BIOS default
settings.
This jumper does not reset Administrator or User passwords. In order to reset passwords, the Password Clear
jumper must be used.
The recommended steps to reset to the BIOS defaults are:
1. Move the jumper from pins 1-2 to pins 2-3 momentarily. It is not necessary to leave the jumper in place while
rebooting.
2. Restore the jumper from pins 2-3 to the normal setting of pins 1-2.


3. Boot the system into Setup. Check the Error Manager tab, and you should see POST Error Codes:
● 0012 System RTC date/time not set
● 5220 BIOS Settings reset to default settings
4. Go to the Setup Main tab, and set the System Date and System Time to the correct current settings. Make
any other changes that are required in Setup, for example, the Boot Order.

S2600BP BIOS Features

Hot Keys Supported During POST


The BIOS-supported Hot Keys are recognized by the BIOS only during the system boot-time POST process. A
Hot Key is an unprompted command input; the operator is not prompted to press it. Once the POST process has
completed and handed the system boot process off to the OS, BIOS-supported Hot Keys are no longer
recognized.

Hot Key Combination    Function
<F2>                   Enter the BIOS Setup Utility
<F6>                   Pop-up BIOS Boot Menu
<F12>                  Network boot
<Esc>                  Switch from Logo Screen to Diagnostic Screen
<Pause>                Stop POST temporarily

BIOS Security Features


The motherboard BIOS supports the following system security options, which are designed to prevent
unauthorized system access or tampering with server/node settings:
● Password protection
● Front panel lockout
The <F2> BIOS Setup Utility, accessed during POST, includes a Security tab with options to configure passwords
and front panel lockout.


Figure 54. BIOS Setup Security Tab
(The Security tab lists the Administrator Password Status and User Password Status (both "Not Installed" by default), the Set Administrator Password and Set User Password entries, and the Power On Password and Front Panel Lockout options (both <Disabled> by default). The help text notes that the Administrator password is used if Power On Password is enabled and to control change access in BIOS Setup, that passwords are 1-14 characters, case sensitive, and may contain alphabetic, numeric, and special characters, that changes to this option take effect immediately, and that the Administrator password must be set in order to use the User account.)

Entering BIOS Setup


To enter the BIOS Setup Utility using a keyboard (or emulated keyboard), press the <F2> function key during boot
time when the POST Diagnostic Screen is displayed. If using a USB keyboard, it is important to wait until the
BIOS “discovers” the keyboard and beeps. Until the USB Controller has been initialized and the USB keyboard
activated, key presses will not be read by the system.
When the Setup Utility is entered, the Main screen is displayed initially. However, in the event that a serious error
occurs during POST, the system will enter the BIOS Setup Utility and display the Error Manager screen instead of
the Main screen.
Changing BIOS settings is done primarily through the BIOS Setup utility. After navigating through the menu
screens and making the desired changes, press <F10> to "Save and Exit" the utility. BIOS changes are saved and
the system reboots so that all changes take effect.

Password Setup
The BIOS uses passwords to prevent unauthorized access to the server. Passwords can restrict entry to the BIOS
Setup utility, restrict use of the Boot Device pop-up menu during POST, suppress automatic USB device
reordering, and prevent unauthorized system power on. It is strongly recommended that an Administrator
Password be set. A system with no Administrator password set allows anyone who has access to the server to
change BIOS settings.
An Administrator password must be set in order to set the User password.
The maximum length of a password is 14 characters. A password can be made up of a combination of
alphanumeric characters (a-z, A-Z, 0-9) and any of the following special characters:
!@#$%^&*()-_+=?


Passwords are case sensitive.


The Administrator and User passwords must be different from each other. If an attempt is made to enter the same
password for both, an error message is displayed and a different password must be entered. The use of strong
passwords is encouraged but not required. To meet the criteria for a strong password, the password must be at
least 8 characters long and must include at least one alphabetic character, one numeric character, and one
special character. If a weak password is entered, a warning message is displayed and the weak password is
accepted.
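The strong-password criteria above can be expressed as a small check. The following is an illustrative sketch only; it assumes the character classes and the 1-14 character limit described in this section and is not the BIOS's own validation code.

import string

# Special characters accepted by BIOS Setup, per this section.
SPECIAL = set("!@#$%^&*()-_+=?")

def classify_password(pw):
    """Return 'invalid', 'weak', or 'strong' according to the password
    rules described above (illustrative only)."""
    allowed = set(string.ascii_letters) | set(string.digits) | SPECIAL
    if not (1 <= len(pw) <= 14) or any(c not in allowed for c in pw):
        return "invalid"
    has_alpha = any(c.isalpha() for c in pw)
    has_digit = any(c.isdigit() for c in pw)
    has_special = any(c in SPECIAL for c in pw)
    # Strong: at least 8 characters with one of each character class.
    if len(pw) >= 8 and has_alpha and has_digit and has_special:
        return "strong"
    return "weak"

print(classify_password("admin1"))        # weak
print(classify_password("Sup3r+Secret"))  # strong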
Once set, a password can be cleared by changing it to a null string. This requires the Administrator password,
and must be done through BIOS Setup or other explicit means of changing the passwords. Clearing the
Administrator password will also clear the User password. Passwords can also be cleared by using the Password
Clear jumper on the motherboard. Resetting the BIOS configuration settings to default values (by any method)
has no effect on the Administrator and User passwords.
As a security measure, if a User or Administrator enters an incorrect password three times in a row during the
boot sequence, the system is placed into a halt state. A system reset is required to exit out of the halt state. This
feature makes it more difficult to guess or break a password.

System Administrator Password Rights


When the correct Administrator password is entered when prompted, the user has the ability to perform the
following actions:
● Access the <F2> BIOS Setup Utility
● Configure all BIOS setup options in the <F2> BIOS Setup Utility
● Clear both the Administrator and User passwords
● Access the <F6> Boot Menu during POST
If the Power On Password function is enabled in BIOS Setup, the BIOS will halt early in POST to request a
password (Administrator or User) before continuing POST.

Authorized System User Password Rights and Restrictions


When the correct User password is entered, the user has the ability to perform the following:
● Access the <F2> BIOS Setup Utility
● View, but not change any BIOS Setup options in the <F2> BIOS Setup Utility
● Modify System Time and Date in the BIOS Setup Utility
● If the Power On Password function is enabled in BIOS Setup, the BIOS will halt early in POST to request a
password (Administrator or User) before continuing POST
In addition to restricting access to most Setup fields to viewing only when a User password is entered, defining a
User password imposes restrictions on booting the system. No password is required simply to boot in the defined
boot order. However, the <F6> Boot pop-up menu prompts for a password and can be used only with the
Administrator password. Also, when a User password is defined, it suppresses the USB reordering that occurs, if
enabled, when a new USB boot device is attached to the system. A User is restricted to booting only in the boot
order defined in Setup by an Administrator.

Front Panel Lockout


If enabled in BIOS setup, this option disables the following front panel features:


● The OFF function of the Power button
● System Reset button
If [Enabled], the power and reset buttons on the server front panel are locked, and they must be controlled via a
system management interface.
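When Front Panel Lockout is enabled, power and reset are typically driven through the BMC instead. The following is a minimal sketch that assumes the open-source ipmitool command is available and that the node's BMC is reachable over IPMI-over-LAN; the BMC hostname and credentials shown are hypothetical placeholders.

import subprocess

BMC_HOST = "bmc-node01.example.com"  # hypothetical BMC address
BMC_USER = "admin"                   # hypothetical credentials
BMC_PASS = "password"

def chassis_power(action):
    """Issue a chassis power command ('status', 'on', 'off', 'cycle',
    or 'reset') to the BMC over IPMI-over-LAN using ipmitool."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS,
           "chassis", "power", action]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(chassis_power("status"))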
