Project Olympus Intel XSP Motherboard
Author:
Mark A. Shaw, Principal Hardware Engineering Manager, Microsoft
Revision History
Date Description
11/1/2017 Version 1.0
11/17/2017 Version 1.1
- Added Section 11 to describe PCB Stack-up
As of November 17, 2017, the following persons or entities have made this Specification available under the Open Web
Foundation Final Specification Agreement (OWFa 1.0), which is available at http://www.openwebfoundation.org/legal/the-owf-1-0-agreements/owfa-1-0
Microsoft Corporation.
You can review the signed copies of the Open Web Foundation Agreement Version 1.0 for this Specification at Project Olympus
License Agreements, which may also include additional parties to those listed above.
Your use of this Specification may be subject to other third party rights. THIS SPECIFICATION IS PROVIDED "AS IS." The
contributors expressly disclaim any warranties (express, implied, or otherwise), including implied warranties of merchantability,
non-infringement, fitness for a particular purpose, or title, related to the Specification. The entire risk as to implementing or
otherwise using the Specification is assumed by the Specification implementer and user. IN NO EVENT WILL ANY PARTY BE
LIABLE TO ANY OTHER PARTY FOR LOST PROFITS OR ANY FORM OF INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES OF ANY CHARACTER FROM ANY CAUSES OF ACTION OF ANY KIND WITH RESPECT TO THIS SPECIFICATION OR ITS
GOVERNING AGREEMENT, WHETHER BASED ON BREACH OF CONTRACT, TORT (INCLUDING NEGLIGENCE), OR OTHERWISE,
AND WHETHER OR NOT THE OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
CONTRIBUTORS AND LICENSORS OF THIS SPECIFICATION MAY HAVE MENTIONED CERTAIN TECHNOLOGIES THAT ARE MERELY
REFERENCED WITHIN THIS SPECIFICATION AND NOT LICENSED UNDER THE OWF CLA OR OWFa. THE FOLLOWING IS A LIST OF
MERELY REFERENCED TECHNOLOGY: INTELLIGENT PLATFORM MANAGEMENT INTERFACE (IPMI); I2C IS A TRADEMARK AND
TECHNOLOGY OF NXP SEMICONDUCTORS; EPYC IS A TRADEMARK AND TECHNOLOGY OF ADVANCED MICRO DEVICES INC.;
ASPEED AST 2400/2500 FAMILY PROCESSORS IS A TECHNOLOGY OF ASPEED TECHNOLOGY INC.; MOLEX NANOPITCH, NANO
PICOBLADE, AND MINI-FIT JR AND ASSOCIATED CONNECTORS ARE TRADEMARKS AND TECHNOLOGIES OF MOLEX LLC;
WINBOND IS A TRADEMARK OF WINBOND ELECTRONICS CORPORATION; NVLINK IS A TECHNOLOGY OF NVIDIA; INTEL XEON
SCALABLE PROCESSORS, INTEL QUICKASSIST TECHNOLOGY, INTEL HYPER-THREADING TECHNOLOGY, ENHANCED INTEL
SPEEDSTEP TECHNOLOGY, INTEL VIRTUALIZATION TECHNOLOGY, INTEL SERVER PLATFORM SERVICES, INTEL MANAGEABILITY
ENGINE, AND INTEL TRUSTED EXECUTION TECHNOLOGY ARE TRADEMARKS AND TECHNOLOGIES OF INTEL CORPORATION;
SITARA ARM CORTEX-A9 PROCESSOR IS A TRADEMARK AND TECHNOLOGY OF TEXAS INSTRUMENTS; GUIDE PINS FROM
PENCOM; BATTERIES FROM PANASONIC. IMPLEMENTATION OF THESE TECHNOLOGIES MAY BE SUBJECT TO THEIR OWN LEGAL
TERMS.
Contents
1 Project Olympus Specifications List................................................................................................................ 1
2 Overview........................................................................................................................................................ 1
3 Background .................................................................................................................................................... 2
4 Block Diagram ................................................................................................................................................ 2
5 Features ......................................................................................................................................................... 3
CPUs....................................................................................................................................................... 4
PCH ........................................................................................................................................................ 4
Flexible I/O Mapping .............................................................................................................................. 4
DIMMs ................................................................................................................................................... 5
PCIe Support........................................................................................................................................... 5
5.5.1 CPU PCIe Mapping ............................................................................................................................. 5
5.5.2 PCIe x8 Slots....................................................................................................................................... 6
5.5.3 PCIe x16 Slots..................................................................................................................................... 6
5.5.4 PCIe Cables ........................................................................................................................................ 6
5.5.5 M.2 Modules...................................................................................................................................... 6
5.5.6 Intel® QuickAssist Technology Support ............................................................................................... 7
PCIe/SATA Expansion.............................................................................................................................. 7
SATA Storage.......................................................................................................................................... 7
TPM Module........................................................................................................................................... 7
Management Subsystem ........................................................................................................................ 7
5.9.1 BMC................................................................................................................................................... 8
5.9.2 DRAM ................................................................................................................................................ 9
5.9.3 BMC Boot Flash.................................................................................................................................. 9
5.9.4 BIOS Flash .......................................................................................................................................... 9
5.9.5 1GbE PHY ........................................................................................................................................... 9
5.9.6 UARTS................................................................................................................................................ 9
5.9.7 PECI ................................................................................................................................................. 10
5.9.8 VGA ................................................................................................................................................. 10
5.9.9 I2C ................................................................................................................................................... 10
5.9.10 JTAG ............................................................................................................................................ 12
5.9.11 Jumpers ....................................................................................................................................... 12
LEDs ..................................................................................................................................................... 13
5.10.1 UID LED........................................................................................................................................ 13
5.10.2 Power Status LED ......................................................................................................................... 14
5.10.3 Attention LED............................................................................................................................... 14
5.10.4 PSU Status LEDs ........................................................................................................................... 14
Fan Control........................................................................................................................................... 14
6 Power Management .................................................................................................................................... 15
PWRBRK# ............................................................................................................................................. 18
7 Motherboard Layout .................................................................................................................................... 18
8 Serviceability................................................................................................................................................ 19
Debug features..................................................................................................................................... 19
LED Visibility ......................................................................................................................................... 20
9 Motherboard Interfaces............................................................................................................................... 20
PCIe x8 Connectors ............................................................................................................................... 20
OCuLink x8 Connector........................................................................................................................... 32
NCSI Connector..................................................................................................................................... 33
USB 2.0 Internal Header ....................................................................................................................... 34
Connector Quality................................................................................................................................. 36
10 Electrical Specifications................................................................................................................ 36
11 PCB Stack-up ................................................................................................................................ 37
12 Physical Specification ................................................................................................................... 38
13 Environmental ............................................................................................................................................. 39
14 Electromagnetic Interference Mitigation ..................................................................................................... 40
Table of Figures
Figure 1. Top Level Block Diagram ..................................................................................................................... 2
Figure 2. Management Block Diagram .............................................................................................................. 8
Figure 3. UART Block Diagram .......................................................................................................................... 10
Figure 4. I2C Block Diagram .............................................................................................................................. 11
Figure 5. JTAG Block Diagram ........................................................................................................................... 12
Figure 6. Power Management Block Diagram ................................................................................................ 15
Figure 7. PROCHOT Block Diagram .................................................................................................................. 18
Figure 8. PWRBRK# Block Diagram .................................................................................................................. 18
Figure 9. Motherboard Layout ......................................................................................................................... 19
Figure 10. SATA Power Connector Pin Numbering......................................................................................... 28
Figure 11. SATA Power Expansion Connector Pin Numbering ...................................................................... 29
Figure 12. Mini-Fit Connector Pin Numbering ................................................................................................ 30
Figure 13. Management Connector Pin Numbering ...................................................................................... 31
Figure 14. Management Expansion Connector Pin Numbering .................................................................... 31
Figure 15. OCuLink x8 Pin Numbering ............................................................................................................. 33
Figure 16. NCSI Connector Pin Numbering ..................................................................................................... 34
Figure 17. Internal USB Connector Pinout ...................................................................................................... 34
Figure 18. Fan Control Connector Pin Numbering.......................................................................................... 35
Figure 19. TPM Connector Pin Numbering...................................................................................................... 36
Figure 20. PCB Stackup ...................................................................................................................................... 38
Figure 21. Example Motherboard Drawing ..................................................................................................... 39
Table of Tables
Table 1. List of Specifications ............................................................................................................................. 1
Table 2. PCH Flex I/O Mapping........................................................................................................................... 4
Table 3. CPU PCIe Port Mapping ........................................................................................................................ 5
Table 4. Jumpers................................................................................................................................................ 12
Table 5. LEDs ...................................................................................................................................................... 13
Table 6. Power Status LED Description............................................................................................................ 14
Table 7. Attention LED Description .................................................................................................................. 14
Table 8. Slot ID Decode ..................................................................................................................................... 16
Table 9. PCIe x8 connector pinout ................................................................................................................... 21
Table 10. PCIe x16 connector pinout ............................................................................................................... 22
Table 11. PCIe Cable Connector Pinout ........................................................................................................... 25
Table 12. M.2 connector pinout....................................................................................................................... 26
Table 13. SATA Power Connector .................................................................................................................... 28
Table 14. SATA Power Expansion Connector Pinout ...................................................................................... 28
Table 15. 12V Power Connector ...................................................................................................................... 29
Table 16. Management Connector .................................................................................................................. 30
Table 17. Management Expansion Connector ................................................................................................ 31
Table 18. OCuLink x8 Connector Pinout .......................................................................................................... 32
Table 19. NCSI Connector ................................................................................................................................. 33
Table 20. Internal USB Connector .................................................................................................................... 34
Table 21. Fan Control Connector ..................................................................................................................... 34
Table 22. TPM Connector Pinout ..................................................................................................................... 35
Table 23. Input Power Requirements ............................................................................................................... 37
Table 24. Environmental Requirement ........................................................................................................... 39
1 Project Olympus Specifications List
Table 1. List of Specifications
Specification | Description
Project Olympus Server Rack Specification | Describes the mechanical rack hardware used in the system.
Project Olympus Server Mechanical Specification | Describes the mechanical structure for the server used in the system.
Project Olympus Universal Motherboard Specification | Describes the server motherboard general requirements.
Project Olympus PSU Specification | Describes the Power Supply Unit (PSU) used in the server.
Project Olympus Power Management Distribution Unit Specification | Describes the Power Management Distribution Unit (PMDU).
Project Olympus Rack Manager Specification | Describes the Rack Manager PCBA used in the PMDU.
This document is intended for designers and engineers who will be building servers for Project Olympus
systems.
2 Overview
This specification describes the Project Olympus Intel® Server Motherboard. It is an implementation-specific
specification under the Project Olympus Universal Motherboard Specification.
Refer to respective specifications for other elements of the Project Olympus system such as Power
Supply Unit (PSU), Rack Manager (RM), Power and Management Distribution Unit (PMDU), and Server
Rack.
This specification covers the block diagram, management subsystem, power management, FPGA Card
support, IO connectors, and physical specifications of the Server Motherboard.
3 Background
The server motherboard is the computational element of the server. The motherboard includes a full
server management solution and supports interfaces to integrated or rear-access 12V Power Supply
Units (PSUs).
The Server optionally interfaces to a rack-level Power and Management Distribution Unit (PMDU).
The PMDU provides power to the Server and interfaces to the Rack Manager (RM).
The motherboard design provides optimum front-cable access (cold aisle) for external IO such as
networking and storage as well as standard PCIe cards. This enables flexibility to support many
configurations.
4 Block Diagram
Figure 1 shows the baseline block diagram describing general requirements for the server motherboard.
5 Features
The motherboard includes support for the following features:
• Processor
• Memory
  o DIMM Type: Double data rate fourth generation (DDR4) Registered DIMM (RDIMM) with Error-Correcting Code (ECC)
• On-Board Devices
• Server Management
• System Firmware
  o Version, Vendor: Unified Extensible Firmware Interface (UEFI) 2.3.1, AMI (TBD)
• PCI-Express Expansion
• Networking
CPUs
The motherboard supports two Intel® Xeon® Scalable processors for all server class SKUs. The maximum
TDP supported is 205W.
PCH
The motherboard supports all SKUs of the Intel® C620 series chipset (PCH).
DIMMs
The motherboard supports 24 DDR4 RDIMMs with 12 RDIMMs per CPU socket. It supports all available
configurations for single, dual, and quad rank RDIMMs supported by the Intel® Xeon® Scalable Platform.
The DIMM pitch is 370 mils.
The motherboard is designed to support the following technologies but use of these technologies has
not been validated.
• LRDIMM
• 3DS RDIMM
• DDR4 NVDIMM with 12V support through the DIMM connector
PCIe Support
Table 3. CPU PCIe Port Mapping
CPU PCIe Bus Destination
0 DMI PCH
0 PE1(A-D) PCIe Slot #3
0 PE2(A-D) PCH PCIe Uplink (QAT)
0 PE3(A-B) PCIe Slot #1
0 PE3(C-D) PCIe Slot #2
1 DMI[0] BMC
1 PE1(A-D) PCIe Slot #4
1 PE2(A) M.2 Module #3
1 PE2(B) M.2 Module #4
1 PE2(C-D) OCuLink x8
1 PE3(A-D) PCIe Slot #5
All slots support bifurcation below 1x16 but utilize the standard PCIe connector pinout and do not
contain additional clocks. Additional clocks required for bifurcation below 1x16 must be handled with
buffer circuitry on the PCIe card.
• Standard M.2 connector mounted directly on the motherboard. The motherboard supports up
to four on-board M.2 modules.
• M.2 Riser Cards in Slots 1 and 2: PCIe Riser card supporting up to two M.2 modules in each slot.
Two of the modules are connected to a PCIe endpoint on the PCH. The other two modules are
connected to a PCIe endpoint on CPU1.
• Quad M.2 Carrier Card (OCP AVA): FHHL PCIe Card in standard PCIe format supporting up to
four M.2 modules.
For both motherboard and PCIe Card applications, the supported M.2 modules are 60mm, 80mm, and
110mm dual sided form factors (Type 2260, 2280, and 22110).
PCIe/SATA Expansion
The motherboard supports two MiniSAS-HD x4 connectors for optional PCIe or SATA expansion from
root ports on the PCH. The connectors are connected to Flex I/O PCIE_RP(19:16) and PCIE_RP(15:12) on
the PCH. The functionality of the ports (PCIe vs SATA) is controlled by two GPIO outputs of the BMC,
enabling each nibble to be configured independently.
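As a rough illustration of this arrangement, the sketch below (in C) drives two hypothetical BMC GPIO outputs, one per x4 nibble; the GPIO net names and the bmc_gpio_set() helper are placeholders for illustration only and are not defined by this specification.

```c
/*
 * Hedged sketch: each MiniSAS-HD x4 nibble is switched between PCIe and
 * SATA by its own BMC GPIO output, so the two ports can be configured
 * independently. Net names and bmc_gpio_set() are hypothetical.
 */
#include <stdio.h>

enum port_mode { MODE_SATA = 0, MODE_PCIE = 1 };

/* Hypothetical helper: drive one BMC GPIO output. */
static void bmc_gpio_set(const char *gpio_name, int value)
{
    printf("GPIO %s <= %d\n", gpio_name, value);
}

/* One select GPIO per x4 nibble allows mixed configurations,
 * e.g. one connector running PCIe while the other runs SATA. */
static void configure_flexio_nibbles(enum port_mode rp19_16, enum port_mode rp15_12)
{
    bmc_gpio_set("FLEXIO_SEL_RP19_16", rp19_16); /* hypothetical net name */
    bmc_gpio_set("FLEXIO_SEL_RP15_12", rp15_12); /* hypothetical net name */
}

int main(void)
{
    configure_flexio_nibbles(MODE_PCIE, MODE_SATA);
    return 0;
}
```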
SATA Storage
The motherboard supports cabling for up to 12 SATA storage drives. This is accomplished with four x1
SATA connectors connected to PCH sSATA ports and two MiniSAS-HD x4 connectors connected to PCH
Flex I/O ports described in section 5.6.
TPM Module
The motherboard includes a connector to support a TPM 2.0 module connected to the PCH SPI bus.
Management Subsystem
The Baseboard Management Controller (BMC) circuitry for the motherboard uses an ASPEED AST2400
family processor. This section describes the requirements for management of the motherboard.
Primary features include:
• Low pin count (LPC) connection to the chipset to support in-band management
• Out of band environmental controls for power and thermal management
• FRUID EEPROM for storage of manufacturing data and events (I2C)
• Thermal sensors for inlet and exhaust temperature monitoring (I2C)
• Power monitoring through the 12V Hot Swap Controller circuitry (I2C)
• Service LEDs
Figure 2 shows the management block diagram.
5.9.1 BMC
The design for the BMC is based on the ASPEED AST2400 family and supports either the AST1250 or the
AST2400 processor. Primary features include:
5.9.2 DRAM
The BMC supports a minimum of 256MB of DDR3 memory.
5.9.6 UARTS
The motherboard supports two debug UARTs connected to the BMC as follows:
Figure 3 shows the UART block diagram: the BMC (AST1250) provides both a BMC debug console UART and a host debug console UART.
5.9.7 PECI
The motherboard uses the PECI Host Controller inside the PCH to support integrated thermal monitoring
of the two CPU sockets. PECI is connected from the CPU to the BMC by default. The motherboard
contains BOM load options to connect PECI to the PCH.
5.9.8 VGA
The motherboard includes optional support for VGA assuming the AST2400 processor. To support VGA,
PCIe is provided from the DMI[0] PCIe port of CPU1.
5.9.9 I2C
The motherboard supports I2C devices available to the BMC and PCH. A block diagram of the I2C tree is
shown in Figure 4. A brief description of the entities is included below. Note that the addresses shown are
8-bit addresses with the R/W bit as the LSB set to 0 (for example, 0xA8 corresponds to the 7-bit address 1010100b, i.e., 0x54).
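As a worked example of this addressing convention, the following C snippet converts between the 8-bit form used in the diagram and the underlying 7-bit device address:

```c
/*
 * Worked example of the Figure 4 addressing convention: the diagram uses
 * 8-bit "write" addresses (7-bit device address shifted left by one,
 * R/W bit = 0 in the LSB).
 */
#include <stdint.h>
#include <stdio.h>

static uint8_t addr8_to_addr7(uint8_t addr8) { return addr8 >> 1; }
static uint8_t addr7_to_addr8_write(uint8_t addr7) { return (uint8_t)(addr7 << 1); }

int main(void)
{
    /* 0xA8 in the diagram corresponds to 7-bit address 0x54 (1010100b). */
    printf("8-bit 0x%02X -> 7-bit 0x%02X\n", 0xA8, addr8_to_addr7(0xA8));
    printf("7-bit 0x%02X -> 8-bit write 0x%02X\n", 0x54, addr7_to_addr8_write(0x54));
    return 0;
}
```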
Figure 4 shows the I2C block diagram. The BMC (AST1250) and PCH connect through multiple I2C/SMBus segments to the clock generator and clock buffer (9FGP204, DB1200Z/ZL), the CPU0, CPU1, and PCH voltage regulators, TMP421 thermal sensors, PSU1 and PSU2 (PMBus), the OCP NIC mezzanine, the MiniSAS connectors, and two TCA9548 multiplexers that fan out to the PCIe slots, the on-board M.2 modules, and the SFP+ ports, with debug headers on the segments.
5.9.9.4 Voltage Regulators
The motherboard includes I2C support for all the CPU and Memory Subsystem voltage regulators
enabling the BMC to monitor health of the individual power rails.
5.9.10 JTAG
The motherboard supports muxing of the JTAG master on the BMC to support multiple uses as follows.
Figure 5 shows the JTAG block diagram: the BMC (AST1250) JTAG master connects through a 1:2 mux, selected by a BMC GPIO, to either the CPLD JTAG slave or the FPGA JTAG slave in PCIe Slot #4.
5.9.11 Jumpers
The motherboard supports the jumpers listed in Table 4.
Table 4. Jumpers
Jumper Name | Ref Des | Status | Function
BMC Disable (default) | JP21 | 1-2 Normal (default); 2-3 Hold BMC in reset | Disables BMC (sets all pins to high impedance)
BMC Disable (backup) | TBD | High (not installed) | Disables BMC (holds BMC in reset; disconnected by 0 ohm resistor)
ME Recovery Mode | JP2 | 1-2 Normal (default); 2-3 Force ME update | Forces an Intel® Manageability Engine (ME) update
BIOS USB Recovery | JP12 | 1-2 BIOS USB flash recovery mode; 2-3 Normal (default) | Enables BIOS recovery via USB image update
BIOS A18 Top Swap | JP9 | 1-2 Normal (default); 2-3 Recovery BIOS flash | Swaps the BIOS image by booting from the other half of the BIOS flash
Clear CMOS | J13 | 1-2 Normal (default); 2-3 Clear RTC registers | Clears CMOS
Password Clear | JP11 | 1-2 Normal (default); 2-3 Password clear | Clears the password
LEDs
The following sections describe the light-emitting diodes (LEDs) used as indicators on the motherboard.
Table 5 lists the minimum LEDs required and provides a brief description. Greater detail for some LEDs
is included in subsequent sections below. The visible diameter, color (λ) and brightness requirements of
the LEDs are TBD. All LEDs are visible at the front of the motherboard (cold aisle).
Table 5. LEDs
LED Name Color Description
UID LED Blue Unit Identification LED
Attention LED Red Indicates that Server requires servicing
Power Status LED Amber/Green Indicates Power Status of the Server
SATA HDD Activity Green Indicates R/W activity to HDDs
Post Code Green Indicates the Boot status of the Server (Port 80)
Catastrophic Error Red Indicates that a CPU catastrophic error has occurred
BMC Heartbeat Green Blinks to indicate BMC is alive
GbE Port 0 Activity Green Indicates activity on 10GbE Port 0 (not supported for production)
GbE Port 0 Speed Green/Orange Green=high speed, Orange=Low speed (not supported for production)
PSU1 Status LED1 Green Status LED for PSU1
(P2010) • Solid Green = AC and DC Power Good
• Blinking Green = Battery Power Good
PSU1 Status LED2 Amber Status LED for PSU1
(P2010) • Solid Amber = Failure of PSU Phase
• Blinking Amber = Failure of 2 PSU Phases
PSU2 Status LED1 Green Status LED for PSU2 (optional)
(P2010) • Solid Green = AC and DC Power Good
• Blinking Green = Battery Power Good
PSU2 Status LED2 Amber Status LED for PSU2 (optional)
(P2010) • Solid Amber = Failure of PSU Phase
• Blinking Amber = Failure of 2 PSU Phases
5.10.2 Power Status LED
When a server is initially inserted, the Power Status LED shall turn amber if 12V is present at the output
of the hot swap controller. This assures that the 12V power is connected and present at the
motherboard and that the hot swap controller is enabled.
When the server management software turns on the system power (CPU/Memory/PCIe), the Power
Status LED turns green. Note that the power status LED may be driven by an analog resistor network
tied directly to a power rail and is not an indication of the health of the server. Table 6 describes the
operation of the Power Status LED.
Fan Control
The motherboard supports control of twelve 40mm fans located at the rear of the server assembly. Fan
control is divided between two connectors enabling two separate fan zones. Each connector supports
12V power, a single PWM, and six TACH signals for controlling up to 6 fans in a single zone.
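As a hedged illustration of fan monitoring over these TACH inputs, the C sketch below converts a tach pulse count to RPM. The two-pulses-per-revolution figure is a typical value assumed here; it is not stated in this specification.

```c
/*
 * Hedged sketch: converting a fan TACH reading to RPM. Most 40mm server
 * fans emit two tach pulses per revolution, but that figure is an
 * assumption, not a requirement of this specification.
 */
#include <stdio.h>

#define PULSES_PER_REV 2u   /* assumption: typical 2 pulses/revolution */

/* pulses counted over a fixed window (in milliseconds) -> RPM */
static unsigned tach_to_rpm(unsigned pulses, unsigned window_ms)
{
    if (window_ms == 0)
        return 0;
    return (pulses * 60000u) / (PULSES_PER_REV * window_ms);
}

int main(void)
{
    /* e.g., 400 pulses in a 1-second window -> 12000 RPM */
    printf("%u RPM\n", tach_to_rpm(400, 1000));
    return 0;
}
```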
6 Power Management
The motherboard provides a rear connector for interfacing the motherboard to a 12V PSU. The
motherboard also provides a separate rear management connector for enabling external control of
server power. A block diagram of the interface is shown in Figure 6.
Rack Management
The motherboard supports server control through the PMDU. The following describes the management
interfaces.
• PWR_EN# - Active low signal used to enable/disable power to the P2010 PSU. A 1K ohm
pulldown resistor is used on the motherboard to ensure a default low state (active) if the Rack
Manager is not present. This signal connects to the PS_ON# signal of the PSU. When in high
state (inactive), this signal disables output power from the P2010 PSU.
• SERVER_PRESENT# - Active low signal used to communicate physical presence of the server to
the Rack Manager. This signal should be tied to GND on the motherboard.
• SERVER_THROTTLE - Active high signal used to put the motherboard into a low power (power
cap) state. This signal should default low (inactive) if the Rack Manager is not present. This
signal is fanned out from the Rack Manager to multiple servers and therefore the circuit design
must support electrical isolation of this signal from the motherboard power planes.
• SLOT_ID[5:0] – Identifies the physical rack slot in which the server is installed. ID is hard set by
the PMDU. The ID decoding is shown below in Table 8.
• LR_SELECT – Spare signal. Used to differentiate between left and right slots for a dual-node
implementation.
Table 8. Slot ID Decode
PSU Management
The motherboard supports management of the P2010 PSU. Below is a description of the signals
supported by the PMDU.
• PS_ON# - Active low signal used to enable/disable power to the PSU. This signal is driven by the
PWR_EN# signal from the Rack Manager. A 1K ohm pulldown resistor is used on the
motherboard to ensure a default low state if the RM is not present.
• PSU_ALERT# - Active low signal used to alert the motherboard that a fault has occurred in the
PSU. Assertion of this signal by the PSU shall put the motherboard into a low power (PROCHOT)
state. This signal is also connected to the BMC for monitoring of PSU status.
• PMBUS – I2C interface to the PSU. This interface is used by the BMC to read the status of the
PSU (see the sketch after this list).
• STATUS_LED – Controls LED to provide visual indication of a PSU fault.
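The sketch below illustrates, in C, how BMC firmware might poll the PSU over this PMBus interface from Linux userspace. The I2C bus number and the 7-bit PSU address used here are assumptions made for illustration only; STATUS_WORD (command 0x79) is the standard PMBus status register.

```c
/*
 * Hedged sketch of polling the PSU over PMBus. The bus device path
 * (/dev/i2c-3) and the 7-bit PSU address (0x58) are assumptions.
 */
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    const uint8_t psu_addr7 = 0x58;         /* assumed 7-bit PMBus address */
    const uint8_t cmd_status_word = 0x79;   /* PMBus STATUS_WORD command */
    uint8_t buf[2];

    int fd = open("/dev/i2c-3", O_RDWR);    /* assumed BMC I2C bus */
    if (fd < 0 || ioctl(fd, I2C_SLAVE, psu_addr7) < 0) {
        perror("i2c setup");
        return 1;
    }
    /* Write the command byte, then read the 16-bit STATUS_WORD back
     * (PMBus word data is transferred low byte first). */
    if (write(fd, &cmd_status_word, 1) != 1 || read(fd, buf, 2) != 2) {
        perror("pmbus read");
        close(fd);
        return 1;
    }
    printf("STATUS_WORD = 0x%02X%02X\n", buf[1], buf[0]);
    close(fd);
    return 0;
}
```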
Power Capping
The motherboard supports throttling of the processors using the Fast PROCHOT mechanism based on
monitoring of the motherboard's input voltage and power. A block diagram detailing these triggers is
shown in Figure 7.
The following triggers are monitored by the BMC and PCH and can directly generate PROCHOT# events.
Each trigger is filtered by the CPLD, which ensures that any trigger event generates a PROCHOT# pulse of
at least 100 ms (an illustrative timing sketch follows the list below).
• Undervoltage – A comparator monitors the 12V output of the HSC and asserts if this voltage falls
below 11.5V.
• Overcurrent Alert – The HSC monitors the input current and asserts the trigger if the input
current exceeds 93A.
• Overcurrent Protect – The HSC monitors the input current and disables power to the
motherboard if the current exceeds 95A.
• HSC ALERT #1 and #2 – The HSC provides two programmable alerts. These alerts are spare
inputs and are disabled by default.
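The following C model is purely illustrative of the CPLD pulse-stretching behavior described above (the real implementation is programmable logic, not firmware): any momentary trigger produces a PROCHOT# assertion of at least 100 ms.

```c
/*
 * Illustrative model of the CPLD behavior: a momentary trigger still
 * yields a PROCHOT# pulse of at least 100 ms.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PROCHOT_MIN_MS 100u

struct prochot_state {
    uint32_t hold_ms;   /* remaining time PROCHOT# must stay asserted */
};

/* Call once per millisecond tick with the OR of all enabled triggers
 * (undervoltage, overcurrent alert, HSC alerts, throttle inputs).
 * Returns true while PROCHOT# should be asserted (active low on the wire). */
static bool prochot_tick_1ms(struct prochot_state *s, bool any_trigger)
{
    if (any_trigger)
        s->hold_ms = PROCHOT_MIN_MS;     /* (re)arm the 100 ms stretcher */
    else if (s->hold_ms > 0)
        s->hold_ms--;
    return any_trigger || s->hold_ms > 0;
}

int main(void)
{
    struct prochot_state s = { 0 };
    unsigned asserted_ms = 0;
    for (unsigned t = 0; t < 300; t++)
        asserted_ms += prochot_tick_1ms(&s, t == 5) ? 1 : 0; /* 1 ms glitch */
    printf("PROCHOT# asserted for %u ms\n", asserted_ms);    /* prints 100 */
    return 0;
}
```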
Note that the CPU voltage regulators also can generate PROCHOT triggers to the CPUs.
The motherboard enables power capping of the server from different trigger sources. Assertion of
either of the following causes the motherboard to assert PROCHOT and the PCH to initiate power
capping of the server.
• RM THROTTLE# - Throttle signal driven by the Rack Manager indicating that the rack has
exceeded its power limit.
• PSU ALERT# - Alert signal driven by the PSU. Assertion indicates an over-current event or that
the Olympus PSU has transitioned its power source from AC to battery backup.
• FM_THROTTLE# - Test signal that allows BMC to assert power cap
Figure 7 shows the PROCHOT block diagram: the CPLD combines the fast PROCHOT triggers (the 12V undervoltage monitor, the HSC overcurrent monitor, and HSC ALERT#1/ALERT#2) with the power-cap triggers (RM_THROTTLE#, PSU_ALERT#, and FM_THROTTLE#), applies the 100 ms pulse generator, and drives PROCHOT# to CPU0 and CPU1. The CPU0/CPU1 voltage regulator VRHOT outputs and the BMC and PCH enable and monitoring signals (OC_DETECT#, UV_ALERT#, PMB_ALERT#, FM_THROTTLE_IN#) also connect to this logic.
PWRBRK#
The motherboard supports the Emergency Power Reduction mechanism (PWRBRK#) for the x16 PCIe slots.
The primary purpose is to provide a power reduction mechanism for GPGPU cards as part of the throttle
and power capping strategy. Figure 8 shows the block diagram for PWRBRK#. PWRBRK# can be
triggered by either the RM_THROTTLE# or PSU2_ALERT#. Logic for PWRBRK# is contained in the CPLD.
The BMC controls enable/disable monitoring of the two triggers and can also force an event.
7 Motherboard Layout
Figure 9 shows a representative layout of the motherboard with the approximate location of critical
components and connectors.
8 Serviceability
Debug features
The motherboard supports the following debug features:
• I2C Debug headers on all I2C ports. Headers are compatible with standard I2C Protocol
Analyzers such as Beagle or Aardvark.
• Debug connector on all UARTS. 4-pin 2.54mm headers.
• POST LEDs.
• BIOS Debug Support Including:
o Two socketed BIOS Flash (socket to be removed for production)
o BIOS recovery jumper connected to PCH GPIO
o Flash security override driven from BMC GPIO (to PCH)
• CPU and PCH XDP60 support.
• Intel® Trace Hub and Direct Connect Interface (DCI)
o DCI is supported in an open chassis through XDP60 connector and Intel® ITP-XDP3BR.
o DCI Boundary Scan Side Band (BSSB) is supported in a closed chassis through USB3.0 and
Intel® SVT Closed Chassis Adapter.
o DCI USB3 is supported through front panel USB3 connector and USB3 Debug Cable
• Two USB 3.0 debug ports connected to PCH USB1 Port 2 and Port 3 available at the front of
motherboard (cold aisle)
• PCH Recovery Mode Jumper
• HW jumper to enable BIOS serial debug output
LED Visibility
The following LEDs, determined to be important for communicating status to service personnel, are
visible at the front (cold aisle) of the motherboard.
• UID LED
• Power Status LED
• Attention LED
9 Motherboard Interfaces
This section describes the connector interfaces to the motherboard.
PCIe x8 Connectors
The PCIe x8 connector interfaces are designed to support a standard PCIe x8 card as well as the M.2
riser card. The M.2 riser card is a custom edge card PCA that supports two M.2 SSD Modules (NGFF
form factor cards) in the SSD Socket 3 format per the PCI Express M.2 Specification. To support two M.2
modules, the PCIe connector interface is altered to support two x4 PCIe Gen3 interfaces as well as the
SSD-specific signals per the PCI Express M.2 Specification. Table 9 describes the connector pinout. The signals
satisfy the electrical requirements of the PCIe Card Electromechanical Specification. The following is a
list of pin assignment deviations from that specification needed to support two M.2 modules. These
signals are highlighted in red.
• SUSCLK is assigned to pin A6 replacing JTAG TDI pin. Support for SUSCLK is not required.
• LINKWIDTH is assigned to pin A17 replacing PRSNT#2. Enables auto-detection of 2x4 M.2
Interposer or 1x8 standard PCIe card. The design is not required to support PCIe x1 cards.
• SMBCLK2 and SMBDAT2 are assigned to pins A7/A8, replacing the TDO and TMS pins.
26 GND Ground PERN(2) Differential pair
27 PETP(3) Transmitter Lane 3, GND Ground
28 PETN(3) Differential pair GND Ground
29 GND Ground PERP(3) Receiver Lane 3,
30 RSVD Reserved PERN(3) Differential pair
31 PRSNT#2 Presence Detect GND Ground
32 GND Ground REFCLK2+ Reference Clock
33 PETP(4) Transmitter Lane 4, REFCLK2- Differential pair
36 GND Ground PERN(4) Differential pair
37 PETP(5) Transmitter Lane 5, GND Ground
38 PETN(5) Differential pair GND Ground
39 GND Ground PERP(5) Receiver Lane 5,
40 GND Ground PERN(5) Differential pair
41 PETP(6) Transmitter Lane 6, GND Ground
42 PETN(6) Differential pair GND Ground
43 GND Ground PERP(6) Receiver Lane 6,
44 GND Ground PERN(6) Differential pair
45 PETP(7) Transmitter Lane 7, GND Ground
46 PETN(7) Differential pair GND Ground
47 GND Ground PERP(7) Receiver Lane 7,
48 PRSNT#2 Hot plug detect PERN(7) Differential pair
49 GND Ground GND Ground
50 PETP(8) Transmitter Lane 8, RSVD Reserved
51 PETN(8) Differential pair GND Ground
52 GND Ground PERP(8) Receiver Lane 8,
53 GND Ground PERN(8) Differential pair
54 PETP(9) Transmitter Lane 9, GND Ground
55 PETN(9) Differential pair GND Ground
56 GND Ground PERP(9) Receiver Lane 9,
57 GND Ground PERN(9) Differential pair
58 PETP(10) Transmitter Lane 10, GND Ground
59 PETN(10) Differential pair GND Ground
60 GND Ground PERP(10) Receiver Lane 10,
61 GND Ground PERN(10) Differential pair
62 PETP(11) Transmitter Lane 11, GND Ground
63 PETN(11) Differential pair GND Ground
64 GND Ground PERP(11) Receiver Lane 11,
65 GND Ground PERN(11) Differential pair
66 PETP(12) Transmitter Lane 12, GND Ground
67 PETN(12) Differential pair GND Ground
68 GND Ground PERP(12) Receiver Lane 12,
69 GND Ground PERN(12) Differential pair
70 PETP(13) Transmitter Lane 13, GND Ground
71 PETN(13) Differential pair GND Ground
C8 TX3- PCIe Transmit Lane 3
C9 GND Ground
B9 GND Ground
B8 RX2- PCIe Receive Lane 2
B7 RX2+ PCIe Receive Lane 2
B6 GND Ground
B5 RX0- PCIe Receive Lane 0
B4 RX0+ PCIe Receive Lane 0
B3 GND Ground
B2 GND GND
B1 PERST# PERST#
A1 REFCLK+ PCIe Reference Clock
A2 REFCLK- PCIe Reference Clock
A3 GND Ground
A4 RX1+ PCIe Receive Lane 1
A5 RX1- PCIe Receive Lane 1
A6 GND Ground
A7 RX3+ PCIe Receive Lane 3
A8 RX3- PCIe Receive Lane 3
A9 GND Ground
M.2 Connectors
M.2 connectors are integrated into the motherboard for supporting flash memory expansion. The M.2
connector pinout is shown in Table 12. For more information about the M.2 interface, refer to the PCI
Express M.2 Specification.
SATA Power Connector
The motherboard supports two 4-pin Mini-Fit® Jr.™ 5566 series power connectors, Molex P/N 39-28-1043
or equivalent, for supplying power to up to 4 SATA devices. Each connector pin has a maximum
13A current capacity. Table 13 describes the connector pinout. Figure 10 shows a top view of the
physical pin numbering.
Pin 4 supplies 5V (red wire, 9A rating).
12V Power Connector
Pin Signal I/O Voltage Description
1 GND I 0V GND from PSU
2 GND I 0V GND from PSU
3 GND I 0V GND from PSU
4 GND I 0V GND from PSU
5 GND I 0V GND from PSU
6 GND I 0V GND from PSU
7 P12V_PSU I 12V 12V Power from PSU
8 P12V_PSU I 12V 12V Power from PSU
9 P12V_PSU I 12V 12V Power from PSU
10 P12V_PSU I 12V 12V Power from PSU
11 P12V_PSU I 12V 12V Power from PSU
12 P12V_PSU I 12V 12V Power from PSU
13 GND I 0V GND from PSU
14 GND I 0V GND from PSU
15 GND I 0V GND from PSU
16 GND I 0V GND from PSU
17 GND I 0V GND from PSU
18 GND I 0V GND from PSU
19 P12V_PSU I 12V 12V Power from PSU
20 P12V_PSU I 12V 12V Power from PSU
21 P12V_PSU I 12V 12V Power from PSU
22 P12V_PSU I 12V 12V Power from PSU
23 P12V_PSU I 12V 12V Power from PSU
24 P12V_PSU I 12V 12V Power from PSU
Figure 12 shows a top view of the Mini-Fit connector pin numbering (pins 1-12 in the top row, pins 13-24 in the bottom row).
Management Connector
The motherboard supports a connector for interfacing management signals from the motherboard to
the WCS P2010 PSU. The connector for supporting the cable is an 18 pin Molex Picoblade™ series
connector, Molex part number 87831-1820 or equivalent. Table 16 describes the connector pinout.
Figure 13 shows a top view of the physical pin numbering.
1U Management Connector
Pin Signal I/O Voltage Description
1 PWR_EN# I 3.3V Enable signal from Rack Manager to HSC on the Server
2 LR_SELECT I RS232 Left/Right Node Select
3 SLOT_ID0 I 3.3V SLOT ID from PMDU to the Server
4 SLOT_ID1 I 3.3V SLOT ID from PMDU to Server
5 SERVER_THROTTLE# I 3.3V Server Throttle control from Rack Manager
6 PRESENT# O 3.3V Indicates Server presence to the Rack Manager
7 I2C_SCL O 3.3V I2C Clock to PSU (PMBus)
8 I2C_SDA I/O 3.3V I2C Data to PSU (PMBus)
9 I2C_GND I 0V GND reference for I2C bus
10 PS_ON# O 3.3V Turns on PSU. Pulled low on MB
11 PSU_ALERT# I 3.3V I2C Alert from the PSU
12 PSU_LED0 I 3.3V PSU LED 0(Green)
13 PSU_LED1 I 3.3V PSU LED 1(Yellow)
14 SLOT_ID2 I 3.3V SLOT ID from PMDU to Server
15 SLOT_ID3 I 3.3V SLOT ID from PMDU to Server
16 SLOT_ID4 I 3.3V SLOT ID from PMDU to Server
17 SLOT_ID5 I 3.3V SLOT ID from PMDU to Server
18 NC NA NA No Connect
Management Expansion Connector
Pin Signal I/O Voltage Description
1 NC NA NA No Connect
2 I2C1_SDA I 3.3V I2C Data (expansion)
3 I2C1_SCL I 3.3V I2C Clock (expansion)
4 GND I 0V Ground
5 NC I 0V Server Throttle control from Rack Manager
6 PRESENT# O 3.3V Indicates Server presence to the Rack Manager
7 I2C2_SCL O 3.3V I2C Clock to expansion PSU (PMBus)
8 I2C2_SDA I/O 3.3V I2C Data to expansion PSU (PMBus)
9 GND I 0V Ground
10 PS_ON# O 3.3V Turns on PSU. Pulled low on MB
11 PSU_ALERT# I 3.3V I2C Alert from the PSU
12 PSU_LED0 I 3.3V PSU LED 0(Green)
13 PSU_LED1 I 3.3V PSU LED 1(Yellow)
14 NC NA NA No Connect
15 NC NA NA No Connect
16 NC NA NA No Connect
17 NC NA NA No Connect
18 NC NA NA No Connect
Figure 14 shows a top view of the management expansion connector pin numbering.
OCuLink x8 Connector
The motherboard supports a connector for cabling PCIe x8 from the motherboard to the FPGA Card.
The connector for supporting the cable is an 80-pin vertical Molex Nanopitch™ series connector, Molex
part number 173162-0334 or equivalent. Table 18 describes the connector pinout. Figure 15 shows a
top view of the physical pin numbering.
Row A                                          Row B
Pin   Signal Name   Description                Pin   Signal Name   Description
A1 GND Ground B1 GND Ground
A2 PERp0 PCIe Receive Data to CPU B2 PETp0 PCIe Transmit Data from CPU
A3 PERn0 PCIe Receive Data to CPU B3 PETn0 PCIe Transmit Data from CPU
A4 GND Ground B4 GND Ground
A5 PERp1 PCIe Receive Data to CPU B5 PETp1 PCIe Transmit Data from CPU
A6 PERn1 PCIe Receive Data to CPU B6 PETn1 PCIe Transmit Data from CPU
A7 GND Ground B7 GND Ground
A8 NC No Connect B8 NC No Connect
A9 NC No Connect B9 NC No Connect
A10 GND Ground B10 GND Ground
A11 NC No Connect B11 NC No Connect
A12 NC No Connect B12 CPRSNT# Cable Present
A13 GND Ground B13 GND Ground
A14 PERp2 PCIe Receive Data to CPU B14 PETp2 PCIe Transmit Data from CPU
A15 PERn2 PCIe Receive Data to CPU B15 PETn2 PCIe Transmit Data from CPU
A16 GND Ground B16 GND Ground
A17 PERp3 PCIe Receive Data to CPU B17 PETp3 PCIe Transmit Data from CPU
A18 PERn3 PCIe Receive Data to CPU B18 PETn3 PCIe Transmit Data from CPU
A19 GND Ground B19 GND Ground
A20 RSVD Reserved B20 RSVD Reserved
A21 RSVD Reserved B21 RSVD Reserved
A22 GND Ground B22 GND Ground
A23 PERp4 PCIe Receive Data to CPU B23 PETp4 PCIe Transmit Data from CPU
A24 PERn4 PCIe Receive Data to CPU B24 PETn4 PCIe Transmit Data from CPU
A25 GND Ground B25 GND Ground
A26 PERp5 PCIe Receive Data to CPU B26 PETp5 PCIe Transmit Data from CPU
A27 PERn5 PCIe Receive Data to CPU B27 PETn5 PCIe Transmit Data from CPU
A28 GND Ground B28 GND Ground
A29 NC No Connect B29 NC No Connect
A30 NC No Connect B30 NC No Connect
NCSI Connector
The motherboard supports a connector for cabling NCSI signals from the motherboard to a Network
Interface Controller (NIC). The connector for supporting the cable is a 14-pin header connector, Molex
part number 87831-1428 or equivalent. Table 19 describes the connector pinout. Figure 16 shows a top
view of the physical pin numbering.
NCSI Connector
Pin Signal I/O Voltage Description
1 RXER O 3.3V Receive Error from NIC to BMC
2 GND I 0V Ground
3 TXD1 I 3.3V Transmit Data from BMC to NIC
4 CLK_50M I 3.3V 50MHz Clock
5 TXD0 I 3.3V Transmit Data from BMC to NIC
6 GND I 0V Ground
7 TXEN I 3.3V Transmit Enable from BMC to NIC
8 GND I 0V Ground
9 CRSDV O 3.3V Receive carrier sense and data valid from NIC to BMC
10 GND I 0V Ground
11 RXD1 O 3.3V Receive Data from NIC to BMC
12 GND I 0V Ground
13 RXD0 O 3.3V Receive Data from NIC to BMC
14 GND I 0V Ground
Fan Control Connector
Pin Signal I/O Voltage Description
1 FAN5_PWM O 5V Fan #5 PWM
2 FAN4_PWM O 5V Fan #4 PWM
3 FAN5_TACH I 5V Fan #5 Tachometer
4 FAN4_TACH I 5V Fan #4 Tachometer
5 P12V O 12V 12V Fan Power
6 P12V O 12V 12V Fan Power
7 GND O 0V Ground
8 GND O 0V Ground
9 FAN3_PWM O 5V Fan #3 PWM
10 FAN2_PWM O 5V Fan #2 PWM
11 FAN3_TACH I 5V Fan #3 Tachometer
12 FAN2_TACH I 5V Fan #2 Tachometer
13 P12V O 12V 12V Fan Power
14 P12V O 12V 12V Fan Power
15 GND O 0V Ground
16 GND O 0V Ground
17 FAN1_PWM O 5V Fan #1 PWM
18 FAN0_PWM O 5V Fan #0 PWM
19 FAN1_TACH I 5V Fan #1 Tachometer
20 FAN0_TACH I 5V Fan #0 Tachometer
21 P12V O 12V 12V Fan Power
22 P12V O 12V 12V Fan Power
23 GND O 0V Ground
24 GND O 0V Ground
Figure 18 shows a top view of the fan control connector pin numbering.
TPM Connector
The motherboard supports a header connector for interfacing to a TPM Module. The connector is FCI
91932-32111L or equivalent. Table 22 describes the connector pinout. Figure 19 shows a top view of
the physical pin numbering.
TPM Connector
Pin Signal I/O Voltage Description
1 SPI_CLK O 3.3V SPI Clock
2 RESET# O 3.3V Reset
3 SPI_MOSI O 3.3V SPI Data Out
4 SPI_MISO I 3.3V SPI Data In
5 SPI_CS# O 3.3V SPI Chip Select
6 I2C_SDA I/O 3.3V I2C Data
7 P3V3_STBY O 3.3V 3.3V Standby Power
8 TPM_PRESENT# I 3.3V Indicates physical presence of TPM
9 SPI_IRQ# I 3.3V Interrupt
10 I2C_SCL O 3.3V I2C Clock
11 GND O 0V Ground
Connector Quality
The Project Olympus system is designed for use in datacenters with a wide range of humidity. The
connectors for these deployments must be capable of withstanding high humidity during shipping and
installation. The baseline plating for DIMM and PCIe connectors is required to be 30μ”-thick gold.
DIMM connectors are also required to include a lubricant/sealant, applied by the connector manufacturer,
that can remain intact after soldering and other manufacturing processes. The sealant is required to displace
any voids in the connector gold plating.
10 Electrical Specifications
The following sections provide specifications for the blade input voltage and current, as well as the
primary blade signals.
12V power to the motherboard is supplied through a 24 pin Mini-Fit Jr or equivalent connector. The
blade provides inrush current control through the 12V bus rail; return-side inrush control is not used.
The inrush current rises linearly from 0A to the load current over a minimum 5 millisecond (ms) period
(this time period must be no longer than 200ms).
The blade also provides a way to interrupt current flow within 1 microsecond (μs) of exceeding the
maximum current load.
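As a worked example of these limits, assuming a 60A steady-state load (the load value is an assumption, not taken from this specification), the 5 ms minimum and 200 ms maximum ramp times bound the inrush slew rate between 0.3 A/ms and 12 A/ms:

```c
/*
 * Worked arithmetic for the inrush requirement above. The 60 A load
 * current is an assumed example value.
 */
#include <stdio.h>

int main(void)
{
    const double load_a   = 60.0;   /* assumed steady-state load current */
    const double t_min_ms = 5.0;    /* minimum linear ramp time per spec */
    const double t_max_ms = 200.0;  /* maximum ramp time per spec */

    printf("max slew: %.1f A/ms\n", load_a / t_min_ms);  /* 12.0 A/ms */
    printf("min slew: %.1f A/ms\n", load_a / t_max_ms);  /*  0.3 A/ms */
    return 0;
}
```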
11 PCB Stack-up
Figure 20 shows the 8-layer PCB stack-up. The stackup uses standard mid-loss FR4 PCB material. The
PCB thickness is 72 mils.
The stack-up table (Figure 20) specifies a 0.072" (72 mil) board with a lead-free/OSP surface finish and Tg > 170°C laminate (TU863/IT170GRA/NPG171); all dimensions are in mils. It defines differential impedance targets of 85 Ohms (PCIe, UPI, DMI, differential clocks, USB 2/3, SATA3, SATAe), 93 Ohms (SFI, 10G-KR option 1), and 100 Ohms (10G-KR option 2), single-ended targets of 50 Ohms and 40 Ohms for DDR, miscellaneous I/O, and single-ended clock routing, and maximum SDD21 insertion-loss limits at 4GHz and 8GHz, together with the trace width, spacing, and impedance tolerance for the outer (L1/L8) and inner (L3/L6) signal layers.
Layer stack (copper weight, thickness in mils):
solder mask, 0.5
L1 SIGNAL, 0.5 oz, 1.9
prepreg, 2.7
L2 GND/POWER, 1.0 oz, 1.3
core, 4.0
L3 SIGNAL/GND, 1.0 oz, 1.3
prepreg, 19.7
L4 GND/POWER, 2.0 oz, 2.6
core, 4.0
L5 GND/POWER, 2.0 oz, 2.6
prepreg, 19.7
L6 SIGNAL/GND, 1.0 oz, 1.3
core, 4.0
L7 GND/POWER, 1.0 oz, 1.3
prepreg, 2.7
L8 SIGNAL, 0.5 oz, 1.9
solder mask, 0.5
Total: 9.0 oz copper, 72.0 mils
12 Physical Specification
The motherboard is fully compatible with the mechanical requirement of the Project Olympus Universal
Motherboard Specification. The motherboard is intended to be deployable in a variety of server
mechanical configurations. Figure 21 depicts the dimensions of the motherboard. The front of the
chassis is on the bottom side. For detailed mechanical information including mounting hole location and
dimensions, reference the Project Olympus mechanical data package.
13 Environmental
The motherboard is designed to be deployed in an environmentally controlled location meeting the
environmental requirements described in Table 24. The server must have the capability to provide full
functional operation under the conditions given.
Specification Requirement