OCS Open CloudServer Blade v2.1
Blade Specification
Version 2.1
Author:
Mark Shaw, Director of Hardware Engineering, Microsoft
Martin Goldstein, Principal Hardware Engineer, Microsoft
Mark A. Shaw, Senior Hardware Engineering Manager, Microsoft
Revision History
Date | Description
10/30/2014 | Version 2.0
2/3/2016 | Version 2.1 – Major updates:
- Support for full width blades
- Support for Future Intel® Xeon® Processor product family, 135W maximum
- Decreased HDD support to enable cooling for 135W processors
- Eliminated support for 10G networking
- Support for external SAS attached JBOD has been removed
http://opencompute.org ii
Open Compute Project Open CloudServer OCS Blade
As of October 30, 2014, the following persons or entities have made this Specification available under the Open Web
Foundation Final Specification Agreement (OWFa 1.0), which is available at http://www.openwebfoundation.org/legal/the-owf-
1-0-agreements/owfa-1-0
Microsoft Corporation.
You can review the signed copies of the Open Web Foundation Agreement Version 1.0 for this Specification at
http://opencompute.org/licensing/, which may also include additional parties to those listed above.
Your use of this Specification may be subject to other third party rights. THIS SPECIFICATION IS PROVIDED "AS IS." The
contributors expressly disclaim any warranties (express, implied, or otherwise), including implied warranties of merchantability,
noninfringement, fitness for a particular purpose, or title, related to the Specification. The entire risk as to implementing or
otherwise using the Specification is assumed by the Specification implementer and user. IN NO EVENT WILL ANY PARTY BE
LIABLE TO ANY OTHER PARTY FOR LOST PROFITS OR ANY FORM OF INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES OF ANY CHARACTER FROM ANY CAUSES OF ACTION OF ANY KIND WITH RESPECT TO THIS SPECIFICATION OR ITS
GOVERNING AGREEMENT, WHETHER BASED ON BREACH OF CONTRACT, TORT (INCLUDING NEGLIGENCE), OR OTHERWISE, AND
WHETHER OR NOT THE OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
CONTRIBUTORS AND LICENSORS OF THIS SPECIFICATION MAY HAVE MENTIONED CERTAIN TECHNOLOGIES THAT ARE MERELY
REFERENCED WITHIN THIS SPECIFICATION AND NOT LICENSED UNDER THE OWF CLA OR OWFa. THE FOLLOWING IS A LIST OF
MERELY REFERENCED TECHNOLOGY: INTELLIGENT PLATFORM MANAGEMENT INTERFACE (IPMI), I2C TRADEMARK OF PHILIPS
SEMICONDUCTOR. IMPLEMENTATION OF THESE TECHNOLOGIES MAY BE SUBJECT TO THEIR OWN LEGAL TERMS.
http://opencompute.org iii
Contents
1 Overview of V2.1 Open CloudServer Specifications
4 Blade Features
5.4 HDDs
6 PCB Stackup
12.2 Current Interrupt Protection and Power, Voltage, and Current Monitoring
Table of Figures
Figure 1: View of OCS with rack
Figure 2: Second view of OCS
Figure 3: V2.1 OCS half-width blade
Figure 4: V2.1 OCS full-width blade
Figure 5: Baseline configuration of blade
Figure 7: Major Component Labeling
Figure 8: PCB Stackup
Figure 9: CPU-to-tray backplane mezzanine PCIe link topology
Figure 10: NIC 40GbE Topology
Figure 11: PCIe M.2 topology
Figure 12: Blade PCIe riser topology
Figure 13: AirMax power receptacle pinout arrangement
Figure 14: Example of a coplanar blade signal connector
Figure 15: AirMax VS2 pinout arrangement
Figure 16: Blade management block diagram
Figure 17: PCH / BMC I2C block diagram
Figure 18: Temperature sensor locations
Figure 19: HSC functional block diagram
Figure 20: Front blade LED locations
Figure 21: Rear attention LED location
Figure 22: Power Capping Block Diagram
Figure 23: NIC block diagram
Figure 24: Mechanical control outline
Figure 25: Dimensions of the volume that holds the blade
Figure 26: Two blades on a single tray
Figure 27: Blade-mounting envelope, rear view
Figure 28: Front surface dimple
Figure 29: Guide and latch details, top view
Figure 30: Example of a front-blade guide and latch
Figure 31: Example of a rear-blade guide pin
Figure 32: Tray with EMI enclosure, blade volume shown
Figure 33: Blade EMI seal
Table of Tables
Table 1: List of specifications
Table 2: Blade features
Table 3: Disk Drive SATA Port Assignments
Open CloudServer OCS Chassis Specification Version 2.0 | Describes the hardware used in the Version 2.0 (V2.0) OCS system, including the chassis, tray, and systems management.
Open CloudServer OCS Blade Specification Version 2.1 | Describes the blade used in the V2.1 OCS system, including interconnect and blade hardware and blade management.
Open CloudServer OCS Tray Mezzanine Specification Version 2.0 | Describes the tray mezzanine card used in the V2.0 OCS system, including interconnect, hardware, and management.
Open CloudServer OCS NIC Mezzanine Specification Version 2.0 | Describes the Network Interface Controller (NIC) mezzanine card used in the V2.0 OCS system.
Open CloudServer OCS Chassis Management Specification Version 2.0 | Describes the chassis manager command-line interface (CLI).
This document is intended for designers and engineers who will be building blades for an OCS
system.
OCS is an off-the-shelf (OTS) commodity rack that is loaded with up to four modular chassis, each
with trays, power supplies, power distribution, rack management, system fans, and two side-walls,
as shown in Figure 1 and Figure 2.
http://opencompute.org 1
Figure 1: View of OCS with rack
Each chassis supports 12 rack-unit trays (EIA 310-E standard U or 1U, each 17.7" wide and 1.75" tall)
that house up to 24 individual OCS blades (two blades per tray). Blades can be designed to use the
full width of the tray. It is also possible to use multiple rack units to house a single tall blade, with
certain restrictions.
Power, management, and networking are delivered through the tray backplane (TB) and the power
distribution backplane (PDB). The tray backplane is located at the back of each tray. The power
distribution backplane attaches vertically to the individual trays on one side and to the power
supply unit (PSU) on the other side. This arrangement reduces the current carrying requirements of
the distribution board, eliminates cabling, and reduces costs.
Power and management signals are received from the PDB and distributed to the blades. Ethernet
networking cables pass through a blind-mate connector and are routed to attachments at the rear of
the chassis; running the cables through the rear of the blade eliminates the need to connect directly
to the servers. Once provisioned, the network cabling should only be touched when a cable or switch
fails or the topology is changed. The type and number of networking switches depends on the
specific deployment.
Following are the significant changes from the previous generation (V2.0) of the blade:
http://opencompute.org 3
Figure 4: V2.1 OCS full-width blade
- Support for up to four HDDs: two via motherboard connector, two via Serial AT Attachment (SATA) cables to the Platform Controller Hub (PCH)
- Support for up to four SATA Small Form Factor (SFF) SSDs via SATA cable to the PCH
- Support for up to four Samtec Peripheral Component Interconnect Express (PCIe) x8 slots, with each slot capable of supporting two M.2 modules through an interposer board
- Support for a standard PCIe x8 card via a riser attached to the Samtec PCIe x8 edge connector
- Support for a Network Interface Controller (NIC) mezzanine card
- Support for a tray backplane mezzanine card
4 Blade Features
Table 2 lists features supported by the new blade design.
http://opencompute.org 5
Table 2: Blade features
Processor
Memory
On-board devices
Server management
System firmware
PCI-Express expansion
Networking
5.2 DIMMs
Figure 6 shows how the DIMMs will be labelled. The DIMMs will be color coded to ease loading.
Two colors will be used: one color for DIMMs A1,B1,C1,D1,E1,F1,G1,H1, and a second color for
http://opencompute.org 7
DIMMs A2,B2,C2,D2,E2,F2,G2,H2. Color coding can use either the DIMM connector body or the
latch. Note that color selection is at the discretion of the manufacturer.
5.4 HDDs
HDDs shall be labelled as shown in Figure 6. A label detailing the HDD numbers shall be available on the
frame for reference by service personnel. The drives shall be assigned SATA ports as shown in Table 3:
Disk Drive SATA Port Assignments so that drive locations are common across WCS blades and so that a
failed drive can be readily located and serviced. Loading of the drives shall typically be governed by the
configuration, but in the event of a configuration supporting partial loading, drives should be loaded in
numerical order.
6 PCB Stackup
Figure 7 shows the recommended 10-layer dual stripline PCB stackup. The stackup uses standard
FR4 PCB material. The PCB thickness requirement will be ~93mils.
Layer Name | Layer Type | Thickness (mils) | Copper Weight (oz)
Soldermask | | 0.5 |
Signal 1 | SIGNAL | 1.9 | 1.5
Prepreg | | 2.7 |
Plane 2 | GND VDD GND | 1.3 | 1.0
Core | | 4.0 |
Signal 3 | SIGNAL GND SIGNAL | 1.3 | 1.0
Prepreg | | 25.0 |
Signal 4 | GND SIGNAL GND | 1.3 | 1.0
Core | | 4.0 |
Plane 5 | Power GND Power | 2.6 | 2.0
Prepreg | | 4.0 |
Plane 6 | Power GND Power | 2.6 | 2.0
Core | | 4.0 |
Signal 7 | GND SIGNAL GND | 1.3 | 1.0
Prepreg | | 25.0 |
Signal 8 | SIGNAL GND SIGNAL | 1.3 | 1.0
Core | | 4.0 |
Plane 9 | GND VDD GND | 1.3 | 1.0
Prepreg | | 2.7 |
Signal 10 | SIGNAL | 1.9 | 1.5
Soldermask | | 0.5 |
Total | | 93.2 +/-9 |
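As a quick sanity check, the nominal layer thicknesses listed above can be summed to confirm the ~93 mil board thickness. The short sketch below is illustrative only; the values are simply copied from the table.

```python
# Illustrative check: sum the nominal layer thicknesses from the stackup table.
# Values are in mils, copied from the table above; tolerance is +/-9 mils.
stackup_mils = [
    0.5,   # soldermask
    1.9,   # signal 1
    2.7,   # prepreg
    1.3,   # plane 2
    4.0,   # core
    1.3,   # signal 3
    25.0,  # prepreg
    1.3,   # signal 4
    4.0,   # core
    2.6,   # plane 5
    4.0,   # prepreg
    2.6,   # plane 6
    4.0,   # core
    1.3,   # signal 7
    25.0,  # prepreg
    1.3,   # signal 8
    4.0,   # core
    1.3,   # plane 9
    2.7,   # prepreg
    1.9,   # signal 10
    0.5,   # soldermask
]
total = sum(stackup_mils)
print(f"Nominal thickness: {total:.1f} mils")  # 93.2 mils, i.e. ~93 mils
```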
Table 4 lists the recommended PCIe mapping for the design. This mapping is used to determine
feasibility of stackup support for the PCIe routing. Note that this is informational only; actual
implementation may vary.
http://opencompute.org 9
CPU PCIe bus Destination Layer
[Figure: NIC 40GbE topology – the 40GbE network controller Port 2 on the NIC mezzanine (SEAM/SEAF connector) routes lanes L0–L3 across the blade motherboard (<3.5") and the AirMax VS2 connectors to the tray backplane (<1.6"), then through a 40GbE QSFP+ port and cable (<3m) to the network switch.]
[Figure: Blade PCIe topology – blade motherboard, PCIe bridge board, PCIe card, and M.2 modules.]
Uses device local PFAIL circuit. Save initiated by PERST#.
8.1 M.2
The design will support M.2 storage in PCIe SLOTS 1-4. If the M.2 contains a local PFAIL solution, the
solution will reside within the volume space designated for the M.2 module.
9 Blade Interconnects
The tray (or other supporting infrastructure) provides the electrical interface to the blade using the
connectors listed in Table 5 (or their functional equivalents). Note that the choice of these
connectors is based on a coplanar PCB for power and network distribution.
Qty | Connector description | Blade connector Manufacturer Part Number (MPN) | TBP mating connector MPN
| contact, 2mm spacing, 17mm pitch | |
2 | Guide pin receptacle, 10.8mm right angle, 0° key | (FCI) 10037912-101LF (or equivalent) | (FCI) 10044366-101LF (or equivalent)
The chassis will provide electrical power and signaling to the tray. The tray will provide power to
the blade. The blade interface to the tray backplane will provide power and high speed signaling for
the tray mezzanine card on the tray backplane.
The interface to the tray backplane from a motherboard shall consist of four connectors:
1. An AirMax power connector for sourcing 12V power from PDB to blade motherboard.
2. An AirMax VS2 5x10 primarily to interface PCIe x16 Gen 3 to the tray backplane.
3. An AirMax VS2 3x6 to interface 10GbE and management signals to the tray backplane.
4. An AirMax VS2 3x6 primarily to interface SAS channels to the tray backplane.
The total amount of force required to mate the blade to the tray backplane will not exceed 18.6
pounds throughout the expected service life of the connector set. With the leverage provided by
the latch at the face of the blade, the force required will not exceed 3.15 pounds. The retention
force of the connectors is a minimum of 5.66 pounds, which equates to 0.94 pounds minimum
force at the latch.
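The latch leverage implied by these numbers can be illustrated with a short calculation. This is a sketch only; the exact leverage ratio is a property of the latch design and is not stated explicitly here.

```python
# Illustrative arithmetic: the latch leverage implied by the figures above
# (18.6 lb at the connectors vs. 3.15 lb at the latch).
mate_force_connectors = 18.6   # lb, maximum over the connector service life
mate_force_latch = 3.15        # lb, maximum required at the blade latch
leverage = mate_force_connectors / mate_force_latch   # ~5.9 : 1

retention_connectors = 5.66    # lb, minimum connector retention force
retention_latch = retention_connectors / leverage     # ~0.96 lb (spec quotes 0.94 lb minimum)
print(f"leverage ~{leverage:.1f}:1, latch retention ~{retention_latch:.2f} lb")
```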
http://opencompute.org 13
Figure 12: AirMax power receptacle pinout arrangement
The maximum power that can be delivered to a blade through this connector is 480W, assuming
the connector supports 40A with a 30°C rise. Above this current, the Hot Swap Controller (HSC)
should disable power to protect the hardware.
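The 480W figure follows directly from the connector current rating at the nominal bus voltage; a minimal sketch of that check, assuming the nominal 12V rail, is shown below.

```python
# Illustrative: maximum power through the AirMax power connector at the
# nominal 12V rail, given the 40A rating (30 degC rise) quoted above.
bus_voltage = 12.0        # V, nominal blade input rail
connector_rating = 40.0   # A, connector rating at 30 degC rise

max_power = bus_voltage * connector_rating
print(f"Maximum deliverable power: {max_power:.0f} W")   # 480 W

def hsc_should_trip(measured_current_a: float) -> bool:
    """The Hot Swap Controller should remove power above the connector rating."""
    return measured_current_a > connector_rating
```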
The signal connectors used are from the FCI AirMax VS2 family. Figure 13 shows an example of an
AirMax coplanar connector pair.
There are three AirMax connectors interfacing the blade to the tray backplane:
The previous-generation blade used the AirMax VS 3x6 connector. Note that the VS and VS2 family
connectors are plug-in compatible.
The AirMax connector is organized as a grid, with rows A through L and columns 1 through 10. Note
that the columns flip from header to receptacle so that the mated pairs match. Figure 14 shows the
signal connector layout, with the blade header on the left and the tray backplane receptacle on the
right.
http://opencompute.org 15
Table 8 describes the signals used in this interface.
Bus type | I/O | Logic | Definition (three pair, eight column)
PCIE_B2T_TX_DP/N[15:0] | O | Current Mode Logic (CML) | PCIe Gen 3 data from blade to tray mezzanine
PCIE_T2B_RX_DP/N[15:0] | I | CML | PCIe Gen 3 data from tray mezzanine to blade
CLK_100M_P/N[3:0] | O | CML | 100MHz PCIe Gen 3 clocks
PCIE_RESET_N[3:0] | O | 3.3V | PCIe reset signals
WAKE_PCIE_N | I | 3.3V | PCIe wake signal
PCIE_CFG_ID[1:0] | I | 3.3V | PCIe configuration ID bits. Should be connected to General Purpose Input/Output (GPIO) on the PCH. Should be pulled up with minimum 10K ohm resistor; has 1K pulldown on tray mezzanine. 00 = 1 x16 bifurcation, 01 = 2 x8 bifurcation, 10 = 4 x4 bifurcation, 11 = N/A
MEZZ_SDA/SCL | I/O | 3.3V | I2C from blade BMC to tray mezzanine
MEZZ_PRESENT_N | I | 3.3V | Indicates tray mezzanine card is installed. This signal should be pulled up on the blade and grounded on the tray mezzanine card
MEZZ_EN | O | 3.3V | Enable for tray mezzanine card on-board power
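The PCIE_CFG_ID[1:0] encoding above lends itself to a simple decode in platform firmware or test scripts. The sketch below is illustrative only and simply mirrors the table; it is not an API defined by this specification.

```python
# Illustrative decode of the PCIE_CFG_ID[1:0] strap read from the tray mezzanine.
# Values mirror the table above; '11' is not assigned.
BIFURCATION = {
    0b00: "1 x16",
    0b01: "2 x8",
    0b10: "4 x4",
}

def decode_pcie_cfg_id(cfg_id: int) -> str:
    """Return the PCIe bifurcation implied by the two PCIE_CFG_ID bits."""
    try:
        return BIFURCATION[cfg_id & 0b11]
    except KeyError:
        raise ValueError("PCIE_CFG_ID 0b11 is reserved / not assigned")

print(decode_pcie_cfg_id(0b01))  # "2 x8"
```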
P12V_MEZZ and P5V_USB pins are assumed to support a derated 500mA per pin; therefore,
P12V_MEZZ supports a derated maximum of 3A, and P5V_USB supports 500mA.
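The rail limits quoted above follow from the per-pin derating; the sketch below makes the implied pin counts explicit. The six P12V_MEZZ pins and single P5V_USB pin are an inference from the 3A and 500mA totals, not values stated directly in this section.

```python
# Illustrative: derated rail capacity = derated current per pin x pin count.
# Pin counts are inferred from the totals quoted above, not from a pinout table.
DERATED_AMPS_PER_PIN = 0.5

p12v_mezz_pins = 6   # inferred: 6 x 0.5A = 3A derated maximum
p5v_usb_pins = 1     # inferred: 1 x 0.5A = 500mA

print(f"P12V_MEZZ: {p12v_mezz_pins * DERATED_AMPS_PER_PIN:.1f} A")
print(f"P5V_USB:   {p5v_usb_pins * DERATED_AMPS_PER_PIN:.1f} A")
```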
Table 9 shows how the reset and clock signals are mapped to support each of the bifurcation cases
for PCIe to the tray mezzanine.
Table 10 shows the pinout for the AirMax 10Gb/40Gb Ethernet header.
http://opencompute.org 17
Table 10: AirMax VS2 10/40GbE connector pinout
Table 11 describes the signals used in this interface. Note that 10G networking is no longer supported,
but its signals are still defined on the connector.
Bus type I/O Logic Definition for three pair, eight column
Table 12 shows the pinout for the AirMax SAS connector. Connector #1 supports SAS lanes 1-8.
Since external JBODs are no longer supported, this connector may be depopulated.
Bus type I/O Logic Definition for three pair, eight column
http://opencompute.org 19
Following are the skew compensation divisions between the blade motherboard and the tray
backplane:
The interposer edge card interfaces to the blade motherboard through a Samtec HSEC8-150-01-S-
DV-A connector (or equivalent). The interposer module supports the 60mm, 80mm, and 110mm
form factors (Type 2260, 2280, and 22110). To support two M.2 modules, the connector interface is
designed to support two PCIe Gen3 x4 interfaces as well as the SSD-specific signals, per the PCIe M.2
specification.
The interface is also designed to support a standard PCIe x8 interface through a separate riser card.
The bifurcation is communicated through the LINK_WIDTH signal (Pin B3), which should be
connected to the PCH. The PCIe card will not require I2C or Joint Test Action Group (JTAG)
connections to the motherboard. Table 14 shows the pinout for supporting only the M.2 interposer
module.
Pin 11 | REFCLK2- | Module 2 reference clock differential pair | Pin 12 | CLKREQ2 | Ref clock request (OD)
Pin 13 | GND | Ground | Pin 14 | SUSCLK | Suspend clock (32.768kHz)
Pin 15 | 3.3V | 3.3V power | Pin 16 | DAS/DSS# | Drive active indicator
Pin 17 | DEVSLP | Device sleep | Pin 18 | 3.3V | 3.3V power
Pin 19 | 3.3V | 3.3V power | Pin 20 | 3.3V | 3.3V power
Pin 21 | WAKE# | Ground | Pin 22 | PERST# | PCIe reset
Pin 23 | 3.3V STBY | 3.3V standby power | Pin 24 | GND | Ground
Pin 25 | GND | Ground | Pin 26 | REFCLK1+ | Module 1 reference clock differential pair
Pin 27 | PETp(0) | Transmitter module 1 lane 0 differential pair | Pin 28 | REFCLK1- | Module 1 reference clock differential pair
Pin 29 | PETn(0) | Transmitter module 1 lane 0 differential pair | Pin 30 | GND | Ground
Pin 31 | GND | Ground | Pin 32 | PERp(0) | Receiver module 1 lane 0 differential pair
Pin 33 | PRSNT2# | Hotplug detect | Pin 34 | PERn(0) | Receiver module 1 lane 0 differential pair
Signals will satisfy the electrical requirements of the PCIe M.2 Specification and the PCIe Card
Electromechanical Specification. Note that the table includes columns to indicate whether a signal is
required for use by the M.2 interposer module and/or the PCIe riser. Only slot 4 is required to
support both the M.2 interposer and the PCIe riser.
Table 17 shows the pinout for the NIC mezzanine connector.
P3E_CPU1_LAN_RX_DP/N[7:0] | I | CML | PCIe Gen3 from the NIC mezzanine to the CPU
P3E_CPU1_LAN_TX_DP/N[7:0] | O | CML | PCIe Gen3 from the CPU to the NIC mezzanine
MEZZ_PRESENT_N | I | 3.3V | Mezzanine present; should be GND on mezzanine
NWK_1_TX[3:0]P/N | I | CML | Port 1 10GbE transmit from mezzanine to tray backplane
NWK_1_RX[3:0]P/N | O | CML | Port 1 10GbE receive from motherboard to tray backplane
NWK_2_TX[3:0]P/N | I | CML | Port 2 40GbE transmit from mezzanine to tray backplane
NWK_2_RX[3:0]P/N | O | CML | Port 2 40GbE receive from tray backplane to mezzanine
NIC_MEZZ_ID[1:0] | I | 3.3V | NIC mezzanine ID; should connect to BMC
The 12V and 5V power rails are supplied directly from the main 12V and 5V power supply rails.
These rails need to drive at most six 3.5” LFF HDDs or Small Form Factor (SFF) SSDs. Table 19 lists
the signal names and current capacities.
http://opencompute.org 25
Table 19: SATA power connector signal names and current capacities
Data
Pin | Signal Name | Signal Description
1 | GND | Ground
2 | A+ | Transmit +
3 | A- | Transmit -
4 | GND | Ground
5 | B- | Receive -
6 | B+ | Receive +
7 | GND | Ground
Power
Pin | Signal Name | Signal Description
1 | V33 | 3.3V power
2 | V33 | 3.3V power
3 | V33 | 3.3V power, pre-charge, 2nd mate
4 | Ground | 1st mate
5 | Ground | 2nd mate
6 | Ground | 3rd mate
7 | V5 | 5V power, pre-charge, 2nd mate
8 | V5 | 5V power
9 | V5 | 5V power
10 | Ground | 2nd mate
11 | Optional GND | -
12 | Ground | 1st mate
13 | V12 | 12V power, pre-charge, 2nd mate
14 | V12 | 12V power
15 | V12 | 12V power
10 Management Subsystem
Management circuitry for the blade uses the Intel® Manageability Engine (ME) combined with the
Baseboard Management Controller (BMC). This section describes the requirements for
management of the blade. Primary features include:
http://opencompute.org 27
Figure 15 shows the management block diagram.
[Figure: Blade management block diagram – CPU 1, PCH, and BMC (AST1050) with DDR4 VRDs, QPI links, DMI2, PECI/JTAG, SATA/sSATA HDD connectors and Mini SAS HD, SPI BIOS and BMC flash, TPM, CPLD, Port 80 POST LEDs, UART debug/console connectors, USB 2.0 headers, recovery/security jumpers, and tray mezzanine signals (MEZZ_EN, MEZZ_PRESENT_N, BLADE_MATED_N<1:0>, BLADE_EN1/2, SKU_ID[2:0]).]
- Embedded ARM926EJ
- Embedded 16KB/16KB cache
- Synchronous Dynamic Random Access Memory (SDRAM) memory up to 512MB
- NOR/NAND/Serial Peripheral Interface (SPI) flash memory
- I2C System Management Bus (SMBus) controller
- 5 UART 16550 controllers
- LPC bus interface
- Up to 152 GPIO pins
10.2 DRAM
The BMC requires 128 MB of Double Data Rate 3 (DDR3) memory.
- UART3: Primary communication with CM for control of the blade.
- UART1: Secondary communication with CM; primarily used for console debug.
- UART5: BMC debug port; connects only to a debug connector on the blade.
10.6 PECI
The blade shall use the PECI host controller inside the CPU to support integrated thermal
monitoring of the two CPU sockets.
http://opencompute.org 29
10.8 PCH/BMC I2C
The design will include a number of I2C devices/functions available to the BMC and PCH; Figure 16
shows the block diagram. A brief description of the components follows. Some of the block
components are specific with respect to the requirements of the device and its slave address, while
others (such as voltage regulators) are more general, requiring that the functionality be placed on a
specific I2C bus but not requiring that a specific solution (vendor part number) be used. Care should
be taken to electrically isolate components that are powered from separate power domains but are
located on the same I2C bus. Note that the addresses shown are 8-bit addresses with the Read/Write
(R/W) bit as the Least Significant Bit (LSB) set to 0 (0xA8 = 1010100x).
[Figure: PCH/BMC I2C block diagram – PCH SMBus, SML0, and SML1; BMC (AST1050) I2C ports SD2–SD7; 932SQ420 clock buffer (0xD2), ICS9ZX21901 clock buffer (0xD8), ICS9FGP204 clock generator (0xD0), AT24C64 FRUID EEPROM (0xA8); CPU XDP, debug connectors, and tray mezzanine.]
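Because the addresses in the block diagram are written in 8-bit form with the R/W bit in the LSB, tooling that expects 7-bit addresses must shift them down by one bit. The small conversion below is illustrative only, using the device addresses shown in the diagram.

```python
# Illustrative: convert the 8-bit (R/W-in-LSB) addresses used in the block
# diagram to the 7-bit form most I2C tools expect.
def to_7bit(addr_8bit: int) -> int:
    return addr_8bit >> 1

devices_8bit = {
    "932SQ420 clock buffer": 0xD2,
    "ICS9ZX21901 clock buffer": 0xD8,
    "ICS9FGP204 clock generator": 0xD0,
    "AT24C64 FRUID EEPROM": 0xA8,
}

for name, addr in devices_8bit.items():
    print(f"{name}: 8-bit 0x{addr:02X} -> 7-bit 0x{to_7bit(addr):02X}")
# e.g. 0xA8 -> 0x54
```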
Voltage regulators that support I2C or PMBus should be available to PCH MEXP_SMB0, PCH SMB,
and BMC I2C Port 3. This includes CPU and memory subsystem regulators, at a minimum. The Intel®
ME is responsible for enabling power (through a programmable logic device) to any voltage
regulators that are not on AUX power (i.e., enabled automatically by the presence of 12V).
Clock circuitry that supports I2C should be available to PCH MEXP_SMB0, PCH SMBus, and BMC I2C
Port 3. The block diagram shows three clock devices that are common to this architecture (actual
implementation may vary).
The blade will include a 64Kb serial EEPROM MPN AT24C64 (or equivalent) to store manufacturing
data. The device should be available to PCH MEXP_SMB0, PCH SMB, and BMC I2C Port 3.
The blade will include support for a minimum of two temperature sensors, MPN TMP75 (or
equivalent), for monitoring the inlet and outlet temperatures of the blade. The sensors will be
available on BMC I2C port 6. Figure 17 shows temperature sensor locations on the blade.
For in-rush current protection on the blade, an HSC that includes support for the PMBus 1.2
interface will be used on the motherboard. The device will support an I2C polling rate of 10ms. The
HSC will be available to the Intel® ME SML1 and will provide its ALERT# signal to a Multipath
General-Purpose Input/Output (MGPIO) on the PCH. The Intel ME SML1 will be dedicated to the
HSC to provide fast response time to power excursions.
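As an illustration of how management firmware might poll the HSC over PMBus at the 10ms rate described above, the sketch below uses the standard PMBus READ_PIN command (0x97) with Linear-11 decoding. The `read_word` helper is a hypothetical placeholder for the platform's I2C/SMBus stack, and the data format of any particular HSC should be confirmed against its datasheet.

```python
import time

# PMBus standard command code (per the PMBus specification).
READ_PIN = 0x97   # input power

def linear11_to_real(raw: int) -> float:
    """Decode a PMBus Linear-11 word: 5-bit signed exponent, 11-bit signed mantissa."""
    exponent = raw >> 11
    mantissa = raw & 0x7FF
    if exponent > 0x0F:          # sign-extend the 5-bit exponent
        exponent -= 0x20
    if mantissa > 0x3FF:         # sign-extend the 11-bit mantissa
        mantissa -= 0x800
    return mantissa * (2.0 ** exponent)

def poll_hsc_power(read_word, interval_s: float = 0.010, samples: int = 10):
    """Poll blade input power at the 10ms rate described above.

    `read_word(command)` is a hypothetical helper that performs a PMBus
    Read Word transaction to the HSC; wire it to your I2C/SMBus stack.
    """
    readings = []
    for _ in range(samples):
        readings.append(linear11_to_real(read_word(READ_PIN)))
        time.sleep(interval_s)
    return readings
```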
http://opencompute.org 31
Following is a list of devices known to satisfy the platform requirements (equivalent or newer devices
may exist):
Intel® Node Manager requires the ability to read blade power levels from the HSC to detect
conditions where total blade power exceeds the desired level for power capping. Operation of the
HSC is controlled by the BLADE_EN1 and BLADE_MATED signals from the tray backplane, as
described in the sections that follow.
To disable the power, the CMC pulls the Blade_EN1 signal to reference ground. The Blade_EN1
signal should not be pulled low on the blade or driven back to the Chassis Manager.
If BLADE_EN1 is used to disable power to the blade, the BMC will detect this event and disable
logging to prevent spurious messages from propagating to the log files.
To support potential blade or tray backplane SKUs in which connectors could be depopulated, the
BLADE_MATED_N circuitry should allow for depopulation of components to meet the
BLADE_MATED_N requirement from either connector.
Figure 18 shows the signal connections and contact sequence. The 3.3V pull-ups shown should be
derived from the 12V raw input. Note that the block diagram is intended to be functional only and
does not describe the actual circuitry required for the intended logic.
To support possible SKUs of the tray backplane in which connectors can be removed, the design
should include a method for grounding the BLADE_EN1 pin from the PCIe AirMax connector in case
this connector is depopulated. This could be accomplished through a BOM load or other methods.
[Figure: BLADE_EN1 and BLADE_MATED_N signal connections and contact sequence – BLADE_EN1 on a short pin of the PCIe AirMax VS2 connector and BLADE_MATED_N on a short pin of the GbE AirMax VS2 connector (with a long-pin ground), with 3.3V pull-ups and the En/UV inputs on the blade motherboard, connecting to the tray backplane and tray mezzanine.]
http://opencompute.org 33
Table 21: Signal interpretation
BLADE_EN1 on blade | BLADE_MATED_N on blade | Blade/hot swap controller status
The blade will support I2C for the tray mezzanine card and the NIC mezzanine card. It is expected
that standard PCIe modules will support Address Resolution Protocol (ARP) per the I2C
specification. This should be verified with the specification for specific mezzanine cards. The
modules shall be accessible to the I2C ports as shown in the block diagram.
The Intel ME shall be enabled (powered) in all states (i.e., it must be powered from AUX power).
The following signals will be driven or received as MGPIO signals of the PCH (additional
SmaRT/CLST logic will be supported on the PCB):
o 12V HSC ALERT#
o SMBAlert#, SMBAlert_EN#
o PROCHOT# and MEMHOT# for each CPU
Each blade has three LEDs: a front power/status LED that is green/amber, and two attention LEDs
(front and back) that are red. Figure 19 shows the location of the front LEDs. The location of the
rear LED is shown in Figure 20. The visible diameter and brightness requirements of the LEDs are
TBD.
The rear attention LED is located near the AirMax 3x6 SAS connector, as shown in Figure 20. The
LED is placed so that it is visible externally through the blade and tray backplane connectors.
When a blade is first inserted, the LED will turn amber if 12V is present at the output of the HSC.
This ensures that the backplane has 12V power, the tray backplane connectors are mated, and the
Blade_EN1 signal is asserted.
http://opencompute.org 35
When the blade’s management software turns on the system power (CPU/Memory/PCIe), the
power status LED turns green. Note that the power status LED may be driven by an analog resistor
network tied directly to a power rail, and is not an indication of the health of the blade. Table 22
describes the operation of the blade power status LED.
Off | Blade is not fully inserted, 12V power is absent, or Blade_EN1 is de-asserted.
Solid Amber ON | Blade is inserted, Blade_EN1 is asserted, 12V power output from the Hot Swap Controller is present.
Solid Green ON | Indicates that the BMC is booted and system power is enabled (CPU/Memory/PCIe).
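A minimal sketch of the LED decision described in Table 22 is shown below. This is illustrative pseudologic only; as noted above, the amber indication may in practice come from an analog resistor network on the HSC 12V output rather than from firmware.

```python
# Illustrative mapping of blade state to the front power/status LED, mirroring
# Table 22. In hardware the amber state may simply be an analog resistor
# network on the HSC 12V output, not firmware-driven.
def power_status_led(hsc_12v_present: bool, bmc_booted: bool, system_power_on: bool) -> str:
    if not hsc_12v_present:
        return "OFF"          # not fully inserted, 12V absent, or Blade_EN1 de-asserted
    if bmc_booted and system_power_on:
        return "GREEN"        # BMC booted and CPU/Memory/PCIe power enabled
    return "AMBER"            # inserted, Blade_EN1 asserted, HSC 12V output present

assert power_status_led(False, False, False) == "OFF"
assert power_status_led(True, False, False) == "AMBER"
assert power_status_led(True, True, True) == "GREEN"
```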
The blade attention LED directs the service technicians to the blade that requires repair. The
technician can remove the blade from the rack and replace it with an operational blade.
The attention LED is driven by a single BMC GPIO. Table 23 describes the operation of the blade
attention LED.
Note that a second red Attention LED is located at the rear of the blade so that it is visible through
the tray when the fan door is opened. This LED is set at the same time as the front red attention
LED, and indicates from the rear of the chassis which blade needs attention.
- I2C debug headers on the CPU SMBus, PCH SMBus, Intel ME SML I2C, and BMC I2C. Each header will be a 3-pin header compatible with standard I2C protocol analyzers (such as the Beagle Protocol Analyzer or Aardvark Host Adapter). Note that the CPU SMBus can also be accessed through the CPU XDP connector.
- Debug connector on all three BMC UARTs as shown in the block diagram (Figure 15). Connector TBD.
- Port 80 POST LED support on the LPC bus as shown in the block diagram (Figure 15).
- BIOS debug support, including:
  o Socketed BIOS flash (to be removed for production)
  o BIOS recovery jumper connected to PCH GPIO
  o Flash security override driven from BMC GPIO (to PCH)
- Power button support.
- CPU XDP. Recommend support for ITP60c, or otherwise ensure that it will be mechanically possible to solder on the XDP connector post-production for debug.
- Dual USB 2.0 headers connected to PCH USB2 Port 1 and Port 9. Dongles will be readily available to provide standard USB connectors (Port 1 on EHCI#1 and Port 9 on EHCI#2 [assuming 0-based numbering] of the PCH).
- USB 2.0 debug port connected to PCH USB2 Port 0 for BIOS and OS debug. These ports can be implemented using 4-pin header connectors to save space.
- BMC disable jumper attached to GPIO on the BMC.
- Intel ME recovery mode jumper.
- HW jumper to enable BIOS serial debug output.

Placement of the debug connectors will not obstruct the installation of any optional assemblies
such as the PCIe RAID card or SSDs. It is acceptable that a debug feature is not accessible if an
optional assembly is installed, unless the feature supports debug of that optional assembly.
The figure below shows a logical block diagram for this functionality. Note that additional logic is
shown to enable control of power capping from either the CM or the BMC and to enable or disable
the power capping feature. It is recommended that this logic be implemented in a CPLD if available.
- When enabled, the typical workflow is as follows (sketched in the example below):
  o Power Supply drives PSU_ALERT# low
  o PSU_ALERT# drives PROCHOT#/MEMHOT# and PCH SMB_ALERT# (GPIO31)
  o BMC interrupt on PSU_ALERT# issues default power cap over IPMI link (SML0)
  o BMC deasserts ENABLE signal (to disable PSU_ALERT# monitoring)
  o PROCHOT# switch is reset by BMC when default power cap is removed. This re-arms the functionality by unmasking PSU_ALERT# and asserting PSU_ALERT_EN
- Additionally:
  o BMC can manually assert/deassert PCH SMB_ALERT# using FM_NM_THROTTLE#
  o BMC can enable/disable the power capping features using PSU_ALERT_EN
  o PSU_ALERT# can be asserted by SMALERT# from the PSU or directly from the BMC
  o PROCHOT#/MEMHOT# can be asserted by the overcurrent/undervoltage monitor of 12V
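The following sketch expresses the PSU_ALERT# handling above as event-handler pseudologic. The GPIO and IPMI helpers are hypothetical placeholders standing in for real BMC firmware services; this is not an API defined by the specification.

```python
# Illustrative BMC-side handling of PSU_ALERT#, mirroring the workflow above.
# gpio_write / send_default_power_cap / clear_power_cap are hypothetical
# helpers standing in for the real BMC firmware services.
class PowerCapController:
    def __init__(self, gpio_write, send_default_power_cap, clear_power_cap):
        self.gpio_write = gpio_write
        self.send_cap = send_default_power_cap
        self.clear_cap = clear_power_cap

    def on_psu_alert(self):
        """PSU_ALERT# asserted: PROCHOT#/MEMHOT# are already forced in hardware."""
        self.send_cap()                          # default power cap over IPMI (SML0)
        self.gpio_write("PSU_ALERT_EN", 0)       # deassert ENABLE: mask further alerts

    def on_cap_removed(self):
        """Default power cap removed: re-arm the protection."""
        self.clear_cap()
        self.gpio_write("PROCHOT_RESET", 1)      # release the PROCHOT# latch/switch
        self.gpio_write("PSU_ALERT_EN", 1)       # unmask PSU_ALERT# monitoring
```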
[Figure: Power capping block diagram – PSU SMALRT# drives PSU_ALERT#, which forces PROCHOT#/MEMHOT# to CPU1 and SMB_ALERT# (PCH GPIO31); the BMC (GPIOF6/GPIOF7/GPIOH6) provides the ENABLE and FORCE controls; the Intel ME connects over SMLINK, and the Chassis Manager connects over IPMI and UARTs.]
Over-current protection is responsible for detecting a current level that indicates a catastrophic
failure of the blade. In this event, the HSC should disable 12V to the blade, typically by disabling
the HSC's input FETs. The recommended current threshold for over-current protection is 50A.
Over-current monitoring is responsible for detecting a current level that is higher than the limits
achievable by worst-case applications. In this event, the blade typically throttles CPU performance
using the PROCHOT# input. The current threshold should be set high enough, or with a slow enough
filter response, to prevent throttling during performance or stress testing with a worst-case
configuration. The recommended current threshold for over-current monitoring is 45A.
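The split between hard protection (50A, power removed by the HSC) and soft monitoring (45A, CPU throttling behind a slow filter) can be summarized with a short sketch. The one-second averaging window is an illustrative assumption; the text only requires a filter slow enough not to trip during worst-case stress testing.

```python
# Illustrative: over-current protection (trip) vs. over-current monitoring (throttle).
# The averaging window is an example of a "slow filter", not a specified value.
from collections import deque

OCP_TRIP_A = 50.0      # catastrophic failure: HSC removes 12V
OCM_THROTTLE_A = 45.0  # sustained excursion: assert PROCHOT#

class CurrentMonitor:
    def __init__(self, window_samples: int = 100):   # e.g. 100 x 10ms = 1s filter
        self.samples = deque(maxlen=window_samples)

    def update(self, current_a: float) -> str:
        if current_a >= OCP_TRIP_A:
            return "TRIP"                 # instantaneous: disable the HSC input FETs
        self.samples.append(current_a)
        average = sum(self.samples) / len(self.samples)
        return "THROTTLE" if average >= OCM_THROTTLE_A else "OK"
```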
11 NIC Mezzanine
The V2.0 OCS motherboard will support a 40GbE NIC mezzanine card. The network controller on
the NIC mezzanine will interface to CPU0 through a PCIe x8 channel. 40GbE is supported through
Port 2 of the NIC to the blade connector interface. The 40G port will connect to a QSFP+ connector
on the tray backplane. Figure 22 shows the NIC block diagram. The pinout for the NIC mezzanine is
described in Section 9.3. Mechanical outlines can be found in the Open CloudServer OCS NIC
Mezzanine Specification Version 2.0.
[Figure 22: NIC block diagram – CPU0 connects over PCIe to the 40GbE network controller on the NIC mezzanine (SEAM/SEAF connector); NIC Port 2 lanes L0–L3 route across the blade motherboard (<3.5") and the AirMax VS2 connectors to the tray backplane (<1.6"), then through the 40GbE QSFP+ port and a <3m cable to the network switch; Port 1 lanes are also brought to the blade connector.]
http://opencompute.org 39
12.1 Input Voltage, Power, and Current
Table 24 lists the nominal, maximum, and minimum values for the blade input voltage. The
maximum and minimum voltages include the effects of connector temperature, age, noise/ripple,
and dynamic loading.
The maximum amount of power allowed per blade is defined during system power allocation. The
number of blades in a chassis might be limited by the capacity of the AC power cord or by the
cooling capacity of the deployment. Table 25 lists the input power allocation for a low-power blade.
Low-power blade (1x power connector) | 12.3VDC | 300W | 30A
The blade provides inrush current control through the 12V bus rail; return-side inrush control is not
used. The inrush current rises linearly from 0A to the load current over a 5 millisecond (ms) period.
The blade also provides a way to interrupt current flow within 1 microsecond (μs) of exceeding the
maximum current load.
Maximum capacitance allowed on the 12V input to the blade is 4,000 microfarads (μF). Minimum
capacitance allowed per blade is 300μF.
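The capacitance limits and the 5ms inrush ramp can be related with simple arithmetic. The sketch below estimates the average charging current for the worst-case 4,000μF input capacitance at the 12.3VDC input from Table 25; it is illustrative only, since the HSC controls the actual ramp on load current.

```python
# Illustrative arithmetic relating the capacitance limits to the 5ms inrush ramp.
# The HSC actually controls the ramp; this only shows the scale of the numbers.
C_MAX = 4000e-6        # F, maximum allowed input capacitance
C_MIN = 300e-6         # F, minimum allowed input capacitance
V_IN = 12.3            # V, blade input voltage from Table 25
T_RAMP = 5e-3          # s, inrush ramp time

charge_max = C_MAX * V_IN                    # coulombs to charge worst-case input caps
avg_charge_current = charge_max / T_RAMP     # average current if charged over the ramp
print(f"Charge: {charge_max*1000:.1f} mC, average charging current: {avg_charge_current:.1f} A")
# ~49.2 mC and ~9.8 A for 4,000 uF; ~0.7 A for the 300 uF minimum
```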
- Mounting holes: Figure 23 provides guidelines for mounting holes; actual mounting hole locations may vary to meet PCB implementation requirements.
- NIC mezzanine keepout: Figure 23 does not include component height restrictions for the NIC mezzanine. For information concerning the component height restrictions, see the Open CloudServer NIC Mezzanine Specification Version 2.0.
- RAID/ROC card: Figure 23 shows the component height restrictions for supporting PCIe cards, such as the RAID/RAID on Chip (ROC). Note that the RAID card can include a local Supercap or battery backup solution; mounting for this should be accounted for by the mechanical design.
http://opencompute.org 41
Figure 23: Mechanical control outline
This specification assumes a coplanar power/network distribution PCB; if another structure is used,
the connectors need to be form, fit, and functionally compatible with this specification. Note that if
necessary, an electromagnetic interference (EMI) enclosure can be built to fit within the volumetric
restrictions.
Each rack unit (or level) within the chassis enclosure can accommodate either two half-width blades
inserted next to each other or a single full-width blade. Figure 25 shows two blades on a tray.
http://opencompute.org 43
Figure 26 shows the dimensions of the blade-mounting envelope, which holds the blade on the
tray.
Note that the only serviceable components in the blade are the hard drives, which can be replaced
without tools. All other repairs to the blade are made in a bench-top environment. Components
such as heat sinks, hard drives, and motherboards can be attached so that removal requires a tool,
if this option is less expensive.
The blade must be mechanically stiff enough to support its own weight during shipping and
handling. The side walls of the blade should be made as tall as possible within the given envelope to
maximize the stiffness of the blade structure.
The blade shall contain a dimple on the side of the front surface to force the blades to fit tightly in
the tray and prevent excessive bowing in the trays during a shock. Figure 27 shows an example.
Latches, as well as thumb screws and other components used to lock, unlock, or remove a
subassembly from the chassis, are colored blue (Pantone code 285 C blue) to make them easy to
identify. The locking feature used to secure the blade latch to the front of the blade must provide
locking with minimal hand motion. A screw type lock with more than a half-turn rotation is not
permitted.
http://opencompute.org 45
If the identity and function of a latching feature for a particular field replaceable unit (FRU) is clear,
coloring might not be required. For example, a lock/unlock indicator is sufficient for a blade lever.
Because a single tray design without a center rail is used for both half-width and full-width blades,
guides and alignment features are located in both the front and back of the enclosure. These
features are especially important when inserting a half-width blade into an empty tray.
The blade enclosure interfaces with the side walls of the tray and the guide pin in the tray to make
sure the blade is aligned within the ±3.50mm (±0.138”) tolerance necessary to engage the
connector. The guide pin in the tray aligns the half-width blades with the correct side of the tray.
The blade sheet metal protects the blade connectors from damage by the guide pin.
Figure 28 shows the blade guiding and latching features associated with the tray. Blade guide pins
slide into slots on the front of the tray for alignment. A latch attached to the blade fits into a notch
in the tray to secure the blade in place.
Figure 29 and Figure 30 show examples of a front-blade guide and latch and of a rear-blade guide.
Note that a lever is included at the front of the blade to provide additional guidance when the
blade is locking into the tray, and to help with the force required to install and remove the blade
from the tray.
http://opencompute.org 47
Figure 29. Example of a front-blade guide and latch
For EMI containment, an EMI shield is added to the top rear edge of the tray, as shown in Figure 31.
EMI containment will be executed at the blade-assembly level, and EMI certification will be
executed at the chassis level.
When all trays are in place, they are electrically stitched together to prevent leakage of
electromagnetic fields. The blade also has a gasket on the front of the top surface that provides
electrical sealing, as shown in Figure 32. Dimensions for this gasket are shown in Figure 24.
http://opencompute.org 49