FusionServer 2288H V7 Server
User Guide
Issue 08
Date 2024-09-24
Notice
In this document, "xFusion" is used to refer to "xFusion Digital Technologies Co., Ltd." for concise description
and easy understanding, which does not mean that "xFusion" may have any other meaning. Any "xFusion"
mentioned or described hereof may not be understood as any meaning other than "xFusion Digital
Technologies Co., Ltd.", and xFusion Digital Technologies Co., Ltd. shall not bear any liability resulting from
the use of "xFusion".
The purchased products, services and features are stipulated by the contract made between xFusion and
the customer. All or part of the products, services and features described in this document may not be within
the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: https://www.xfusion.com
Purpose
This document describes the appearance, functions, structure, hardware installation,
basic configuration, OS installation, and troubleshooting of FusionServer 2288H V7.
Intended Audience
This document is intended for:
● Enterprise administrators
● Enterprise end users
Symbolic Conventions
The symbols that may be found in this document are defined as follows:
Symbol Description
Change History
Issue Release Date Change Description
04 2024-03-15 Updated: 1.5.1.6 24 x 2.5" Drive NVMe Configurations
03 2024-01-31 Updated: 1.4.1.5 Memory Installation Positions; 1.5.1 Drive Configurations and Drive Numbering
02 2023-11-30 Updated: 1.7.2 PCIe Slots. Added: 1.10.1 LCD Software Environment Introduction; 8.8 Using the LCD
Contents
5 ESD
5.1 ESD Prevention
5.2 Grounding Methods for ESD Prevention
6.9.2.6.7 Front-Drive Backplane SAS 3.0 High-Speed Cabling (Server with a PCIe Plug-in RAID Controller Card)
6.9.2.6.8 GPU Power Cabling
6.9.2.6.9 GPU Module Power Cabling
6.9.2.6.10 GPU Module High-Speed Cabling
6.9.2.7 12 x 2.5" Drive (4 x SATA + 8 x NVMe) Pass-Through Configuration 1
6.9.2.7.1 Left and Right Mounting Ear Cabling
6.9.2.7.2 OCP 3.0 Expansion Cabling
6.9.2.7.3 Intrusion Sensor Cabling
6.9.2.7.4 Fan Board Cabling
6.9.2.7.5 Front-Drive Backplane Power and Indicator Signal Cabling
6.9.2.7.6 Front-Drive Backplane SAS 3.0 High-Speed Cabling (PCH Pass-Through)
6.9.2.7.7 Front-Drive Backplane High-Speed Cabling
6.9.2.8 12 x 2.5" Drive (4 x SAS/SATA + 8 x NVMe) Pass-Through Configuration 2
6.9.2.8.1 Left and Right Mounting Ear Cabling
6.9.2.8.2 OCP 3.0 Expansion Cabling
6.9.2.8.3 Intrusion Sensor Cabling
6.9.2.8.4 Fan Board Cabling
6.9.2.8.5 Front-Drive Backplane Power and Indicator Signal Cabling
6.9.2.8.6 Front-Drive Backplane SAS 3.0 High-Speed Cabling (Server with a PCIe Plug-in RAID Controller Card)
6.9.2.8.7 Front-Drive Backplane High-Speed Cabling
6.9.2.9 12 x 2.5" Drive (4 x SATA + 8 x NVMe) + 4 x GPU Card Configuration 1
6.9.2.9.1 Left and Right Mounting Ear Cabling
6.9.2.9.2 OCP 3.0 Expansion Cabling
6.9.2.9.3 Intrusion Sensor Cabling
6.9.2.9.4 Fan Board Cabling
6.9.2.9.5 Front-Drive Backplane Power and Indicator Signal Cabling
6.9.2.9.6 Front-Drive Backplane SAS 3.0 High-Speed Cabling (PCH Pass-Through)
6.9.2.9.7 Front-Drive Backplane High-Speed Cabling
6.9.2.9.8 GPU Power Cabling
6.9.2.9.9 GPU Module Power Cabling
6.9.2.9.10 GPU Module High-Speed Cabling
6.9.2.10 12 x 2.5" Drive (4 x SAS/SATA + 8 x NVMe) + 4 x GPU Card Configuration 2
6.9.2.10.1 Left and Right Mounting Ear Cabling
6.9.2.10.2 OCP 3.0 Expansion Cabling
6.9.2.10.3 Intrusion Sensor Cabling
6.9.2.10.4 Fan Board Cabling
6.9.2.10.5 Front-Drive Backplane Power and Indicator Signal Cabling
6.9.2.10.6 Front-Drive Backplane SAS 3.0 High-Speed Cabling (Server with a PCIe Plug-in RAID Controller Card)
6.9.2.10.7 Front-Drive Backplane High-Speed Cabling
6.9.2.10.8 GPU Power Cabling
6.9.2.14.10 I/O Module 1 Cabling (2 x 2.5" SAS/SATA Drives) + I/O Module 2 Cabling (2 x 3.5" SAS/SATA Drives)
6.9.2.14.11 I/O Module 3 Cabling (4 x 2.5" NVMe Drives)
6.9.2.15 12 x 3.5" Drive EXP Configuration 1
6.9.2.15.1 Left and Right Mounting Ear Cabling
6.9.2.15.2 OCP 3.0 Expansion Cabling
6.9.2.15.3 Intrusion Sensor Cabling
6.9.2.15.4 Fan Board Cabling
6.9.2.15.5 Front-Drive Backplane Power and Indicator Signal Cabling
6.9.2.15.6 Front-Drive Backplane SAS 3.0 High-Speed Cabling (Server with a PCIe Plug-in RAID Controller Card)
6.9.2.15.7 I/O Module 1 (2 x 2.5" SAS/SATA Drives) Cabling
6.9.2.15.8 I/O Module 2 (2 x 3.5" SAS/SATA Drives) Cabling
6.9.2.15.9 I/O Module 1 (2 x 2.5" SAS/SATA Drives) + I/O Module 2 (2 x 3.5" SAS/SATA Drives) Cabling
6.9.2.15.10 I/O Module 3 Cabling (4 x 2.5" NVMe Drives)
6.9.2.16 24 x 2.5" Drive NVMe Configuration 1 (8 x SATA + 16 x NVMe)
6.9.2.16.1 Left and Right Mounting Ear Cabling
6.9.2.16.2 Intrusion Sensor Cabling
6.9.2.16.3 Fan Board Cabling
6.9.2.16.4 Front-Drive Backplane Power and Indicator Signal Cables
6.9.2.16.5 Front-Drive Backplane SAS 3.0 High-Speed Cabling (PCH Pass-Through)
6.9.2.16.6 Front-Drive Backplane High-Speed Cabling
6.9.2.17 24 x 2.5" Drive NVMe Configuration 2 (8 x SAS/SATA + 16 x NVMe)
6.9.2.17.1 Left and Right Mounting Ear Cabling
6.9.2.17.2 Intrusion Sensor Cabling
6.9.2.17.3 Fan Board Cabling
6.9.2.17.4 Front-Drive Backplane Power and Indicator Signal Cables
6.9.2.17.5 Front-Drive Backplane SAS 3.0 High-Speed Cable (Server with a PCIe RAID Controller Card)
6.9.2.17.6 Front-Drive Backplane High-Speed Cabling
6.9.2.18 24 x 2.5" Drive NVMe Configuration 3
6.9.2.18.1 Left and Right Mounting Ear Cabling
6.9.2.18.2 NC-SI Cabling
6.9.2.18.3 Intrusion Sensor Cabling
6.9.2.18.4 Fan Board Cabling
6.9.2.18.5 Front-Drive Backplane Power and Indicator Signal Cables
6.9.2.18.6 Front-Drive Backplane High-Speed Cables
6.9.2.18.7 I/O Module 3 Cabling (PCH Pass-Through)
6.9.2.19 24 x 2.5" Drive NVMe Configuration 4
6.9.2.19.1 Left and Right Mounting Ear Cabling
6.9.2.19.2 NC-SI Cabling
6.9.2.19.3 Intrusion Sensor Cabling
6.9.2.19.4 Fan Board Cabling
6.9.2.19.5 Front-Drive Backplane Power and Indicator Signal Cables
7 Troubleshooting Guide
8 Common Operations
8.1 Querying the iBMC IP Address
8.2 Logging In to the iBMC WebUI
8.3 Logging In to the Desktop of a Server
8.3.1 Using the Remote Virtual Console
8.3.1.1 iBMC
8.3.2 Logging In to the System Using the Independent Remote Console
8.3.2.1 Windows
8.3.2.2 Ubuntu
8.3.2.3 Mac
8.3.2.4 Red Hat
8.4 Logging In to the Server CLI
8.4.1 Logging In to the CLI Using PuTTY over a Network Port
8.4.2 Logging In to the CLI Using PuTTY over a Serial Port
8.5 Managing VMD
9 More Information
9.1 Obtaining Technical Support
9.2 Maintenance Tool
A Appendix
A.1 Chassis Label Information
A.1.1 Chassis Head Label
A.1.1.1 Nameplate
A.1.1.2 Certificate
A.1.1.3 Quick Access Label
A.1.2 Chassis Internal Label
A.1.3 Chassis Tail Label
A.2 Product SN
A.3 Operating Temperature Limitations
A.4 Nameplate
A.5 RAS Features
A.6 Sensor List
A.7 FAQs About Optical Modules
B Glossary
B.1 A-E
1 Hardware Description
1.1.1 Appearance
● 8 x 2.5" drive configuration
1.1.3 Ports
Port Positions
● 8 x 2.5" drive configuration
3 VGA port - -
Port Description
Note: The number of ports varies depending on server configuration. This table
lists the maximum number of ports in different configurations.
1.2.1 Appearance
● Server with a drive module or PCIe riser module on the rear panel
NOTE
● I/O module 1 and I/O module 2 each can be a PCIe riser module, 2 x 3.5" rear-drive
module, or a module with 2 x 2.5" rear drives and one PCIe riser module.
● I/O module 3 supports a PCIe riser module or 4 x 2.5" rear-drive module.
● For details about the OCP 3.0 NIC, see 1.6.1 OCP 3.0 NIC.
● The figure is for reference only. The actual configuration may vary.
● 4-GPU model
1: Slot 1    2: Slot 4
3: Slot 7    4: Slot 9
5: PSU 2    6: PSU 1
NOTE
● For details about the OCP 3.0 NIC, see 1.6.1 OCP 3.0 NIC.
● The figure is for reference only. The actual configuration may vary.
Indicator Positions
● Server with a drive module or PCIe riser module on the rear panel
5 PSU indicator - -
● 4-GPU model
5 PSU indicator - -
Indicator Description
1.2.3 Ports
Port Positions
● Server with a drive module or PCIe riser module (PRM) on the rear panel
● 4-GPU model
Port Description
1.3 Processors
● The server supports one or two processors.
● If only one processor is required, install it in socket CPU 1.
● Processors of the same model must be used in a server.
● For details about the optional components, consult the local sales representative
or see "Search Parts" in the compatibility list on the technical support website.
1.4 Memory
1 Capacity: 16 GB, 32 GB, 64 GB, 128 GB, or 256 GB
CPU 1 DIMM slots:
Channel A: DIMM000(A) (primary), DIMM001(I)
Channel B: DIMM010(B) (primary), DIMM011(J)
Channel C: DIMM020(C) (primary), DIMM021(K)
Channel D: DIMM030(D) (primary), DIMM031(L)
Channel E: DIMM040(E) (primary), DIMM041(M)
Channel F: DIMM050(F) (primary), DIMM051(N)
Channel G: DIMM060(G) (primary), DIMM061(O)
Channel H: DIMM070(H) (primary), DIMM071(P)
CPU 2 DIMM slots:
Channel A: DIMM100(A) (primary), DIMM101(I)
Channel B: DIMM110(B) (primary), DIMM111(J)
Channel C: DIMM120(C) (primary), DIMM121(K)
Channel D: DIMM130(D) (primary), DIMM131(L)
Channel E: DIMM140(E) (primary), DIMM141(M)
Channel F: DIMM150(F) (primary), DIMM151(N)
Channel G: DIMM160(G) (primary), DIMM161(O)
Channel H: DIMM170(H) (primary), DIMM171(P)
NOTICE
● A server must use DDR5 memory modules with the same part number (P/N code). The memory operating speed is the lower of the following two values:
● Memory speed supported by the CPU
● Maximum operating speed of the memory module
For example, a DIMM rated at 5600 MT/s installed with a CPU that supports at most 4800 MT/s runs at 4800 MT/s.
● DDR5 memory modules of different types (RDIMM and RDIMM-3DS) or specifications (capacity, bit width, rank, and height) cannot be mixed in a server.
● For details about the optional components, consult the local sales representative or see "Search Parts" in the compatibility list on the technical support website.
Parameter Specifications
Maximum number of DDR5 DIMMs in a server (a): 32 / 32 / 16 / 32 / 32 / 32 / 32
● At least one DDR5 memory module must be configured for SPR CPUs (excluding HBM CPUs) and EMR CPUs. An SPR HBM CPU can be configured without memory modules.
● The memory modules configured must be DDR5 RDIMMs.
● The memory modules must be configured with the same number of ranks.
● Install filler memory modules in vacant slots.
Observe the memory module installation rules when configuring memory modules.
For details, see the memory configuration guide on the technical support website.
For details about DDR5 DIMM installation rules of HBM CPUs, see Figure 1-27 and
Figure 1-28.
For details about DDR5 DIMM installation rules of other CPUs, see Figure 1-25 and
Figure 1-26.
NOTE
● 1 processor: When 48 GB DIMMs are configured, only the 8-DIMM configuration and its insertion method are supported. When 96 GB DIMMs are configured, only the 8-DIMM and 16-DIMM configurations and their insertion methods are supported.
● 2 processors: When 48 GB DIMMs are configured, only the 16-DIMM configuration and its insertion method are supported. When 96 GB DIMMs are configured, only the 16-DIMM and 32-DIMM configurations and their insertion methods are supported.
Figure 1-27 DDR5 memory module installation guidelines (one HBM processor)
Figure 1-28 DDR5 memory module installation guidelines (two HBM processors)
● ECC
● Memory Mirroring
● Memory Single Device Data Correction (SDDC)
● Failed DIMM Isolation
● Memory Thermal Throttling
● Command/Address Parity Check and Retry
● Memory Demand/Patrol Scrubbing
● Memory Data Scrambling
1.5 Storage
Drive Configurations
Drive Numbering
NOTICE
The drive numbers identified by the RAID controller card vary depending on the
cabling of the RAID controller card. This section uses the drive numbers identified by
a RAID controller card that adopts the default cabling described in "Internal Cabling"
in the server Maintenance and Service Guide.
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
44 44
45 45
46 46
47 47
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
44 44 -
45 45 -
46 46 -
47 47 -
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
0 0 0Note
1 1 1Note
2 2 2Note
3 3 3Note
4 4 4Note
5 5 5Note
6 6 6Note
7 7 7Note
Note: If the slot is configured with a SAS/SATA drive, the RAID controller card
can manage the drive and allocate a number to the drive.
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
Drive Configurations
For details about the optional components, consult the local sales representative or
see "Search Parts" in the compatibility list on the technical support website.
Drive Numbering
NOTICE
The drive numbers identified by the RAID controller card vary depending on the
cabling of the RAID controller card. This section uses the drive numbers identified by
a RAID controller card that adopts the default cabling described in "Internal Cabling"
in the server Maintenance and Service Guide.
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
11 11
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4Note
5 5 5Note
6 6 6Note
7 7 7Note
8 8 -
9 9 -
10 10 -
11 11 -
Note: If the slot is configured with a SAS/SATA drive, the RAID controller card
can manage the drive and allocate a number to the drive.
Drive Configurations
● a: I/O module 1 (2 x 2.5") is configured with rear 2 x 2.5" drives and a PCIe riser
module.
● b: NVMe drives are supported when CPU 2 is configured. A single-CPU server
does not support NVMe drives.
● For details about the optional components, consult the local sales
representative or see "Search Parts" in the compatibility list on the technical
support website.
Drive Numbering
NOTICE
The drive numbers identified by the RAID controller card vary depending on the
cabling of the RAID controller card. This section uses the drive numbers identified by
a RAID controller card that adopts the default cabling described in "Internal Cabling"
in the server Maintenance and Service Guide.
Figure 1-37 Drive numbering (I/O module 1 configured with 2.5" drives)
Figure 1-38 Drive numbering (I/O module 1 configured with 3.5" drives)
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
11 11
40 40
41 41
44 44
45 45
46 46
47 47
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
8 8 8
9 9 9
10 10 10
11 11 11
40 40 12
41 41 13
42 42 14
43 43 15
44 44 -
45 45 -
46 46 -
47 47 -
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
11 11
40 40
41 41
44 44
45 45
46 46
47 47
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
8 8 8Note
9 9 9Note
10 10 10Note
11 11 11Note
40 40 12
41 41 13
42 42 14
43 43 15
44 44 -
45 45 -
46 46 -
47 47 -
Note: If the slot is configured with a SAS/SATA drive, the RAID controller card
can manage the drive and allocate a number to the drive.
Drive Configurations
● a: I/O module 1 (2 x 2.5") is configured with rear 2 x 2.5" drives and a PCIe riser
module.
● b: NVMe drives are supported when CPU 2 is configured. A single-CPU server
does not support NVMe drives.
● For details about the optional components, consult the local sales
representative or see "Search Parts" in the compatibility list on the technical
support website.
Drive Numbering
NOTICE
The drive numbers identified by the RAID controller card vary depending on the
cabling of the RAID controller card. This section uses the drive numbers identified by
a RAID controller card that adopts the default cabling described in "Internal Cabling"
in the server Maintenance and Service Guide.
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
8 8 8
9 9 9
10 10 10
11 11 11
40 40 12
41 41 13
42 42 14
43 43 15
44 44 -
45 45 -
46 46 -
47 47 -
Drive Configurations
Drive Numbering
NOTICE
The drive numbers identified by the RAID controller card vary depending on the
cabling of the RAID controller card. This section uses the drive numbers identified by
a RAID controller card that adopts the default cabling described in "Internal Cabling"
in the server Maintenance and Service Guide.
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
8 8 0
9 9 1
10 10 2
11 11 3
12 12 4
13 13 5
14 14 6
15 15 7
16 16 0
17 17 1
18 18 2
19 19 3
20 20 4
21 21 5
22 22 6
23 23 7
44 44 -
45 45 -
46 46 -
47 47 -
Drive Configurations
For details about the optional components, consult the local sales representative or
see "Search Parts" in the compatibility list on the technical support website.
Drive Numbering
NOTICE
The drive numbers identified by the RAID controller card vary depending on the
cabling of the RAID controller card. This section uses the drive numbers identified by
a RAID controller card that adopts the default cabling described in "Internal Cabling"
in the server Maintenance and Service Guide.
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
11 11
12 12
13 13
14 14
15 15
16 16
17 17
18 18
19 19
20 20
21 21
22 22
23 23
0 0 0
1 1 1
2 2 2
3 3 3
4 4 -
5 5 -
6 6 -
7 7 -
8 8 -
9 9 -
10 10 -
11 11 -
12 12 4
13 13 5
14 14 6
15 15 7
16 16 -
17 17 -
18 18 -
19 19 -
20 20 -
21 21 -
22 22 -
23 23 -
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 10
11 11
12 12
13 13
14 14
15 15
16 16
17 17
18 18
19 19
20 20
21 21
22 22
23 23
44 44
45 45
46 46
47 47
0 0 -
1 1 -
2 2 -
3 3 -
4 4 -
5 5 -
6 6 -
7 7 -
8 8 -
9 9 -
10 10 -
11 11 -
12 12 -
13 13 -
14 14 -
15 15 -
16 16 -
17 17 -
18 18 -
19 19 -
20 20 -
21 21 -
22 22 -
23 23 -
44 44 0
45 45 1
46 46 2
47 47 3
Drive Configurations
● a: I/O module (2 x 2.5") is configured with rear 2 x 2.5" drives and a PCIe riser
module.
● b: NVMe drives are supported when CPU 2 is configured. A single-CPU server
does not support NVMe drives.
● For details about the optional components, consult the local sales
representative or see "Search Parts" in the compatibility list on the technical
support website.
Drive Numbering
NOTICE
The drive numbers identified by the RAID controller card vary depending on the
cabling of the RAID controller card. This section uses the drive numbers identified by
a RAID controller card that adopts the default cabling described in "Internal Cabling"
in the server Maintenance and Service Guide.
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
8 8 8
9 9 9
10 10 10
11 11 11
12 12 12
13 13 13
14 14 14
15 15 15
16 16 16
17 17 17
18 18 18
19 19 19
20 20 20
21 21 21
22 22 22
23 23 23
24 24 24
40 40 25
41 41 26
42 42 27
43 43 28
44 44 -
45 45 -
46 46 -
47 47 -
● If the VMD function is enabled and the latest VMD driver is installed, the NVMe drives support surprise hot swap (see the example check after this list).
● If the VMD function is disabled, the NVMe drives support only orderly hot swap.
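A minimal sketch of such a check on a Linux OS (the module and controller names shown are typical for Intel VMD and are assumptions; they may differ depending on the OS and driver version):
# Check whether the VMD kernel module is loaded
lsmod | grep -i vmd
# Check whether the Volume Management Device controller is visible on the PCI bus
lspci | grep -i "volume management"
If both commands return output, VMD is enabled and the driver is loaded; otherwise, treat the NVMe drives as supporting only orderly hot swap.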
1.6 Network
● For details about the optional components, consult the local sales representative
or see "Search Parts" in the compatibility list on the technical support website.
● When IB cards are used to build an IB network, ensure that the IPoIB modes of
the IB cards at both ends of the network connection are the same. For details,
contact technical support.
– I/O module 1 provides slots 1, 2, and 3. If the module with 2 x 2.5" drives
and one PCIe riser card is used, slots 1 and 2 are unavailable.
– I/O module 2 provides slots 4, 5, and 6. If the module with 2 x 2.5" drives
and one PCIe riser card is used, slots 4 and 5 are unavailable.
– I/O module 3 provides slots 7 and 8.
● 4-GPU model
PCIe Riser Cards (Applicable to the Server with a Drive Module or a PCIe
Riser Module on the Rear Panel)
● PCIe riser card 1 of I/O module 1/2
– Provides PCIe slots 1, 2, and 3 when installed in I/O module 1.
– Provides PCIe slots 4, 5, and 6 when installed in I/O module 2.
Server with Drive Modules or PCIe Riser Modules on the Rear Panel
● a: PCIe 5.0 refers to fifth-generation PCIe, and x16 refers to the physical slot width.
● b: The x16 in brackets indicates that the link bandwidth is x16.
● c: The default link bandwidth of FlexIO card 1 is x8. The link bandwidth can be
extended to x16 using cables.
● d: FlexIO card 1 supports the Socket Direct function when it is connected to the
two CPUs through high-speed cables.
● e: The default link bandwidth of FlexIO card 2 is x8. The link bandwidth can be extended to x16 using cables. When FlexIO card 1 uses the Socket Direct function, FlexIO card 2 supports only x8.
● The PCIe x16 slots are compatible with PCIe x16, PCIe x8, PCIe x4, and PCIe
x1 cards. The bandwidth of the PCIe slot cannot be less than that of the
inserted PCIe card.
● The full-height full-length (FHFL) PCIe slots are compatible with FHFL PCIe
cards, full-height half-length (FHHL) PCIe cards, and half-height half-length
(HHHL) PCIe cards.
● The FHHL PCIe slots are compatible with FHHL PCIe cards and HHHL PCIe
cards.
● The maximum power supply of each PCIe slot is 75 W.
● SOL serial port information: If serial port information has been collected, search for the keyword RootBusBDF or DeviceBDF in the systemcom.tar file to query the B/D/F information of the server.
● The following describes how to obtain the B/D/F information on different OSs (see also the example after this list):
– Linux OS: You can obtain the B/D/F information of the server using the lspci -vvv command.
NOTE
If the OS does not support the lspci command by default, obtain the pci-utils package from the yum source and install it to make the OS support the command.
– Windows OS: After installing the pci-utils package, run the lspci command to obtain the B/D/F information of the server.
– VMware OS: The lspci command is supported by default. You can directly obtain the B/D/F information of the server using the lspci command.
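For reference, a minimal Linux example (the package name and the grep filter are illustrative and may differ by distribution):
# Install the lspci tool if it is not available (the package is commonly named pciutils)
yum install -y pciutils
# List all PCI devices; each line starts with the B/D/F, for example 17:00.0
lspci
# Show full details for every device
lspci -vvv
# Narrow the list to a specific device type, for example Ethernet controllers
lspci | grep -i ethernet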
1.8 PSUs
● Supports one or two PSUs.
● Supports AC or DC PSUs.
● Supports hot swap.
● When two PSUs are configured, 1+1 redundancy is supported.
● PSUs of the same P/N code must be used in a server.
● Short-circuit protection is provided, and bipolar fuses are provided for PSUs that
support dual live wire input.
● If a DC power supply is used, purchase a DC power supply that meets the requirements of the applicable safety standards or that has passed CCC certification.
● For details about the optional components, consult the local sales representative
or see "Search Parts" in the compatibility list on the technical support website.
● Supports N+1 redundancy. The server runs properly when one fan fails.
● Supports intelligent fan speed adjustment.
● Fan modules of the same part number (P/N code) must be used in a server.
1.10 LCD
NOTE
Functions
The LCD displays the installation status and running status of server components
and enables users to set the IP address of the iBMC management network port on
the server.
The LCD and the server iBMC form an LCD subsystem. The LCD directly obtains
device information from the iBMC. The LCD subsystem does not store device data.
UI
Tab Functions
Icon Description
Next screen
Back
Previous screen
Screenshot
Parameter Description
1.10.1.3 Status
Screenshot
Parameter Description
1.10.1.4 Monitor
Screenshot
Parameter Description
1.10.1.5 Info.
Item Description
Item Description
1.10.1.6 Setting
Screenshot
Parameter Description
IPv4
Parameter Parameter
IPv6
Parameter Parameter
1.11 Boards
1.11.1 Mainboard
Figure 1-76 Mainboard
2 Power connector (HDD_POWER/J14) -
6 Power connector (HDD_POWER/J21) -
6 Backplane signal connector (HDD BP/J19) -
7 Power connector (HDD_POWER/J21) -
5 Power connector (HDD_POWER/J21) -
12 Power connector (HDD_POWER/J41) -
4 Power connector (HDD_POWER/J1) -
Rear-Drive Backplanes
● 2 x 2.5" drive backplane
7 UBC connector 2 (UBC2/J4) - -
2 Product Specifications
Category Specifications
Air volume 120 cubic feet per minute (CFM) to 320 CFM
NOTE
SSDs and HDDs (including NL-SAS, SAS, and SATA) cannot be preserved for a long time in
the power-off state. Data may be lost or faults may occur if the preservation duration exceeds
the specified maximum duration. When drives are preserved under the storage temperature
and humidity specified in the preceding table, the following preservation duration is
recommended:
● Maximum preservation duration of SSDs:
● 12 months in power-off state without data stored
● 3 months in power-off state with data stored
● Maximum preservation duration of HDDs:
● 6 months in unpacked/packed and powered-off state
● The maximum preservation duration is determined according to the preservation
specifications provided by drive vendors. For details, see the manuals provided by drive
vendors.
NOTE
● See Figure 2-1 for methods in measuring physical
dimensions of the chassis.
● The measuring method for chassis with 3.5" drives and that
for chassis with 2.5" drives are the same. The chassis with
3.5" drives is used as an example.
Category Description
For details about the OS and hardware, see the compatibility list on the technical
support website.
NOTICE
● If incompatible components are used, the device may be abnormal. Such a fault is
beyond the scope of technical support and warranty.
● Server performance is closely related to the application software, basic middleware, and hardware. Slight differences in the application software, basic middleware, or hardware may cause performance inconsistency between the application layer and the test software layer.
● If the customer has requirements on the performance of specific application
software, contact technical support to apply for proof of concept (POC) tests
in the pre-sales phase to determine detailed software and hardware
configurations.
● If the customer has requirements on hardware performance consistency, specify the configuration requirements (for example, specific drive models, RAID controller cards, or firmware versions) in the pre-sales phase.
4 Safety Instructions
4.1 Security
4.2 Maintenance and Warranty
4.1 Security
General Statement
● Comply with local laws and regulations when installing equipment. These safety
instructions are only a supplement.
● Observe the safety instructions that accompany all "DANGER", "WARNING",
and "CAUTION" symbols in this document.
● Observe all safety instructions provided on device labels.
● Operators of special types of work (such as electricians and electric forklift operators) must be certified or authorized by the local government or authority.
WARNING
Human Safety
● This device is not suitable for use in places where children may be present.
● Only certified or authorized personnel are allowed to install equipment.
● Discontinue any dangerous operations and take protective measures. Report
anything that could cause personal injury or device damage to a project
supervisor.
● Do not move devices or install cabinets and power cables in hazardous weather
conditions.
● Do not carry weight that exceeds the maximum load per person allowed by local laws or regulations. Before moving a device, check the maximum device weight and arrange the required personnel.
● Wear clean protective gloves, ESD clothing, a protective hat, and protective
shoes, as shown in Figure 4-1.
● Before touching a device, wear ESD clothing and gloves (or wrist strap), and
remove any conductive objects (such as watches and jewelry). Figure 4-2
shows conductive objects that must be removed before you touch a device.
● Exercise caution when using tools that could cause personal injury.
● If the installation position of a device is higher than the shoulders of the
installation personnel, use a vehicle such as a lift to facilitate installation. Prevent
the device from falling down and causing personal injury or damage to the
device.
● The equipment is powered by high-voltage power sources. Direct or indirect
contact (especially through damp objects) with high-voltage power sources may
result in serious injury or death.
● Ground a device before powering it on. Otherwise, high voltage leakage current
may cause personal injury.
● When a ladder is used, ensure that another person holds the ladder steady to
prevent accidents.
● Do not look into optical ports without eye protection when installing, testing, or
replacing optical cables.
Equipment Safety
● Use the recommended power cables at all times.
● Power cables are used only for dedicated servers. Do not use them for other
devices.
● Before operating equipment, wear ESD clothes and gloves to prevent
electrostatic-sensitive devices from being damaged by ESD.
● When moving a device, hold the bottom of the device. Do not hold the handles of
the installed modules, such as the PSUs, fan modules, drives, and the
mainboard. Handle the equipment with care.
● Exercise caution when using tools that could cause damage to devices.
● Connect the primary and secondary power cables to different power distribution
units (PDUs) to ensure reliable system operation.
● Ground a device before powering it on. Otherwise, high voltage leakage current
may cause device damage.
Transportation Precautions
Improper transportation may damage equipment. Contact the manufacturer for
precautions before attempting transportation.
Transportation precautions include but are not limited to:
● The logistics company engaged to transport the device must be reliable and comply with international standards for transporting electronics.
● For details about components supported by the server, see "Search Parts" in the compatibility list on the technical support website.
● Power off all devices before transportation.
CAUTION
The maximum weight allowed to be carried by a single person is subject to local laws
or regulations. The markings on the device and the descriptions in the documentation
are for reference only.
Table 4-1 lists the maximum weight one person is permitted to carry as stipulated by
a number of organizations.
For more information about security instructions, see the server Safety Information.
For details about warranty, visit the Technical Support Website > Service Support
Center > Warranty.
5 ESD
● Die-casting pliers
Used to replace LC optical cables, pluggable optical modules and unshielded
network cables
Item Description
NOTE
The server is 2U high and stackable. If the cabinet space is sufficient, it is recommended that 1U of distance (1U = 44.45 mm) be left between two servers.
If the packing case is soaked or deformed, or the seals or pressure-sensitive adhesive tapes
are not intact, contact technical support to obtain the Cargo Problem Feedback Form.
Step 2 Use a paper cutter to open the pressure-sensitive adhesive tape on the packing case
and open the packing case.
CAUTION
Exercise caution with the box cutter to avoid injury to your hands or damage to
devices.
No. Description
----End
Procedure
Step 1 Install optional parts.
For details, see the Server Maintenance and Service Guide.
----End
Procedure
Step 1 Install floating nuts.
1. Determine the installation positions of the floating nuts according to the cabinet
device installation plan.
NOTE
3. Use a floating nut hook to pull the upper end of the floating nut, and fasten it to
the upper edge of the square hole.
----End
Procedure
Step 1 Place the rail horizontally in the planned position. Stretch the rail on both sides of the
cabinet based on the cabinet length, keeping it in contact with the mounting bar in the
cabinet, and hook the rail. See (1) in Figure 6-7.
NOTE
The distance between the upper and lower boundaries of the three holes in each mounting bar
for the guide rail must be within 1 U.
Step 2 Secure the guide rail using the plugs delivered with the server.
● Insert a plug into the first upper square hole at the front of the rail (the second square hole is used to secure the captive screws of the two mounting ears of the server). See (2) in Figure 6-7.
● Insert a plug into the second square hole at the rear of the rail. See (3) in Figure 6-7.
Step 3 Use a 2# Phillips screwdriver to install an M6 screw in the first lower square hole at the rear of the rail. See (4) in Figure 6-7.
NOTE
The adjustable L-shaped guide rails can be installed without screws, which meets the requirements of normal server use. Perform this operation to improve the shock resistance and secureness of the server.
----End
The ball bearing rail kits apply to cabinets with a distance of 609 mm to 950 mm
(23.98 in. to 37.40 in.) between the front and rear mounting bars.
Procedure
Step 1 Push the release latch on the front of a rail and pull out the hook. See (1) and (2) in
Figure 6-8.
Step 2 Insert the positioning pin at the rear of the rail into the hole on the rear column of the
cabinet. See (3) in Figure 6-8.
Step 3 Keep the rail horizontal, and push the front end of the rail until it is inserted into the
hole on the front column of the cabinet. See (4) in Figure 6-8.
----End
Procedure
Step 1 Install the server.
CAUTION
At least two people are required to install the device. Otherwise, personal injury or
device damage may occur.
1. At least two people are required to lift the server vertically from both sides, place
it on the guide rails, and push it into the cabinet. See (1) in Figure 6-10.
NOTE
If you press a drive release button by mistake, do not remove the drive; immediately close the drive ejector lever to secure the drive in place.
2. Press the mounting ears on both sides of the server against the mounting bars, open the baffle plates covering the captive screws on the mounting ears, and use a 2# Phillips screwdriver to tighten the captive screws. See (2) in Figure 6-10.
Step 2 Connect external cables as required, such as network cables, VGA cables, and USB
devices.
Step 3 Connect cables to PSUs.
For details, see 6.8.9 Connecting PSU Cables.
Step 4 Power on the server.
----End
Procedure
Step 1 Install the server.
CAUTION
At least two people are required to install the device. Otherwise, personal injury or
device damage may occur.
2. At least two people are required to lift the server vertically from both sides, align
the two guide pins at the rear of the server with the anchoring holes on the inner
rails, and place the rear part of the server vertically on the inner rails. Then push
the rear part of the server horizontally as far as it will go.
3. Align the six guide pins at the front of the server with the anchoring holes on the inner rails, and place the server down vertically to ensure that the server is secured to the inner rails.
4. Unlock the release latches on both sides of the inner rails and push the server
as far as it will go. See (1) and (2) in Figure 6-15.
NOTE
If you press a drive release button by mistake, do not remove the drive; immediately close the drive ejector lever to secure the drive in place.
Figure 6-15 Pushing a server into the ball bearing rail kits
5. Open the baffle plate covering the captive screws on the mounting ears, and
tighten the captive screws using the manual screwdriver.
Step 3 Connect external cables as required, such as network cables, VGA cables, and USB
devices.
NOTICE
Do not block the air exhaust vents on the rear panel of the server when you lay out
cables. Otherwise, heat dissipation of the server may be affected.
● Lay out and bind cables of different types (such as power and signal cables)
separately. Cables of the same type must be in the same direction.
– Route cables near each other in crossover mode.
– Ensure that the distance between power cables and signal cables is greater
than or equal to 30 mm (1.18 in.) when you lay out the cables in parallel.
● If you cannot identify cables according to the cable labels, attach an engineering
label to each cable.
● Protect cables from burrs, heat sinks, and active accessories, which may
damage the insulation layers of cables.
● Ensure that the length of cable ties for binding cables is appropriate. Do not
connect two or more cable ties together for binding cables. After binding cables
properly, trim off the excess lengths of the cable ties and ensure that the cuts are
neat and smooth.
● Ensure that cables are properly routed, supported, or fixed within the cable
troughs inside the cabinet to prevent loose connections and cable damage.
● Coil any surplus lengths of cables and bind them to proper positions inside the
cabinet.
● Route cables straight and bind them neatly. The bending radius of a cable varies depending on the position where the cable is bent.
– If you need to bend a cable in its middle, the bending radius must be at least
twice the diameter of the cable.
– If you need to bend a cable at the output terminal of a connector, the
bending radius must be at least five times the cable diameter, and the cable
must be bound before bending.
● Do not use cable ties at a place where the cables are bent. Otherwise, the
cables may break.
Common Methods
The methods of routing cables inside a cabinet are described as follows:
● Choose overhead or underfloor cabling for power cables based on equipment
room conditions (such as the AC power distribution frame, surge protector, and
terminal blocks).
● Choose overhead or underfloor cabling for service data cables (for example,
signal cables) based on equipment room conditions.
● Place the connectors of all service data cables at the bottom of the cabinet so that they are not easily touched.
Procedure
Step 1 Connect the USB connector on one end of the USB-to-PS/2 cable to a USB port on
the front or rear panel of the server.
Step 2 Connect the PS/2 connectors on the other end of the USB-to-PS/2 cable to a
keyboard and mouse respectively.
Step 3 Connect the DB15 connector of the VGA cable to the VGA port on the front or rear
panel of the server and tighten the two screws.
Step 4 Connect the other end of the VGA cable to the VGA port on the monitor and tighten
the two screws.
----End
Procedure
Step 1 Check the model of the new network cable.
● A shielded network cable is recommended.
NOTE
If an unshielded network cable is used, the system cannot be protected against ESD. As a result, the server may work abnormally.
● The new and old cables must be of the same model or be compatible.
Step 2 Number the new network cable.
● The number of the new network cable must be the same as that of the old one.
● Use the same type of labels for the network cable.
– Record the name and number of the local device to be connected on one
side of the label, and those of the peer device on the other side.
– Attach a label 2 cm (0.79 in.) away from the end of a network cable.
Step 3 Route the new network cable.
● Route the new network cable in the same way (underfloor or overhead) as the
old one. Underfloor cabling is recommended because it is tidy and easy.
● Route network cables in the cabinet based on the installation requirements. You
are advised to arrange new cables in the same way as existing cables. Ensure
that cables are routed neatly without damage to the cable sheath.
● Separate network cables from power cables and signal cables when routing the
cables.
● Ensure that the bend radius of the network cables is at least 4 cm (1.57 in.).
Check that the cable insulation layer is intact.
● Ensure that cables are routed for easy maintenance and capacity expansion.
● Bind network cables with cable ties. Ensure that network cables are bound
closely, neatly, and straight, and cable ties are in even distance and fastened
properly.
Step 4 Remove the network cable to be replaced from the network interface card (NIC) or board in the cabinet.
Step 5 Connect the new network cable to the NIC or board in the cabinet.
● Connect the new network cable to the same port as the old one.
● Before connecting a network cable to a network port, check that the network
cable connector is intact and the pins have no sundries or deformation.
● Connect the network cable to the network port securely.
Step 6 Connect the new network cable to the peer network port.
● Connect the other end of the network cable to the peer network port based on
network planning.
● Connect the new network cable to the same port as the old one.
● Connect the network cable to the network port securely.
Step 7 Power on the device. Check whether the communication with the peer device is normal by running the ping command, as shown in the example below.
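For example, a minimal connectivity check (the IP address below is a placeholder for the planned address of the peer device):
# Send four echo requests to the peer device (on Windows, use ping -n 4 instead of -c 4)
ping -c 4 192.168.2.10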
----End
Procedure
Step 1 Determine the type of the new cable.
▪ Connect the new optical cable to the same port as the old one.
i. Insert the optical module into the optical port. See (1) in Figure 6-20.
ii. Close the latch on the optical module to secure it. See (2) in Figure
6-20.
iii. Insert the optical cable into the optical module. See (3) in Figure 6-20.
NOTICE
Do not directly pull the latch out to remove the SFP+ cable.
● If yes, go to Step 7.
● If no, go to Step 6.
Step 6 Check whether the cable is intact and the connector is securely connected.
● If yes, contact technical support.
● If no, replace the cable or connect the connector securely, and go to Step 5.
Step 7 Bind the new optical cable.
Bind the new optical cable in the same way as the existing optical cables. You can
also remove all existing cable ties and bind all optical cables again if necessary.
----End
Figure 6-23 Removing a cable (IB NIC with two 56 Gbit/s ports as an example)
Figure 6-24 Connecting a cable (IB NIC with two 56 Gbit/s ports as an example)
Power on the device. If the LOM indicator is green, the cable is properly connected.
----End
Procedure
Step 1 Connect the USB Type-C connector of the cable to the USB Type-C port on the
server panel.
----End
Procedure
Step 1 Connect the USB device to a USB port of the server.
----End
Procedure
Step 1 Connect the serial cable to the serial port.
----End
NOTICE
Procedure
Step 1 Take the component out of its ESD bag.
Step 2 Connect one end of the power cable to the power socket on the PSU of the server.
Step 4 Connect the other end of the power cable to the AC PDU in the cabinet.
The AC PDU is fastened in the rear of the cabinet. Connect the power cable to the
nearest jack on the AC PDU.
Step 5 Bind the power cable to the cable trough using cable ties.
----End
NOTICE
Procedure
Step 1 Take the component out of its ESD bag.
Step 2 Connect the cables of the PSU.
1200 W (power cable P/N code: 02232SVN)
1. Insert the cord end terminal at one end of each power cable to the
corresponding wiring terminal on the PSU until the cord end terminal clicks into
position. See Figure 6-31.
a. Connect the cord end terminal of the negative power cable (blue) to the
NEG(-) wiring terminal on the PSU.
b. Connect the cord end terminal of the positive power cable (black) to the
RTN(+) wiring terminal on the PSU.
2. Insert the cord end terminal at the other end of each power cable into the corresponding wiring terminal on the PDU, and tighten the screw. See (1) and (2) in Figure 6-32.
a. Connect the cord end terminal of the negative power cable (blue) to the
PDU(–) wiring terminal.
b. Connect the cord end terminal of the positive power cable (black) to the
PDU(+) wiring terminal.
Step 3 Connect the other end of the power cable to the DC PDU in the cabinet.
The DC PDU is fastened in the rear of the cabinet. Connect the power cable to the
nearest jack on the DC PDU.
Step 4 Bind the power cable to the cable trough using cable ties.
----End
CAUTION
Before verifying cable connections, check that the power is cut off. Otherwise, any
incorrect connection or loose connection may cause human injury or device damage.
Power cable: Power cables are correctly connected to the rear of the chassis.
Ground cable: The server node does not provide a separate ground port.
● In an AC or HVDC environment, the power cables of the AC PSUs are grounded. Ensure that the power cables are in good contact.
● In a DC environment, the ground terminals of the DC PSUs must be grounded. Ensure that the ground cables are in good contact.
LCD Cabling
NC-SI Cabling
1 × NC-SI cable (P/N: 04080584-005): connects the PCIe NIC to the NC-SI connector (J31) of the mainboard
1 × NC-SI cable (P/N: 04080584): connects the PCIe NIC to the NC-SI connector (J31) of the mainboard
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
LCD Cabling
NC-SI Cabling
1 × NC-SI cable (P/N: 04080584-005): connects the PCIe NIC to the NC-SI connector (J31) of the mainboard
1 × NC-SI cable (P/N: 04080584): connects the PCIe NIC to the NC-SI connector (J31) of the mainboard
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
6.9.2.2.8 Front-Drive Backplane SAS 3.0 High-Speed Cable (Server with a PCIe RAID
Controller Card)
NOTICE
LCD Cabling
NC-SI Cabling
1 × NC-SI cable (P/N: 04080584-005): connects the PCIe NIC to the NC-SI connector (NCSI CONN/J31) on the mainboard
1 × NC-SI cable (P/N: 04080584): connects the PCIe NIC to the NC-SI connector (NCSI CONN/J31) on the mainboard
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
LCD Cabling
NC-SI Cabling
1 × NC-SI cable (P/N: 04080584-005): connects the PCIe NIC to the NC-SI connector (NCSI CONN/J31) on the mainboard
1 × NC-SI cable (P/N: 04080584): connects the PCIe NIC to the NC-SI connector (NCSI CONN/J31) on the mainboard
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
6.9.2.4.8 Front-Drive Backplane SAS 3.0 High-Speed Cabling (Server with a PCIe
Plug-in RAID Controller Card)
NOTICE
LCD Cabling
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
LCD Cabling
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
6.9.2.6.7 Front-Drive Backplane SAS 3.0 High-Speed Cabling (Server with a PCIe
Plug-in RAID Controller Card)
NOTICE
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
6.9.2.8.6 Front-Drive Backplane SAS 3.0 High-Speed Cabling (Server with a PCIe
Plug-in RAID Controller Card)
NOTICE
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
6.9.2.10.6 Front-Drive Backplane SAS 3.0 High-Speed Cabling (Server with a PCIe
Plug-in RAID Controller Card)
NOTICE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
6.9.2.12.6 Front-Drive Backplane SAS 3.0 High-Speed Cable (Server with a PCIe
RAID Controller Card)
NOTICE
6.9.2.12.8 I/O Module 1 Cabling (2 x 2.5" SAS/SATA Drives) + I/O Module 2 Cabling (2
x 3.5" SAS/SATA Drives)
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
2 × power cable (P/N: 0415Y053): connects the power connector (HDD PWR/J21) of the rear-drive backplane to the rear I/O module 3 PSU connector (IO3 PWR/J6089) on the mainboard (with the 4 x 2.5" drive module, as a spare part).
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
6.9.2.14.6 Front-Drive Backplane SAS 3.0 High-Speed Cable (Server with a PCIe
RAID Controller Card)
NOTICE
6.9.2.14.10 I/O Module 1 Cabling (2 x 2.5" SAS/SATA Drives) + I/O Module 2 Cabling
(2 x 3.5" SAS/SATA Drives)
NOTE
● In single-CPU configuration, the OCP 3.0 NIC can be installed only in the slot of FlexIO
card 1.
● In dual-CPU configuration, the OCP 3.0 NICs can be installed in the slots of FlexIO card 1
and FlexIO card 2.
● The default operating bandwidth of the slot of FlexIO card 1 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055) to connect the OCP 3.0
NIC 1 UBC connector (J42) to the CPU 1 northbound UBC connector (J64) on the
mainboard. To expand the bandwidth to x8 + x8, use a cable (P/N: 14270055-006) to
connect the OCP 3.0 NIC 1 UBC connector (J42) to the CPU 2 northbound UBC
connector (J53) on the mainboard. In this case, the slot of the FlexIO card 2 cannot
be expanded to x16.
● The default operating bandwidth of the slot of FlexIO card 2 is x8. If you need to
expand the bandwidth to x16, use a cable (P/N: 14270055-003) to connect the OCP
3.0 NIC 2 UBC connector (J6071) to the CPU 2 northbound UBC connector (J53).
6.9.2.15.6 Front-Drive Backplane SAS 3.0 High-Speed Cabling (Server with a PCIe
Plug-in RAID Controller Card)
NOTICE
6.9.2.15.9 I/O Module 1 (2 x 2.5" SAS/SATA Drives) + I/O Module 2 (2 x 3.5" SAS/SATA
Drives) Cabling
6.9.2.17.5 Front-Drive Backplane SAS 3.0 High-Speed Cable (Server with a PCIe
RAID Controller Card)
NOTICE
NC-SI Cabling
1 × NC-SI cable (P/N: 04080584-005): connects the PCIe NIC to the NC-SI connector (NCSI CONN/J31) on the mainboard
1 × NC-SI cable (P/N: 04080584): connects the PCIe NIC to the NC-SI connector (NCSI CONN/J31) on the mainboard
NC-SI Cabling
1 × NC-SI cable (P/N: 04080584-005): connects the PCIe NIC to the NC-SI connector (NCSI CONN/J31) on the mainboard
1 × NC-SI cable (P/N: 04080584): connects the PCIe NIC to the NC-SI connector (NCSI CONN/J31) on the mainboard
6.9.2.19.7 I/O Module 3 Cabling (Server with a PCIe Plug-in RAID Controller Card)
NOTICE
6.9.2.20.5 Front-Drive Backplane High-Speed Cabling (Server with Three PCIe Plug-
in RAID Controller Cards)
NOTICE
The PCIe plug-in RAID controller cards are configured in slots 2, 3, and 6.
NOTICE
6.9.2.21.8 I/O Module 1 (2 x 2.5" SAS/SATA Drives) + I/O Module 2 (2 x 2.5" SAS/SATA
Drives) Cabling
6.10.1 Power-On
NOTICE
● Before powering on a server, ensure that the server is powered off, all cables are
connected correctly, and the power supply voltage meets device requirements.
● During power-on, do not remove or insert server components or cables, such as
drive modules, network cables, and console cables.
● If the power supply of a server is disconnected, wait for at least one minute before
powering it on again.
● If the PSUs are properly installed but are not yet powered on, the server is powered
off.
Connect the external power supply to the PSUs. The server will then be powered on
together with the PSUs.
NOTE
System State Upon Power Supply is set to Power On by default, which indicates that
the server automatically powers on after power is supplied to PSUs. You can log in to the
iBMC WebUI and choose System > Power > Power Control to view and change the
settings.
● If the PSUs are powered on and the server is in standby state (the power
indicator is steady yellow), use any of the following methods to power on the
server:
– Press the power button on the front panel.
For details, see 1.1.2 Indicators and Buttons.
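In addition to the methods above, the server can be powered on out of band through the iBMC, which complies with the standard IPMI 2.0 interface (see 10.1 iBMC). The following is a hedged sketch using the generic ipmitool client from a management PC; the IP address and credentials are the defaults quoted in this guide and are placeholders for your actual settings, and the availability of IPMI over LAN may depend on the iBMC configuration.

import subprocess

# Hedged sketch: power on the server via the iBMC's standard IPMI-over-LAN
# interface using ipmitool. Replace the placeholders with your actual values.
IBMC_IP = "192.168.2.100"      # default iBMC management network port IP
USER = "Administrator"         # default iBMC user name
PASSWORD = "Admin@9000"        # default password; change it after the first login

def ipmi(*args):
    """Run an ipmitool command against the iBMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", IBMC_IP, "-U", USER, "-P", PASSWORD, *args]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

print(ipmi("chassis", "power", "status"))   # for example: "Chassis Power is off"
print(ipmi("chassis", "power", "on"))       # request power-on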
6.10.2 Power-Off
NOTE
● The "power-off" mentioned here is an operation performed to change the server to the
standby state (the power indicator is steady yellow).
● Powering off a server will interrupt all services and programs running on it. Therefore,
before powering off a server, ensure that all services and programs have been stopped or
migrated to other servers.
● After a server is powered off forcibly, wait for more than 10 seconds for the server to power
off completely. Do not power on the server again before it is completely powered off.
● A forced power-off may cause data loss or program damage. Select an appropriate
operation based on your actual situation.
● Connect a keyboard, video, and mouse (KVM) to the server and shut down the
operating system of the server using the KVM.
● When the server is in power-on state, pressing the power button on the server
front panel can power off the server gracefully.
NOTE
If the server OS is running, shut down the OS according to the onscreen instructions.
For details, see 1.1.2 Indicators and Buttons.
● When the server is in power-on state, holding down the power button on the
server front panel for six seconds can power off the server forcibly.
For details, see 1.1.2 Indicators and Buttons.
● Use the iBMC WebUI.
a. Log in to the iBMC WebUI.
For details, see 8.2 Logging In to the iBMC WebUI.
b. Choose System > Power > Power Control.
The Power Control page is displayed.
c. Click Power Off or Forced Power Off.
A confirmation message is displayed.
d. Click OK.
The server starts to power off.
● Use the iBMC CLI.
a. Log in to the iBMC CLI.
For details, see 8.4 Logging In to the Server CLI.
b. Run the following command:
▪ To power off the server forcibly, run the ipmcset -d powerstate -v 2
command (a sketch of issuing this command over SSH follows this procedure).
c. Type y or Y and press Enter.
The server starts to power off.
● Use the remote virtual console.
a. Log in to the remote virtual console.
For details, see 8.3 Logging In to the Desktop of a Server.
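The iBMC CLI power-off command above can also be issued remotely over SSH. The following is a minimal sketch using the Python paramiko library; the IP address and credentials are placeholders based on the defaults in this guide, and whether the iBMC CLI accepts commands through an interactive SSH shell in exactly this way may depend on the iBMC version.

import time
import paramiko

# Hedged sketch: log in to the iBMC CLI over SSH and issue the forced power-off
# command described above. All values below are placeholders.
IBMC_IP = "192.168.2.100"
USER = "Administrator"
PASSWORD = "Admin@9000"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(IBMC_IP, username=USER, password=PASSWORD)

shell = client.invoke_shell()
shell.send("ipmcset -d powerstate -v 2\n")  # forced power-off (Step b above)
time.sleep(1)
shell.send("y\n")                           # confirmation (Step c above)
time.sleep(2)
print(shell.recv(65535).decode(errors="ignore"))
client.close()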
iBMC login data: user name and password.
● Default user name: Administrator
● Default password: Admin@9000
Change the initial password: change the initial password of the iBMC default user by using the iBMC WebUI, the iBMC CLI, or the BIOS.
● Use the iBMC WebUI or iBMC CLI to change the password. The method for changing the password varies depending on the iBMC version. For details, see the iBMC User Guide of the corresponding version.
● Use the BIOS to change the password. For details, see the Eagle Stream Platform BIOS Parameter Reference.
Check the server:
1. Check the indicators on the panel to ensure that the server is running properly. For details, see 1.1.2 Indicators and Buttons.
2. Check the iBMC and BIOS versions of the server to ensure that they are the same as the target versions. The versions can be queried on the iBMC WebUI or the iBMC CLI; the query method varies depending on the iBMC version. For details, see the iBMC User Guide. To upgrade the firmware to the target version, see the Upgrade Guide.
3. Query the health status and alarm information of the server to ensure that the server is running properly. The information can be queried on the iBMC WebUI or the iBMC CLI. To handle alarms, see iBMC Alarm Handling.
Configure RAID: configure the RAID array based on service requirements. The configuration method varies according to the RAID controller card type. For details, see the server RAID Controller Card User Guide.
NOTE
● When the management mode of common drives is PCH, RAID arrays cannot be configured.
● For details about compatible RAID controller cards, see the compatibility list on the technical support website.
Configure the BIOS: configure the BIOS based on the actual service scenario. For details, see the Eagle Stream Platform BIOS Parameter Reference.
NOTE
Common actions for setting the BIOS are as follows:
● Setting the system boot sequence
● Setting PXE of the NIC
● Setting the BIOS password
● Switching the system language
Install the OS: install the OS for the server. For details about how to install different OSs, see the OS Installation Guide.
NOTE
For details about compatible OSs, see the compatibility list on the technical support website.
7 Troubleshooting Guide
8 Common Operations
You can query the IP address of the iBMC management network port on:
● BIOS
● iBMC WebUI
● iBMC CLI
Run the following command: ipmcget -d ipinfo
For more information about iBMC, see the iBMC user guide.
Procedure
Step 1 Access the BIOS.
----End
NOTE
● If the TLS version is set to Only TLS 1.3 on the Security Management page under
User & Security, the following browser versions are not supported:
● All Internet Explorer versions
● Safari 11.0 to 12.0
● Microsoft Edge 12 to 18
● Before using Internet Explorer to log in to the iBMC WebUI, enable the compatibility
view and select Use TLS 1.2.
● Enable the compatibility view:
Procedure
Step 1 Check that the client (for example, a local PC) used to access the iBMC meets the
operating environment requirements.
If you want to use the Java Integrated Remote Console, ensure that the Java
Runtime Environment (JRE) meets requirements.
NOTE
When the TLS version is set to TLS 1.3 only in the User & Security > Security Management
interface, the iBMC operating environment does not support the following browser versions:
● All Internet Explorer versions
● Safari 11.0 to 12.0
● Microsoft Edge 12 to 18
Step 2 Configure an IP address and subnet mask or route information for the local PC to
enable communication with the iBMC management network port.
Step 3 Connect the local PC to the iBMC using any of the following methods:
● Connect the local PC to the iBMC management network port using a network
cable.
● Connect the local PC to the iBMC management network port over a LAN.
● Connect the local PC to the iBMC direct connect management port using a USB
Type-C cable.
NOTE
– Only servers configured with the iBMC direct management port support this operation.
– If you use a USB Type-C cable to connect the local PC to the iBMC direct connect
management port, the local PC can run only Windows 10.
Step 4 Choose Control Panel > Network and Internet > Network Connections, and
check whether the local PC is connected to the iBMC network.
NOTE
● If the local PC is connected to the iBMC using a USB Type-C cable, the iBMC network
name is Remote NDIS Compatible Device.
● If the local PC is connected to the iBMC using a network cable or over a LAN, the iBMC
network name varies depending on the NIC used on the local PC.
● If yes, go to Step 5.
● If no, contact technical support for assistance.
Step 5 Open Internet Explorer on the local PC, enter https://IP address of the iBMC
management network port, and press Enter.
NOTE
● If the local PC is connected to the iBMC direct connect management port using a USB
Type-C cable, the IP address of the iBMC management network port is 169.254.1.5.
● If the local PC is connected to the iBMC management network port using a network cable
or over a LAN, the default IP address of the iBMC management network port is
192.168.2.100.
● Enter the IP address of the iBMC management network port based on the actual situation (a small formatting sketch follows this note):
– If an IPv6 address is used, use [] to enclose the IPv6 address, for example,
[fc00::64].
– If an IPv4 address is used, enter the IPv4 address, for example, 192.168.100.1.
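The address format rules above can be expressed as a small helper. The following is a minimal sketch in Python; the function name is illustrative and not part of any xFusion tool.

import ipaddress

def ibmc_url(ip_string):
    """Build the iBMC WebUI address, enclosing IPv6 addresses in brackets."""
    addr = ipaddress.ip_address(ip_string)
    host = f"[{addr}]" if addr.version == 6 else str(addr)
    return f"https://{host}"

print(ibmc_url("192.168.100.1"))   # https://192.168.100.1
print(ibmc_url("fc00::64"))        # https://[fc00::64]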
NOTE
If a security alert is displayed, you can ignore this message or perform any of the following to
shield this alert:
● Import a trust certificate and a root certificate to the iBMC.
For details, see Importing Trust Certificates and Root Certificates in the iBMC user guide
of the server you use.
● If no trust certificate is available and network security can be ensured, add the iBMC to
the Exception Site List on Java Control Panel or reduce the Java security level.
This operation poses security risks. Exercise caution when performing this operation.
----End
Step 2 Enter the user name and password for logging in to the iBMC WebUI. For details, see
Table 8-2.
NOTE
The default user name for logging in to the iBMC system is Administrator, and the default
password is Admin@9000.
Step 3 Select Local iBMC or Automatic matching from the Domain drop-down list.
NOTE
● If you use Internet Explorer to log in to the iBMC WebUI for the first time after an upgrade,
the system may display a message indicating that the login fails due to incorrect user name
or password. If this occurs, press Ctrl+Shift+Delete, and click Delete in the dialog box
displayed to clear the cache of Internet Explorer. Then, attempt to log in again.
● If you cannot log in to the iBMC WebUI using Internet Explorer, choose Tools > Internet
Options > Advanced and click Reset. Then you can log in to the iBMC WebUI.
----End
● A domain controller exists on the network, and a user domain, LDAP user name,
and password have been created on the domain controller.
NOTE
For details about how to create a domain controller, user domain, and LDAP user name
and password that belong to a user domain, see related documents about domain
controllers. The iBMC provides only the access function for LDAP users.
● On the User & Security > LDAP page of the iBMC WebUI, the LDAP function is
enabled, and the user domain and the LDAP user who belong to the user
domain are set.
Step 1 (Optional) On the iBMC login page, switch to the target language.
Step 2 Enter the LDAP user name and password for logging in to the iBMC WebUI. For
details, see Table 8-2.
NOTE
● To log in as an LDAP user, the user name can be in either of the following formats:
– LDAP user name (In this case, Domain can be Automatic matching or a specified
domain.)
– LDAP user name@Domain name (In this case, Domain can be Automatic matching
or a specified domain.)
● When you log in to the iBMC WebUI as an LDAP user, the password can contain a maximum of
255 characters.
Step 3 Select the LDAP user domain from the Domain drop-down list.
NOTE
----End
8.3.1.1 iBMC
Scenario
Log in to the desktop of a server using the iBMC Remote Virtual Console.
Procedure
Step 1 Log in to the iBMC WebUI.
Step 3 In the Virtual Console area, click Start and select Java Integrated Remote
Console or HTML5 Integrated Remote Console from the drop-down list box.
NOTE
● Java Integrated Remote Console (Private): allows only one local user or VNC user to
access and manage the server at a time.
● Java Integrated Remote Console (Shared): allows two local users or five VNC users to
access and manage the server at a time. The users can see each other's operations.
● HTML5 Integrated Remote Console (Private): allows only one local user or VNC user to
access and manage the server at a time.
● HTML5 Integrated Remote Console (Shared): allows two local users or five VNC users
to access and manage the server at a time. The users can see each other's operations.
● If you want to use the Java Integrated Remote Console, ensure that the Java Runtime
Environment (JRE) meets requirements listed in Table 8-1. If the JRE is not installed, click
Failed to open the console ... and click here to download the JRE from the official
AdoptOpenJDK website. If you still cannot use the console after installing the JRE, click the
links under Troubleshooting Remote Virtual Console Problems to obtain more
information.
● For details about the virtual console, see "Virtual Console" in the iBMC user guide of the
server you use.
----End
NOTE
The Independent Remote Console is a remote control tool developed based on the server
management software iBMC. It provides the same functions as the Virtual Console on the
iBMC WebUI. This tool allows you to remotely access and manage a server without worrying
about the compatibility between the client's browser and the JRE.
8.3.2.1 Windows
The following Windows OS versions are supported:
● Windows 7 (32-bit/64-bit)
● Windows 8 (32-bit/64-bit)
● Windows 10 (32-bit/64-bit)
● Windows Server 2008 R2 (32-bit/64-bit)
● Windows Server 2012 (64-bit)
Procedure
Step 1 Configure an IP address for the client (local PC) to enable communication with the
iBMC management network port.
----End
8.3.2.2 Ubuntu
The following Ubuntu OS versions are supported:
● Ubuntu 14.04 LTS
Before the operation, ensure that IPMItool of a version later than 1.8.14 has been
installed (a quick version check is sketched below).
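A quick way to confirm the installed version is to query ipmitool itself. The following is a minimal sketch, assuming ipmitool is on the PATH; the version string in the comment is only an example.

import subprocess

# Hedged sketch: print the locally installed ipmitool version and remind the
# operator that it must be later than 1.8.14.
result = subprocess.run(["ipmitool", "-V"], capture_output=True, text=True)
print((result.stdout or result.stderr).strip())  # for example: "ipmitool version 1.8.18"
print("Required: a version later than 1.8.14")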
Procedure
Step 1 Configure an IP address for the client (local PC) to enable communication with the
iBMC management network port.
Step 2 Open the console and set the folder where the Independent Remote Console is
stored as the working folder.
./KVM.sh
● Shared Mode: allows two users to access and manage a server at the same
time. The users can see each other's operations.
● Private Mode: allows only one user to access and manage a server at a time.
Step 7 Click Connect.
A security warning is displayed.
----End
8.3.2.3 Mac
The following macOS version is supported:
● Mac OS X El Capitan
Before the operation, ensure that IPMItool of a version later than 1.8.14 has been
installed.
Procedure
Step 1 Configure an IP address for the client (local PC) to enable communication with the
iBMC management network port.
Step 2 Open the console and set the folder where the Independent Remote Console is
stored as the working folder.
./KVM.sh
NOTE
----End
Procedure
Step 1 Configure an IP address for the client (local PC) to enable communication with the
iBMC management network port.
Step 2 Open the console and set the folder where the Independent Remote Console is
stored as the working folder.
NOTE
----End
NOTE
Procedure
Step 1 Set an IP address and subnet mask or add route information for the PC to
communicate with the server.
Step 2 On the PC, double-click PuTTY.exe.
The PuTTY Configuration window is displayed.
● Host Name (or IP address): Enter the IP address of the server to be accessed,
for example, 192.168.34.32.
● Port: Retain the default value 22.
● Connection type: Retain the default value SSH.
● Close window on exit: Retain the default value Only on clean exit.
NOTE
Configure Host Name and Saved Sessions, and click Save. You can double-click the
saved record in Saved Sessions to log in to the server next time.
NOTE
● If this is your first login to the server, the PuTTY Security Alert dialog box is displayed.
Click Yes to proceed.
● If an incorrect user name or password is entered, you must set up a new PuTTY session.
----End
Procedure
Step 1 On the PC, double-click PuTTY.exe.
The PuTTY Configuration window is displayed.
Step 2 In the navigation tree on the left, choose Connection > Serial.
Step 3 Set login parameters.
The parameters are described as follows (a scripted serial connection with the same settings is sketched after this list):
● Serial Line to connect to: COMn
● Speed (baud): 115200
● Data bits: 8
● Stop bits: 1
● Parity: None
● Flow control: None
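If a scripted connection is preferred over PuTTY, the same settings can be applied with the Python pyserial library. The following is a minimal sketch; the device name is a placeholder, for example COM3 on Windows or /dev/ttyUSB0 on Linux.

import serial  # pyserial

# Hedged sketch: open the serial port with the settings listed above
# (115200 baud, 8 data bits, 1 stop bit, no parity, no flow control).
ser = serial.Serial(
    port="COM3",                     # placeholder device name
    baudrate=115200,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=False,                   # no software flow control
    rtscts=False,                    # no hardware flow control
    timeout=1,
)
ser.write(b"\r\n")                               # wake the console
print(ser.read(4096).decode(errors="ignore"))    # print the login prompt, if any
ser.close()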
NOTE
Set Saved Sessions and click Save. You can double-click the saved record in Saved
Sessions to log in to the server next time.
NOTE
If this is your first login to the server, the PuTTY Security Alert dialog box is displayed. Click
Yes to trust this site. The PuTTY page is displayed.
If the login is successful, the server host name is displayed on the left of the prompt.
----End
● Before using the VMD function, contact the technical support of the operating
system vendor to check whether the current operating system supports the VMD
function. If yes, check whether the VMD driver needs to be manually installed
and how to install the VMD driver.
● The Intel VROC driver needs to be installed, and the Intel VMD function must be
enabled in the BIOS. The Intel VROC driver supports only UEFI mode.
● If the VMD function is enabled and the latest VMD driver is installed, the NVMe
drives support surprise hot swap.
Procedure
Step 1 Access the BIOS.
NOTE
----End
----End
For details about how to access the Remote Control page on the iBMC WebUI, see the iBMC
User Guide of the corresponding server.
Restarting the server will interrupt services. Exercise caution when performing this operation.
Step 3 When the screen shown in Figure 8-20 is displayed, press Del or Delete.
NOTE
● The default BIOS password is Admin@9000 (administrator password). You are advised to
set the administrator password immediately after the first login. For details, see the Eagle
Stream Platform BIOS Parameter Reference.
● Press F2 to alternate between the English (US), French, and Japanese keyboards.
● Use the mouse to open the on-screen keyboard and enter the password.
● For security purposes, change the administrator password periodically.
● The system will be locked if an incorrect password is entered three consecutive times.
Restart the server to unlock it.
● Figure 8-22 and Figure 8-23 are front pages when you log in with an administrator
password.
● Figure 8-24 is the front page when you log in to the system using a common user
password. Only the Continue and Setup Utility options are displayed.
On the Setup Utility screen, a common user can only view menu options, set or change the
password of the common user (that is, editing the Set User Password option on the
Security page), set parameters (except Load Defaults) on the Exit page, and press F10
to save the settings and exit. Other options are all dimmed and cannot be edited, and the
F9 shortcut key function is unavailable.
Step 6 On the front page, select Setup Utility and press Enter.
The Main screen of the Setup Utility is displayed.
----End
NOTICE
The cleared data cannot be restored. Exercise caution when performing this
operation.
Procedure
NOTE
Step 1 You have accessed the desktop of the server where the target drive is located.
Run the following commands to query drive information (a scripted check is sketched after the note below):
lsscsi
fdisk -l
NOTE
● The drive with the * symbol in the Boot column is the system drive. As shown in Figure
8-27, sda is the system drive.
● Do not directly clear system drive data. Before clearing system drive data, clear data from
other storage media.
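The check described above can be scripted to highlight the system drive before any data is cleared. The following is a hedged sketch; it requires root privileges, and the parsing assumes the classic fdisk -l output with a Boot column, so treat the result only as a hint.

import subprocess

# Hedged sketch: list SCSI devices and flag the partition marked bootable
# ("*" in the Boot column of fdisk -l), which indicates the system drive.
print(subprocess.run(["lsscsi"], capture_output=True, text=True).stdout)

fdisk_output = subprocess.run(["fdisk", "-l"], capture_output=True, text=True).stdout
for line in fdisk_output.splitlines():
    fields = line.split()
    # For a DOS partition table, the second field is "*" on the boot partition.
    if len(fields) > 1 and fields[0].startswith("/dev/") and fields[1] == "*":
        print("System (boot) partition:", fields[0], "- do not clear this drive first")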
NOTE
● The drive letters vary with the storage media (HDD, SSD, and USB flash drive). Ensure
that the drive letter that you entered is correct.
● This operation takes a long time. Wait patiently.
● If the command fails to execute, contact technical support.
After the data is cleared, do not restart or reinstall the server. Otherwise, the system will reload
data to the drives during the startup of the server.
----End
Procedure
Step 1 Determine the position of the LCD.
Step 2 Press the power button on the LCD. See (1) in Figure 2 Enabling the LCD.
Step 3 Pull out the LCD from the server until the rotating shaft appears. See (2) in Figure 2
Enabling the LCD.
Step 4 Rotate the LCD into a position for better viewing. See (3) in Figure 2 Enabling the
LCD.
----End
Procedure
Step 1 Determine the position of the LCD.
Step 2 Rotate the LCD to a horizontal position. See (1) in Figure 2 Disabling the LCD.
Step 3 Push the LCD into the server as far as it will go. See (2) in Figure 2 Disabling the
LCD.
----End
Procedure
Step 1 Tap any position on the LCD touchscreen to activate the LCD.
NOTE
If you do not perform any operations on the LCD within 5 minutes, the LCD will automatically
hibernate.
The default user name is Administrator, and the default password is Admin@9000.
NOTE
----End
Procedure
Step 1 On the LCD home screen, tap Setting. The Mgmtport screen is displayed.
Step 2 Change the IP address of the iBMC management network port based on the
parameter description.
NOTE
● The default IP address of the iBMC management network port is 192.168.2.100 and the
default subnet mask is 255.255.255.0.
● Tap the text box to display the soft keyboard. Tap Cancel to return to the Mgmtport
screen.
----End
Procedure
Step 1 On the LCD home screen, tap Status.
The Status page is displayed.
NOTE
● Components are classified into different categories on the Status page. Tap any
component category to enter the component information screen and view the status of all
components in this category.
● The status and alarm severity for components are indicated by icons of different colors. For
details about the meaning of each icon color, see 1.10.1.1 Icon Description.
----End
Procedure
Step 1 On the LCD home screen, tap Monitor. The Monitor screen is displayed.
----End
Procedure
Step 1 On the LCD home screen, tap Info.. The Info. > Mgmt Port screen is displayed.
The Info. > Mgmt Port screen displays the management network port mode, VLAN
ID, MAC address, IPv4 address, and IPv6 address.
----End
Procedure
Step 1 On the LCD home screen, tap Info. > Basic Info.
The Info. > Basic Info page displays the device serial number and asset label.
----End
Procedure
Step 1 On the LCD home screen, tap Info. > version.
The Info. > version screen is displayed.
----End
Procedure
Step 1 On the LCD home screen, tap Setting. The Mgmtport screen is displayed.
Step 2 Change the IP address of the iBMC management network port based on the
parameter description in Table 1-47.
NOTE
● The default IP address of the iBMC management network port is 192.168.2.100 and the
default subnet mask is 255.255.255.0.
● Tap the text box to display the soft keyboard. Click Cancel to return to the Mgmtport page.
----End
9 More Information
Knowledge Base
To obtain case studies about servers, visit the Knowledge Base.
10.1 iBMC
10.2 BIOS
10.1 iBMC
The intelligent Baseboard Management Controller (iBMC) is a proprietary intelligent
system for remotely managing a server. The iBMC complies with DCMI 1.5/IPMI 1.5/
IPMI 2.0 and SNMP standards and supports various functions, including KVM
redirection, text console redirection, remote virtual media, and hardware monitoring
and management.
The iBMC supports the domain name system (DNS) and the Lightweight Directory
Access Protocol (LDAP) to implement domain management and directory
service.
● Image backup
The iBMC works in active/standby mode to ensure system reliability. If the active
iBMC is faulty, the standby iBMC takes over services immediately.
● Intelligent power management
The iBMC uses power capping to increase deployment density, and uses
dynamic energy saving to reduce operating expenditure.
For more information about iBMC, see the iBMC user guide.
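Because the iBMC complies with the standard IPMI 2.0 interface, basic hardware monitoring data can also be read with a generic IPMI client. The following is a hedged sketch using ipmitool; the IP address and credentials are placeholders, and the sensors returned depend on the server configuration.

import subprocess

# Hedged sketch: read the sensor list from the iBMC over IPMI 2.0 (lanplus).
IBMC_IP = "192.168.2.100"
USER = "Administrator"
PASSWORD = "Admin@9000"

cmd = ["ipmitool", "-I", "lanplus", "-H", IBMC_IP, "-U", USER, "-P", PASSWORD, "sdr", "elist"]
sensors = subprocess.run(cmd, capture_output=True, text=True).stdout
print(sensors)   # one line per sensor: name, state, and reading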
10.2 BIOS
The basic input/output system (BIOS) is the most basic software loaded on a
computer hardware system. The BIOS provides an abstraction layer for the operating
system (OS) and the hardware to interact with the keyboard, display, and other input/
output (I/O) devices.
The BIOS data is stored on the Serial Peripheral Interface (SPI) flash memory. The
BIOS performs a power-on self-test (POST), initializes CPUs and memory, checks
the I/O and boot devices, and finally boots the OS. The BIOS also provides the
advanced configuration and power interface (ACPI), hot swap setting, and a
management interface in Chinese, English, and Japanese.
The Eagle Stream platform BIOS complies with UEFI 2.8 and ACPI 6.3
specifications.
The Eagle Stream platform server BIOS is developed based on an Independent BIOS
Vendor (IBV) code base. It provides a variety of in-band and out-of-band
configuration functions as well as high scalability.
For more information about the BIOS, see the Server Eagle Stream Platform BIOS
Parameter Reference.
A Appendix
1 Nameplate 2 Certificate
A.1.1.1 Nameplate
1 Server model
NOTE
For details, see A.4 Nameplate.
2 Device name
5 Vendor information
6 Certification marks
A.1.1.2 Certificate
No. Description
1 Order
2 No.
NOTE
For details, see Figure A-4 and Table A-3.
3 QC inspector
4 Production date
5 No. barcode
No. Description
3 ● Y: a server
● B: a semi-finished server
● N: a spare part
No. Description
7 P/N code
8 QR code
NOTE
Scan the QR code to obtain technical support resources.
NOTE
● The quick start guide is located on the inside of the chassis cover. It describes the removal of the mainboard components and important components of the chassis, precautions, and QR codes of technical resources. The pictures are for reference only. For details, see the actual product.
● The quick start guide is optional. For details, see the actual product.
NOTE
For details about the warning label, see the server Safety Information.
A.2 Product SN
The serial number (SN) on the label plate uniquely identifies a server. The SN is
required when users contact xFusion technical support. SNs can be in three forms,
as shown in SN Sample 1, SN Sample 2, and SN Sample 3.
● SN example 1
● SN example 2
● SN example 3
● 6434/6434H CPUs and CPUs with TDP greater than 205 W are not supported when the built-in 4 x 3.5" drives or 4-card module is configured.
● 6434/6434H CPUs and CPUs with TDP greater than 205 W are not supported when the built-in 4-card module is configured.
NOTE
● When a single fan is faulty, the highest operating temperature is 5°C (9°F) lower than the
rated value.
● When a single fan is faulty, the system performance may be affected.
● It is recommended that servers be deployed at an interval of 1U to reduce server noise and
improve server energy efficiency.
● Liquid-cooled processors are not supported.
A.4 Nameplate
Certified Model Remarks
H22H-07 Global
2288H V7 Global
PCIe RAID$ Temp: temperature of the PCIe RAID controller card (component: PCIe RAID controller card)
PCIe$ Card BBU: BBU status of the PCIe RAID controller card (component: PCIe RAID controller card)
Table A-11 Problems of using optical modules that have not been tested for compatibility and corresponding causes
Symptom: Data bus defects cause the data bus suspension of a device.
Cause: Some optical modules that have not been tested for compatibility have defects in data bus designs. Using such an optical module causes suspension of the connected data bus on the device. As a result, data on the suspended bus cannot be read.
Symptom: An optical module with improper edge connector size damages electronic components of the optical interface.
Cause: If an optical module that has not been tested for compatibility and has an improper edge connector size is used on an optical interface, electronic components of the optical interface will be damaged by short circuits.
Symptom: Improper register settings cause errors or failures in reading parameters or diagnostic information.
Cause: Some optical modules that have not been tested for compatibility have improper register values on page A0, which can cause errors or failures when the data bus attempts to read parameters or diagnostic information.
Symptom: Optical modules bring electromagnetic interference to nearby devices.
Cause: Some optical modules that have not been tested for compatibility are not designed in compliance with EMC standards and have low anti-interference capability. Additionally, they bring electromagnetic interference to nearby devices.
Symptom: Optical modules cannot work properly when the temperature change rate exceeds the normal range, without adapting to the heat dissipation policy of the server.
Cause: Some optical modules that are not tested for compatibility have poor heat dissipation. Since they are not adapted to the heat dissipation policy of the server, abnormally high temperatures may occur continuously after they have been running for a period of time. As a result, the optical modules cannot work properly.
B Glossary
B.1 A-E
B
BMC The baseboard management controller (BMC) complies
with the Intelligent Platform Management Interface
(IPMI). It collects, processes, and stores sensor signals,
and monitors the operating status of components. The
BMC provides the hardware status and alarm information
about the managed objects to the management system
so that the management system can implement unified
management of the devices.
E
ejector lever A part on the panel of a device used to facilitate
installation or removal of the device.
B.2 F-J
G
Gigabit Ethernet (GE) An extension and enhancement of traditional shared
media Ethernet standards. It is compatible with 10 Mbit/s
and 100 Mbit/s Ethernet and complies with IEEE 802.3z
standards.
H
hot swap Replacing or adding components without stopping or
shutting down the system.
B.3 K-O
K
KVM A hardware device that provides public keyboard, video
and mouse (KVM).
B.4 P-T
P
panel An external component (including but not limited to
ejector levers, indicators, and ports) on the front or rear
of the server. It seals the front and rear of the chassis to
ensure optimal ventilation and electromagnetic
compatibility (EMC).
R
redundancy A mechanism that allows a backup device to
automatically take over services from a faulty device to
ensure uninterrupted running of the system.
S
server A special computer that provides services for clients over
a network.
system event log (SEL) Event records stored in the system used for subsequent fault diagnosis and system recovery.
B.5 U-Z
U
U A unit defined in International Electrotechnical
Commission (IEC) 60297-1 to measure the height of a
cabinet, chassis, or subrack. 1U = 44.45 mm (1.75 in).
C.1 A-E
A
AC alternating current
B
BBU backup battery unit
C
CCC China Compulsory Certification
CD calendar day
CE Conformite Europeenne
D
DC direct current
E
ECC error checking and correcting
EID enclosure ID
EN European Efficiency
C.2 F-J
F
FB-DIMM Fully Buffered DIMM
FC Fiber Channel
G
GE Gigabit Ethernet
H
HA high availability
I
iBMC intelligent baseboard management controller
IC Industry Canada
IP Internet Protocol
C.3 K-O
K
KVM keyboard, video, and mouse
L
LC Lucent Connector
M
MAC media access control
N
NBD next business day
O
OCP Open Compute Project
C.4 P-T
P
PCIe Peripheral Component Interconnect Express
POK Power OK
R
RAID redundant array of independent disks
S
SAS Serial Attached Small Computer System Interface
SERDES serializer/deserializer
T
TACH tachometer signal
C.5 U-Z
U
UBC Union Bus Connector
V
VCCI Voluntary Control Council for Interference by Information
Technology Equipment
W
WEEE waste electrical and electronic equipment