
PowerVM™ Single Root I/O Virtualization:
Fundamentals, Configuration, and Advanced Topics
Allyn Walsh
Consulting IT Specialist
[email protected]

Many contributions from


Chuck Graham
STSM, Lead Architect
[email protected]

Alexander Paul
Senior Systems and Network Engineer
[email protected]

© 2016 IBM Corporation


Topics

• Overview

• SR-IOV Performance

• vNIC and Live Partition Mobility

• Fault-Tolerant Configurations

• vNIC Failover

• Performance Monitor & Topology

• Maintenance

• Additional information

Network Technologies on POWER Systems
• Dedicated Adapters
– Best possible performance
– Adapter exclusively bound to particular partition; no resource sharing

• Virtual Ethernet Adapter
  – Hypervisor internal switching

• VIOS Shared Ethernet Adapter
  – Hypervisor switch uplink through VIOS
  – Options for high availability:
    SEA failover, SEA failover w. load sharing, NIB
• Single Root I/O Virtualization (SR-IOV) and vNIC (vNIC announced 5 October 2015)
  – SR-IOV is a PCIe standard for hardware resource sharing
  – vNIC is a new virtual adapter type

• Host Ethernet Adapter (HEA)
  – Adapter virtualization technology
  – Not available on POWER7+ and POWER8 servers

Power Systems SR-IOV Solutions
PowerVM Single Root I/O Virtualization (SR-IOV) technology
• Sharing by up to 64 partitions per PCIe adapter.
• Direct Access I/O
• Performance - provides CPU utilization and latency characteristics similar to a dedicated
adapter
• Function - partition access to advanced adapter features (e.g. RSS, LSO, etc.)
• Logical port (VF) resource provisioning (e.g. desired bandwidth)
• Flexible deployment models
• Single partition
• Multi-partition without VIOS using Direct Access I/O
• Multi-partition thru VIOS
• Multi-partition mix of thru VIOS and Direct Access I/O
PowerVM virtual Network Interface Controller (vNIC) technology
• Leverages SR-IOV technology
• Advanced virtualization (e.g. LPM) capable
• Sharing by up to 64 partitions per adapter
• Logical port (VF) resource provisioning (e.g. desired bandwidth)
• Requires VIOS
SR-IOV HW/SW minimums for POWER8 - June 2015 GA

• IBM Power System E870 (9119-MME), IBM Power System E880 (9119-MHE),
IBM Power System E850 (8408-E8E)
IBM Power System S824 (8286-42A), IBM Power System S814(8286-41A),
IBM Power System S822(8284-22A), IBM Power System S824L(8247-42L),
IBM Power System S822L (8247-22L), IBM Power System S812L(8247-21L)
• SR-IOV support for PCIe Gen3 I/O expansion drawer
• HMC is required for SR-IOV
• Server firmware 830
• PowerVM is required - standard or enterprise edition
• PowerVM express edition allows only one partition to use the SR-IOV logical ports per adapter
• Minimum client operating systems:
• AIX 6.1 TL9 SP5 and APAR IV68443, or later
• AIX 7.1 TL3 SP5 and APAR IV68444, or later
• IBM i 7.1 TR10, or later
• IBM i 7.2 TR2, or later
• Red Hat Enterprise Linux 6.5, or later
• Red Hat Enterprise Linux 7, or later
• SUSE Linux Enterprise Server 11 SP3, or later
• SUSE Linux Enterprise Server 12, or later
• Ubuntu 15.04, or later
• SR-IOV logical ports assigned to a VIOS require VIOS 2.2.3.51, or later
Systems with SR-IOV Support
• 4/2014 GA
• 9117-MMD (IBM Power 770 ), 9179-MHD (IBM Power 780), 8412-EAD (IBM Power ESE )
• System node PCIe slots
• 3/2015 GA
• 9119-MME (IBM Power System E870 ), 9119-MHE (IBM Power System E880 )
• System node PCIe slots

• 6/2015 GA
• POWER8 scale-out servers, expanded options for Power E870 and E880, and Power E850
• PCIe Gen3 I/O expansion drawer.
• The following POWER8 PCIe slots are SR-IOV capable:
• All Power E870/E880 and Power E850 system node slots.
• Slots C6, C7, C10, and C12 of a Power S814 (1S 4U) or S812L (1S 2U) server.
• Slots C2, C3, C4, C5, C6, C7, C10, and C12 of a S824 or S824L server (2-socket, 4U) with both
sockets populated. If only one socket is populated, then C6, C7, C10, and C12.
• Slots C2, C3, C5, C6, C7, C10, and C12 of a S822 or S822L server (2-socket, 2U) with both
sockets populated. If only one socket is populated, then C6, C7, C10, and C12.
• Slots C1 and C4 of the 6-slot Fan-out Module in a PCIe Gen3 I/O drawer. If system memory is
<128GB only slot C1 of a Fan-out Module is SR-IOV capable.
• 12/2015 GA
• vNIC support for POWER8 scale-out servers, Power E870 and E880, and PCIe Gen3 I/O drawer
SR-IOV Capable Slots in P8 Scale-out models such as the S824
Adapter placement in scale-out models varies between 1-socket and 2-socket configurations and may
have other considerations, such as installed memory.
SR-IOV adapter placement also varies between the 2U and 4U models; for an S824 example, see the
IBM Knowledge Center adapter placement pages (links on the "Power Systems SR-IOV Capable PCIe
Slots" slide).

For platforms with less than 64 GB of total system memory, SR-IOV should not be configured in
slots C2, C4, C10, and C12, as performance may be severely impacted.

Two slot positions per Fan-out Module are SR-IOV capable; see the adapter placement rules for
SR-IOV in the PCIe Gen3 expansion (MEX) drawer.

The SR-IOV capability of slots P1-C4 and P2-C4 varies based on the amount of system memory. If
the EMX0 PCIe3 expansion drawer is connected to a system with a total amount of physical memory
greater than or equal to 128 GB, slots P1-C4 and P2-C4 are SR-IOV capable.

2-port 10GbE CNA & 2-port 1GbE Adapter
FC #EN0J: 10GbE Optical SR, Low Profile
FC #EN0H: 10GbE Optical SR, Full High
FC #EN0L: 10GbE Active Copper Twinax, Low Profile
FC #EN0K: 10GbE Active Copper Twinax, Full High
FC #EN0N: 10GbE Optical LR, Low Profile
FC #EN0M: 10GbE Optical LR, Full High

Dual 10 GbE SR: 20 VFs per port
+ dual 1 GbE 1GBASE-T: 4 VFs per port
= 48 VFs per adapter

                                                                      Announced        GA
EN0H, EN0K        SR-IOV support for Power 770/780/ESE system node.   April 8, 2014    April 2014
EN0J, EN0L, EN0N  SR-IOV support for Power E870/E880 system node.     Feb. 24, 2015    March 2015
EN0H, EN0K, EN0M  SR-IOV support for other POWER8 systems             April 28, 2015   June 2015
                  and PCIe Gen3 I/O expansion drawer.
New PCIe Gen3 Adapters

• PCIe3 4-port 10Gb Optical SR or Active Copper twinax


• #EN15: 10GbE Optical SR, Full Height
• #EN16: 10GbE Optical SR, Low Profile
• #EN17: 10GbE Active Copper Twinax, Full Height
• #EN18: 10GbE Active Copper Twinax, Low Profile

4 ports 10GbE CNA: 16 VFs per port = 64 VFs per adapter

Supported in the PCIe Gen3 I/O drawer, in the 4U scale-out system units, and in the
Power E850/E870/E880 system node slots. Announced April 28, 2015; GA June 2015.
Adapters with SR-IOV Support

Adapter (logical ports per adapter; per port)            Low profile,   Full high,   Low profile,   Full high,
                                                         multi OS       multi OS     Linux only     Linux only
                                                                                     (PowerVM)      (PowerVM)
PCIe3 4-port (2x10GbE+2x1GbE) SR optical fiber
  and RJ45 (48; 20/20/4/4)                               #EN0J (1)      #EN0H (3)    #EL38          #EL56
PCIe3 4-port (2x10GbE+2x1GbE) copper twinax
  and RJ45 (48; 20/20/4/4)                               #EN0L (1)      #EN0K (3)    #EL3C          #EL57
PCIe3 4-port (2x10GbE+2x1GbE) LR optical fiber
  and RJ45 (48; 20/20/4/4)                               #EN0N          #EN0M        n/a            n/a
PCIe3 4-port 10GbE SR optical fiber (64; 16/16/16/16)    #EN16 (2)      #EN15        n/a            n/a
PCIe3 4-port 10GbE copper twinax (64; 16/16/16/16)       #EN18 (2)      #EN17        n/a            n/a

Notes:
1. SR-IOV announced February 2015 for Power E870/E880 system node. Now
available in other POWER8 servers.
2. Adapter is only available in Power E870/E880 system node, not 2U server.
3. SR-IOV announced April 2014 for Power 770/780/ESE system node. With the April 2015
announcement, also available in POWER8 servers.

Power Systems SR-IOV Capable PCIe Slots
• IBM Power 770 (9117-MMD), IBM Power 780 (9179-MHD), or Power ESE (8412-EAD)
Power Systems servers:
– all PCIe slots within the system units are SR-IOV capable.
– PCIe slots in the I/O expansion drawers are not SR-IOV capable.

• POWER8 Systems:
– consult IBM Knowledge Center for the specific system of interest. In some cases total system
memory may determine whether a PCIe slot is SR-IOV capable.
IBM Knowledge Center PCIe adapter placement rules by Power System or I/O expansion drawer:
• 8247-21L, 8247-22L, or 8284-22A:
  http://www-01.ibm.com/support/knowledgecenter/8247-21L/p8eab/p8eab_83x_8rx_slot_details.htm
• 8247-42L:
  https://www-01.ibm.com/support/knowledgecenter/8247-42L/p8eab/p8eab_8247_slot_details.htm
• 8286-41A and 8286-42A:
  https://www-01.ibm.com/support/knowledgecenter/8286-41A/p8eab/p8eab_82x_84x_slot_details.htm
• 8408-E8E:
  https://www-01.ibm.com/support/knowledgecenter/8408-E8E/p8eab/p8eab_85x_slot_details.htm?cp=8408-E8E%2F0-2-7-2-0
• 9119-MHE or 9119-MME:
  https://www-01.ibm.com/support/knowledgecenter/9119-MME/p8eab/p8eab_87x_88x_slot_details.htm
• PCIe Gen3 I/O expansion drawer:
  https://www-01.ibm.com/support/knowledgecenter/9119-MHE/p8eab/p8eab_emx0_slot_details.htm

SR-IOV Architecture Internal Switching in conjunction with SEA
[Diagram: Partitions A and B own SR-IOV logical ports directly. The Virtual I/O Server owns a
logical port (10% capacity) bridged by an SEA to virtual Ethernet adapters in Partition C via the
hypervisor switch. The 4-port 10GbE CNA & 1GbE adapter provides 20 virtual functions (VFs) per
10 Gbps SR port and 4 VFs per 1 Gbps copper port.]


Flexible Deployment

 Single partition
   – All adapter resources available to a single partition
   – Adapter either in dedicated mode or in SR-IOV shared mode with all logical ports
     assigned to the one partition

   [Diagram: LPAR A with a device driver on a dedicated-mode adapter, or with logical ports
   mapped to virtual functions on an SR-IOV adapter.]

 Multi-partition without VIOS
   – Direct access to adapter features
   – Capacity per logical port
   – Fewer adapters for redundant adapter configurations

   [Diagram: LPARs A and B each own SR-IOV logical ports mapped to virtual functions on shared
   SR-IOV adapters.]

VF – Virtual Function
PF – Physical Function

Flexible Deployment
 Multi-partition thru VIOS
   – Adapters shared by VIOS partitions
   – Fewer adapters for redundancy
   – VIOS client partitions eligible for Live Partition Mobility

   [Diagram: VIOS LPARs 1 and 2 own SR-IOV logical ports and serve client LPAR A through
   virtual adapters.]

 Multi-partition mix of VIOS and non-VIOS
   – For the VIOS partition, behavior is the same as multi-partition thru VIOS above
   – Direct access partitions
     • Path length & latency comparable to a dedicated adapter
     • Direct access to adapter features
     • Entitled capacity per logical port

   [Diagram: a VIOS LPAR serves LPAR C through virtual adapters while LPARs A and B use
   SR-IOV logical ports directly on the same adapter.]

VF = Virtual Function
PowerVM Single Root I/O Virtualization:
Fundamentals, Configuration, and Advanced Topics

SR-IOV Performance

Traditional Virtual Ethernet Performance

[Chart: virtual Ethernet throughput (Gbps) vs. CPU units, MTU 1500, default settings, for
9117-MMB, 8202-E4C, 9117-MMD, and POWER8 9119-MME systems. "Out-of-the-box" throughput tops out
at roughly 2.9 Gbit/s.]

SR-IOV Internal Switching – Setup on POWER8 S824

 Benchmark with 8 parallel TCP sessions

– Client LPAR:
  • Power S824 8286-42A
  • AIX 7.1 TL3 SP3
  • Capped, 4 VPs
  • MTU size: 1500 bytes

– Server LPAR:
  • Power S824 8286-42A (same as client)
  • AIX 7.1 TL3 SP3
  • EC=3.0 units, uncapped, 4 VPs
  • MTU size: 1500 bytes

POWER8 SR-IOV Internal Switching on POWER8 S824

• POWER8 provides access to adapter line speed with fewer CPU units compared to POWER7+.
[Chart: throughput (Gbit/s) vs. processor units, SR-IOV internal switching vs. virtual Ethernet,
with four series: virtual Ethernet MTU 1500, virtual Ethernet MTU 9000, POWER7+ SR-IOV MTU 1500,
and POWER8 SR-IOV MTU 1500.]

POWER8 SR-IOV External Switching on POWER8 S824

[Chart: throughput (Gbit/s) vs. processor units for SR-IOV internal and external switching,
MTU 1500 bytes, on a POWER8 S824.]

CPU Units Consumption: SEA / SR-IOV

Configuration          Throughput    Total CPU units   Breakdown
E870 SEA default       2.5 Gbit/s    5.8               VIOS Rx 1.65, VIOS Tx 1.65, Server 1.25, Client 1.23
E870 SEA + LSO         5 Gbit/s      2.8               VIOS Rx 1.00, VIOS Tx 0.85, Server 0.59, Client 0.37
(large send offload)
S824 SR-IOV            5 Gbit/s      0.8               Server 0.43, Client 0.40
S824 SR-IOV            10 Gbit/s     1.6               Server 0.81, Client 0.80
Case Study: Transaction Rate Performance

• Customer migrated SAP from POWER7 to POWER8.


• Expectation for SAP ERP was high network transaction rate
performance.
• Issue: Benchmarks on POWER7 with Virtual Ethernet
Adapters showed limited TPS performance for small packets.
• Better results were expected from new POWER8 systems.
• Customer additionally evaluated SR-IOV for transaction rate
tuning.
• Systems: Power7+ 770 (old) / Power8 E870 (new)

Transaction Rate Performance

• SAP ERP sizing for high network transaction rates.


• Packet size of 700 bytes assumed.
• Systems: Power8 E870 / Power7+ 770

SR-IOV & vNIC Capacity

• Capacity – Controls adapter and system resource levels, including desired minimum
bandwidth
• Desired minimum percent of physical port resources
• Unallocated or unused bandwidth is available for logical ports with more demand than their minimum
• Actual consumable bandwidth is an approximation
• Configured logical ports reserve a small amount of bandwidth even when they don’t have demand
• LSO traffic on a logical port with a small Capacity value may result in overshoot of the minimum bandwidth
• This is especially visible on 1 Gbps links

SR-IOV & vNIC Capacity

• The Capacity value cannot be changed dynamically
• To change the capacity value of a logical port, either:
  • dynamically remove the logical port, then dynamically add a logical port with the new
    capacity value (a CLI sketch follows below), or
  • update the partition profile with the new value and activate the profile
• It may be desirable to leave some capacity unallocated for new logical ports.
• The Capacity setting must be a multiple of the default (2%).
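
As an illustration, the remove-and-re-add flow can be driven from the HMC command line. A minimal
sketch, assuming managed system Server1, partition lpar1, SR-IOV adapter ID 1, physical port 0,
and an existing logical port ID 27004001 (all hypothetical); chhwres attribute names can vary by
HMC level, so verify against the chhwres man page:

$ # Remove the existing logical port (hypothetical IDs)
$ chhwres -r sriov --rsubtype logport -m Server1 -o r -p lpar1 \
    -a "adapter_id=1,logical_port_id=27004001"
$ # Re-add an Ethernet logical port on the same physical port with a new 10% capacity
$ chhwres -r sriov --rsubtype logport -m Server1 -o a -p lpar1 \
    -a "adapter_id=1,phys_port_id=0,logical_port_type=eth,capacity=10"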

SR-IOV & vNIC Desired Bandwidth

[Chart: throughput of a logical port configured with 2% capacity. While the logical port uses the
physical port exclusively it can consume the full bandwidth; in a 10-second competitive situation
with other ports it is throttled back toward its configured minimum.]

PowerVM Single Root I/O Virtualization:
Fundamentals, Configuration, and Advanced Topics

vNIC and Live Partition Mobility

Live Partition Mobility Options with SR-IOV

Virtual Network Interface Controller (vNIC)


• vNIC is a new virtual adapter type
• vNIC leverages SR-IOV to provide a performance optimized virtual NIC
solution
• vNIC enables advanced virtualization features such as live partition mobility
with SR-IOV adapter sharing
• Leverages SR-IOV Capacity value resource provisioning (e.g. minimum
bandwidth)

• December 2015 GA for AIX & IBM i


• Linux in progress; each distro will certify
• Power E850 support GA on 3/4/2016 with firmware 840.10 SP

• Prerequisites
• AIX 7.1 TL4 or later or AIX 7.2 or later
• IBM i 7.1 TR10 or later or 7.2 TR3 or later
• VIOS 2.2.4, or later
• Firmware 840 or later
• HMC V8R8.4.0 or later
vNIC Architecture

[Diagram: the vNIC client adapter in the client partition is paired one-to-one with a vNIC server
in the Virtual I/O Server, which owns an SR-IOV logical port (30% capacity) on a 4-port 10GbE
CNA/FCoE & 1GbE adapter. Data buffers reside in the client partition and are accessed directly
through the hypervisor.]

vNIC Architecture

[Diagram: same vNIC architecture as the previous slide, highlighting that the client partition,
with its vNIC client adapter and data buffers, is eligible for Partition Mobility.]

Comparison of Virtual Enet & vNIC

[Diagram: left, virtual Ethernet/SEA: client LPARs with virtual adapters connect through the
hypervisor vSwitch to SEAs in the VIOSs backed by SR-IOV logical ports, so data and control flows
pass through the vSwitch and SEA. Right, vNIC: each vNIC client adapter pairs with a vNIC server
in a VIOS backed by an SR-IOV logical port; control flows through the vNIC server while data
moves directly between client memory and the adapter.]

Virtual Ethernet/SEA (current)
• Multiple copies of data
• Many-to-one relationship between virtual adapters and physical adapter
• QoS based on VLAN tag PCP bits (i.e. 8 traffic classes)

vNIC
• SR-IOV with advanced virtualization features (e.g. LPM)
• Improved performance
  • Eliminates data copies
  • Optimized control flow, no overhead from the vSwitch or SEA
  • Multiple queue support
  • Leverages adapter offloads for LPAR to LPAR communication
• Efficient
  • Lower CPU and memory usage (no data copy)
• Deterministic QoS
  • One-to-one relationship between vNIC client adapter and SR-IOV logical port
  • Extends logical port QoS

Live Partition Mobility/Remote Restart with vNIC

• Target system:
• Must support vNIC
• Must have at least one running SR-IOV
adapter in shared mode
• Must have at least one running VIOS that supports vNIC

• Target physical port


• User can select any physical port on the
target for each vNIC on the LPAR to be
migrated
• If physical port not selected, the HMC will
map physical port by port label and port
switch mode (VEB/VEPA). Empty string
is a valid label.
• Target physical port must have sufficient available capacity and available logical
port count (a migration CLI sketch follows below)
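
For reference, a minimal HMC CLI sketch of such a migration, assuming source system Server1,
target system Server2, partition lpar1, and one vNIC in client slot 5 to be backed on the target
by VIOS vios1 (ID 1), adapter 1, physical port 0 at 2% capacity; all names and IDs are
hypothetical, and the vnic_mappings format differs across HMC releases, so check the migrlpar
man page before use:

$ # Validate first, then migrate (hypothetical names and IDs)
$ migrlpar -o v -m Server1 -t Server2 -p lpar1 \
    -i "vnic_mappings=5/ded/vios1/1/1/0/2.0"
$ migrlpar -o m -m Server1 -t Server2 -p lpar1 \
    -i "vnic_mappings=5/ded/vios1/1/1/0/2.0"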

Miscellaneous vNIC Notes

• vNIC backing device SR-IOV logical ports owned by VIOSs will not be
captured in the system templates

• vNIC backing device SR-IOV logical ports will not be in the VIOS profile
even when “sync last activated profile with current configuration” is on.

• vNIC backing device SR-IOV logical ports cannot be modified.

• Deleting a vNIC backing device SR-IOV logical port is blocked by the HMC unless its associated
vNIC is already gone

• Activating a client partition profile with vNICs requires the specified hosting VIOSs to be in
a running state with an RMC connection

vNIC / SR-IOV: Throughput / CPU Comparison

[Chart: vNIC vs. native SR-IOV throughput (Gbps) as a function of CPU units, with external
switching. The two curves track closely, with about a 4% difference in maximum throughput.]

Total CPU Consumption: SEA / SR-IOV / vNIC

Configuration          Throughput    Total CPU units   Breakdown
E870 SEA default       2.5 Gbit/s    5.8               VIOS Rx 1.65, VIOS Tx 1.65, Server 1.25, Client 1.23
E870 SEA + LSO         5 Gbit/s      2.8               VIOS Rx 1.00, VIOS Tx 0.85, Server 0.59, Client 0.37
(large send offload)
S824 SR-IOV            5 Gbit/s      0.8               Server 0.43, Client 0.40
S824 SR-IOV            10 Gbit/s     1.6               Server 0.81, Client 0.80
E870 vNIC              5 Gbit/s      2.4               VIOS Rx 0.89, VIOS Tx 0.48, Server 0.70, Client 0.37
E870 vNIC              10 Gbit/s     4.9               VIOS Rx 1.47, VIOS Tx 1.00, Server 1.54, Client 0.92

vNIC / SR-IOV: TPS Maximum w. Small Packets

[Chart: maximum transactions per second with small packets, external switching:
 SEA 1 Gigabit: 86,814; SEA 10 Gigabit: 96,596; vNIC 10 Gigabit: 286,689; SR-IOV 10 Gigabit: 359,168.]

Live Partition Mobility Options with SR-IOV

Virtual Ethernet configuration

• Use current Virtual Ethernet support, with SR-IOV logical ports as the Shared Ethernet
  Adapter (SEA) physical connections to the network
• Does not receive the performance benefits provided with SR-IOV Direct Access
• No client partition resource allocation (e.g. desired bandwidth)
• Benefits:
  • LPM capability
  • Adapter/port sharing to reduce the number of adapters

[Diagram: VIOS LPARs 1 and 2 each bridge an SR-IOV logical port to client LPARs A and B through
SEAs and the hypervisor vSwitch.]

Live Partition Mobility Options with SR-IOV

Active-backup configuration

• Configure an SR-IOV logical port as the Active connection and a Virtual Ethernet adapter or
  vNIC client virtual adapter as backup
• Prior to migration, use a dynamic LPAR operation to remove the SR-IOV logical port
  • The Virtual Ethernet adapter becomes the Active connection
• Migrate the partition
• On the target system, configure an SR-IOV logical port as the Active connection
• Option for AIX and Linux
  • Physical I/O cannot be assigned (even temporarily) to an IBM i LPM-capable partition

[Diagram: an LPAR with an active SR-IOV logical port and a backup virtual adapter served by a
VIOS-owned SR-IOV logical port; a normal Active-Backup configuration.]

PowerVM Single Root I/O Virtualization:
Fundamentals, Configuration, and Advanced Topics

Fault-Tolerant Configurations

Link Aggregation

AIX
• EtherChannel or Static Link Aggregation
• IEEE 802.3ad/802.1ax Link Aggregation Control Protocol (LACP)
• Network Interface Backup (NIB)

IBM i
• EtherChannel or Static Link Aggregation
• IEEE 802.3ad/802.1ax Link Aggregation Control Protocol (LACP)
• Virtual IP Address (VIPA)

Linux
• Several Bonding/port trunking modes including LACP and Active-Backup

Link Aggregation Using LACP

Issue

• Link Aggregation (LACP) will not function properly with multiple logical ports using the same
  physical port
• The switch expects a single partner (MAC, physical layer) on a link
• Multiple SR-IOV logical ports on the same physical port create multiple partners on the link

[Diagram: LPARs A and B each aggregate two SR-IOV logical ports that land on the same physical
ports. Invalid Link Aggregation configuration: two logical ports assigned to one physical port.]

VF = Virtual Function

EtherChannel or Static Link Aggregation

Issue

• SR-IOV logical ports may go down while the physical link remains up
• Switch port failover occurs when the physical link goes down
• The switch does not recognize a logical port going down and will continue to send traffic on
  the physical port
• EtherChannel is therefore not recommended for an SR-IOV configuration

[Diagram: LPARs A and B with EtherChannels over SR-IOV logical ports; one logical port has
failed but the switch keeps sending traffic because the physical link is still up. Not
recommended: the switch will not detect a failed logical link.]

Link Aggregation Recommendations

Do you require bandwidth greater than a single link's bandwidth, with link failover?

• Use Link Aggregation (LACP) with one logical port per physical port
  • Provides greater bandwidth than a single link, with failover
• Other adapter ports may be shared or used in a LACP configuration by other partitions

• Best practice (an AIX LACP sketch follows below)
  • Assign 100% capacity to each SR-IOV logical port in the Link Aggregation Group to prevent
    accidental assignment of another SR-IOV logical port to the same physical port

[Diagram: LPARs A and B each run LACP link aggregation with one logical port assigned to each
physical port.]
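
On AIX, such an aggregation is created as an EtherChannel pseudo-device in 802.3ad mode. A
minimal sketch, assuming ent0 and ent1 are SR-IOV logical ports on different physical ports
(device names are illustrative):

$ # Create an 802.3ad (LACP) aggregation over two SR-IOV logical ports
$ mkdev -c adapter -s pseudo -t ibm_ech \
    -a adapter_names='ent0,ent1' -a mode=8023ad
ent2 Available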

Link Aggregation Recommendations

Do you require bandwidth less than a single link's bandwidth, with failover?

• Use an Active-Backup approach (e.g. AIX NIB, IBM i VIPA, or the Linux bonding driver in
  active-backup mode)
  • For Linux, the fail_over_mac parameter must be set to "active" (1) or "follow" (2)
• Allows sharing of the physical port by multiple partitions
• When configured in an active-backup configuration, you should configure the capability to
  detect when to failover (sketches follow below):
  • On AIX, configure a backup adapter and an IP address to ping
  • On IBM i with VIPA, options for detecting network failures besides link failures include
    Routing Information Protocol (RIP), Open Shortest Path First (OSPF), or a customer monitor
    script
  • On Linux, use the bonding support to configure monitoring to detect network failures

[Diagram: LPARs A and B each with an active and a backup SR-IOV logical port on different
physical ports; an active-backup configuration (no switch configuration required) allows
sharing of the physical ports.]
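
Two minimal sketches of the above; interface names, the bond name, and the ping/ARP target
address (192.0.2.1) are illustrative assumptions, so verify option names against your OS
documentation:

$ # AIX: NIB EtherChannel with ent0 active, ent1 backup, pinging an address to detect failures
$ mkdev -c adapter -s pseudo -t ibm_ech \
    -a adapter_names='ent0' -a backup_adapter=ent1 -a netaddr=192.0.2.1

# Linux: active-backup bond over eth0/eth1 with fail_over_mac=active and ARP monitoring
ip link add bond0 type bond mode active-backup fail_over_mac active \
    arp_interval 1000 arp_ip_target 192.0.2.1
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up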

PowerVM Single Root I/O Virtualization:
Fundamentals, Configuration, and Advanced Topics

vNIC Failover

vNIC Failover

• vNIC with vNIC server side redundancy (analogous to SEA failover)


• Multiple backing devices (up to 6) per vNIC client
• One active, others as inactive standby
• Backing device configuration includes selection of
• VIOS
• Adapter physical port
• Failover priority
• Capacity

• Provides flexible deployment and load balancing options


• Health check of active and inactive backing devices
• Hypervisor manages failover based on operational state and failover
priority
• Dynamic add and remove of backing devices

vNIC Failover Architecture

[Diagram: a vNIC client in the client partition with three backing devices, one per VIOS:
failover priority 100 (Standby), priority 50 (Standby), and priority 1 (Active). Each backing
device pairs a vNIC server with an SR-IOV logical port; the hypervisor connects the active
backing device to the client's data buffers.]

vNIC failover configuration


• Up to 6 backing devices per vNIC client
• Select VIOS & adapter physical port for each backing device
• Set Failover priority for each backing device
• Auto Priority Failover: Enabled or Disabled

vNIC Failover Architecture

[Diagram: the priority-1 VIOS is now Not Operational; the priority-50 backing device is Active
and priority 100 remains Standby.]

vNIC failover configuration:
• Up to 6 backing devices per vNIC client
• Select VIOS & adapter physical port for each backing device
• Set Failover priority for each backing device
• Auto Priority Failover: Enabled or Disabled

Failover triggers:
• No VIOS heartbeat

vNIC Failover Architecture

[Diagram: the priority-50 backing device's adapter link is down and the priority-1 VIOS is still
Not Operational; the priority-100 backing device is now Active.]

vNIC failover configuration:
• Up to 6 backing devices per vNIC client
• Select VIOS & adapter physical port for each backing device
• Set Failover priority for each backing device
• Auto Priority Failover: Enabled or Disabled

Failover triggers:
• No VIOS heartbeat
• Adapter or adapter link failures

vNIC Failover Architecture

[Diagram: the priority-1 VIOS has recovered and, with Auto Priority Failover enabled, its backing
device is Active again; priority 100 returns to Standby and priority 50 remains Link Down.]

vNIC failover configuration:
• Up to 6 backing devices per vNIC client
• Select VIOS & adapter physical port for each backing device
• Set Failover priority for each backing device
• Auto Priority Failover: Enabled or Disabled

Failover triggers:
• No VIOS heartbeat
• Adapter or adapter link failures
• Auto Priority Failover if Enabled

vNIC Failover Architecture

[Diagram: an HMC user has initiated a failover to the priority-100 backing device, which is now
Active; priority 1 is Standby and priority 50 remains Link Down.]

vNIC failover configuration:
• Up to 6 backing devices per vNIC client
• Select VIOS & adapter physical port for each backing device
• Set Failover priority for each backing device
• Auto Priority Failover: Enabled or Disabled

Failover triggers:
• No VIOS heartbeat
• Adapter or adapter link failures
• Auto Priority Failover if Enabled
• HMC user initiated (sets APF to disabled)

vNIC Failover Configuration

• Partition Properties->Virtual NICS


• New Virtual NICs interface, includes support for multiple backing devices
• To Add vNIC Client click on “Add Virtual NIC”

vNIC Failover Configuration – Add vNIC

• Select the desired physical port

vNIC Failover Configuration – Add vNIC

• Physical port information updated


• Select Hosting Partition (VIOS)
• Optionally set Capacity %
• Optionally set Failover Priority – lower # is more favored

vNIC Failover Configuration – Add vNIC

• Click on the Advanced Virtual NIC Settings for additional options

vNIC Failover Configuration – Additional backing devices

• Create additional backing devices per vNIC client


• Up to 6 backing devices total
• Click the Add Entry button to add new backing device

vNIC Failover Configuration – Add Entry

• New row for new backing device information

vNIC Failover Configuration – Add Entry

• Select Physical Port, Hosting Partition (VIOS), Capacity value, Failover Priority
• Physical port selection list limited to previously unselected physical ports
• Capacity % may be different for each backing device

vNIC Failover Configuration – Add Entry

• Select the vNIC Allow Auto Priority Failover option
  • Enabled: always fail over to a more favored (lower Failover Priority number) backing device
  • Disabled: only fail over when the current backing device is not operational
• Click OK to create the vNIC with its backing devices (a CLI equivalent is sketched below)
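
The same vNIC with multiple backing devices can also be created from the HMC CLI. A minimal
sketch, assuming managed system Server1, client partition lpar1, and two backing devices on VIOS
partitions vios1 (ID 1) and vios2 (ID 2) at failover priorities 10 and 20; the backing_devices
format (sriov/vios-name/vios-id/adapter-id/phys-port-id/capacity[/failover-priority]) varies by
HMC release, so verify with the chhwres man page:

$ # Add a vNIC with two SR-IOV backing devices (hypothetical names and IDs)
$ chhwres -r virtualio --rsubtype vnic -m Server1 -o a -p lpar1 \
    -a "backing_devices=\"sriov/vios1/1/1/0/2.0/10,sriov/vios2/2/2/0/2.0/20\""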

vNIC Failover Configuration – vNIC list

• Device Name indicates the device name in the client partition
• The vNIC Auto Priority Failover state is shown as selected
• Other columns are associated with the backing devices
• Backing Device State indicates the operational state of each backing device and which one is
  the "Active" backing device

vNIC Failover Configuration – Modify Backing Devices

• Click on the Action button for a list of Actions you can perform on the vNIC

• Click on Modify Backing Device

vNIC Failover Configuration – Modify Backing Device

• Modify the vNIC Auto Priority Failover options


• Click on Add Backing Devices button to configure additional backing devices

vNIC Failover Configuration – Add vNIC Backing Devices

• Select a physical port, VIOS, Capacity %, and Failover Priority

vNIC Failover Configuration – Add Backing Device

• Click Add entry to configure additional backing devices


• Click OK when done

vNIC Failover Configuration – Live Partition Migration

• Partition Migration wizard provides a proposed mapping from Source Backing


Device Port to Destination Backing Device Port.
• The Destination Backing Device Port for a Source Backing Device Port can be
modified.

vNIC Failover Configuration – Live Partition Migration
• The Modify Destination Backing Device Port dialog allows the following:
  • Change the physical adapter
  • Change the physical port
  • Change the host VIOS
  • Change the Capacity (%)

vNIC Failover

• Target 4Q2016 GA (except E850 (8408-E8E))


• E850 (8408-E8E) GA 1H2017

• Prerequisites (current targets)


• AIX 7.1 TL4 or later or AIX 7.2 or later
• IBM i 7.1 TR11 or later or 7.2 TR3 or later
• Linux not supported at GA. Phased implementation via Linux community in 2017
• VIOS 2.2.5.0, or later
• Firmware FW860.10 or later
• HMC V8R8.6.0 with mandatory PTF or later

PowerVM Network Virtualization Comparison

Technology   Live Partition   Quality of       Direct access   Link           Server Side         Requires
             Mobility         service (QoS)    perf.           Aggregation    Failover            VIOS

SR-IOV       No (1)           Yes              Yes             Yes (2)        No                  No
vNIC         Yes              Yes              No (3)          Yes (2)        vNIC failover (4)   Yes
SEA/vEth     Yes              No               No              Yes            SEA failover        Yes

Notes:
1. SR-IOV can optionally be combined with VIOS and virtual Ethernet to use higher-level
virtualization functions like Live Partition Mobility (LPM); however, client partition will not
receive the performance or QoS benefit.
2. Some limitations apply. For SR-IOV and vNIC see SR-IOV link aggregation support
3. Generally better performance and requires fewer system resources when compared to
SEA/virtual Ethernet
4. Available 2H2016

PowerVM Single Root I/O Virtualization:
Fundamentals, Configuration, and Advanced Topics

Performance Monitor & Topology

Performance Monitor for SR-IOV

• From the performance monitor screen click Network Utilization Trend -> More
Graphs -> SR-IOV adapters
• The breakdown by physical ports shows how heavily utilized a physical port is and
can be used to determine whether or not there is additional bandwidth available

Performance Monitor for SR-IOV

• The breakdown by partitions shows each logical port individually and which LPAR owns it
• This can be used to determine which logical ports are using the physical port's bandwidth

Physical and Logical Port Counters

Physical and logical port counters are available via the HMC GUI or CLI; a listing sketch
follows below.
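
For the CLI path, lshwres lists the SR-IOV adapters and ports; a minimal sketch, assuming a
managed system named Server1 (counter and statistics options vary by HMC level, so check the
lshwres man page):

$ # List adapters in SR-IOV shared mode
$ lshwres -r sriov --rsubtype adapter -m Server1
$ # List Ethernet physical ports and Ethernet logical ports
$ lshwres -r sriov --rsubtype physport -m Server1 --level eth
$ lshwres -r sriov --rsubtype logport -m Server1 --level eth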

New HMC GUI: SR-IOV and vNIC Diagram

[Screenshots: the HMC enhanced+ GUI topology diagrams of SR-IOV adapters, logical ports, and
vNICs.]
PowerVM Single Root I/O Virtualization:
Fundamentals, Configuration, and Advanced Topics

Maintenance

SR-IOV Firmware Update

• There are two pieces of firmware for SR-IOV that are built into system firmware
• Adapter driver firmware – The driver code that configures the adapter and logical ports
• Adapter firmware – The firmware that runs on the adapter

• Both levels of firmware are automatically updated to the levels included in the active system firmware in
the following cases
• System boot/reboot
• Adapter transitioned into SR-IOV mode
• Adapter level concurrent maintenance

• The level description will indicate if a new version of SR-IOV adapter firmware is included

SR-IOV Firmware Update

• When system firmware is updated concurrently, the SR-IOV levels on currently configured SR-IOV adapters
are not automatically updated
• Updating the SR-IOV levels will cause a temporary network outage on the logical ports on the affected adapter

• Starting with system firmware level FW830, the SR-IOV firmware levels can be viewed and updated using the
HMC GUI

• On the HMC enhanced+ GUI select the Server -> Actions -> SR-IOV Firmware Update

SR-IOV Firmware Update

• Each adapter in SR-IOV mode will display the active adapter driver and adapter
levels.
• If an update is available, the Update Available column will indicate Yes
• Select the set of adapters to update, then right click to launch the context menu
• Update Adapter Driver Firmware - will update only the adapter driver code and will not update the adapter
firmware. This type of update results in a shorter network outage on logical ports
• If only the adapter driver firmware is updated then another update may still be available for the adapter firmware
• Update Adapter Driver and Adapter Firmware - will update both levels
• Adapters are updated serially to ensure that both devices in a multipath setup are not affected
at the same time. This means that the total time to update can be significant if many adapters
are selected.

Physical Adapter Replacement

• SR-IOV adapters can be added, removed, and replaced without disrupting the system or
shutting down the partitions.
• For adapter replacement, all the logical ports must be de-configured.
• The HMC provides a GUI for adapter concurrent maintenance operations. (Serviceability →
Hardware → MES Tasks → Exchange FRU)

• New adapter must have the same capabilities (same type/feature code).
• When new adapter is plugged into the same slot as the original adapter, the hypervisor
will automatically associate the old adapter’s configuration with the new adapter.
• If the new adapter is plugged in to a different slot, the chhwres command is needed to
associate the original adapter configuration with the new adapter.

$ chhwres -m Server1 -r sriov --rsubtype adapter -o m \
    -a "slot_id=2101020b,target_slot_id=21010208"
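
To find the slot_id values, list the adapters first; a small sketch, assuming the same managed
system (field names may vary slightly by HMC level):

$ # Show adapter IDs, slot IDs, and physical locations
$ lshwres -r sriov --rsubtype adapter -m Server1 \
    -F adapter_id,slot_id,phys_loc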

Known SR-IOV / vNIC Issues

• AIX NIB configuration issue


• AIX APARs: IV77944, IV80034, IV80127, IV82254, IV82479

• IBM i SR-IOV logical port / vNIC VLAN restrictions issue


• V7R1 – Resolution is not required as OS generated VLAN tags are not supported in V7R1
• V7R2 – For SR-IOV apply PTFs MF62338, MF62348, MF62349; For vNIC PTF MF62676
• V7R3 – For SR-IOV apply PTFs MF62340, MF62350, MF62351; For vNIC PTF MF62703

• PVID issue
• FW840.40 or later
• FW830.xx (target availability 12/2016)

• Transmit hang issue


• FW840.20 or later
• FW830.30 or later
• FW820.50 or later

Links and Additional Information
• IBM Power Systems SR-IOV: Technical Overview and Introduction Redpaper
  • http://www.redbooks.ibm.com/redpieces/abstracts/redp5065.html?Open
• LinkedIn PowerVM group
  • https://www.linkedin.com/groups/8403988
• SR-IOV FAQs blog and wiki
  • https://www.ibm.com/developerworks/community/wikis/home?lang=en_us#!/wiki/Power%20Systems/page/Introduction%20to%20SR-IOV%20FAQs
  • https://www.ibm.com/developerworks/community/wikis/home?lang=en_us#!/wiki/Power%20Systems/page/SR-IOV%20Frequently%20Asked%20Questions
• vNIC FAQs blog and wiki
  • https://www.ibm.com/developerworks/community/wikis/home?lang=en_us#!/wiki/Power%20Systems/page/Introduction%20to%20vNIC%20FAQs
  • https://www.ibm.com/developerworks/community/wikis/home?lang=en_us#!/wiki/Power%20Systems/page/vNIC%20Frequently%20Asked%20Questions
• Introducing New PowerVM Virtual Networking Technology
  • https://ibm.biz/pvm-vnic
• Knowledge center links for SR-IOV firmware update on prior releases
  • http://www-01.ibm.com/support/knowledgecenter/POWER7/p7hb1/p7hb1_updating_sriov_firmware.htm?cp=POWER7%2F1-8-3-5-2-4-6-0
  • http://www-01.ibm.com/support/knowledgecenter/P8DEA/p8efd/p8efd_updating_sriov_firmware.htm
• Our contact info
  • Allyn Walsh – [email protected] or Chuck Graham – [email protected]
