IBM Power Systems SR-IOV: Technical Overview and Introduction
Shivaji D Bhosale
Alexandre Bicas Caldeira
Bartłomiej Grabowski
Chuck Graham
Alexander D Hames
Volker Haug
Marc-Eric Kahle
Cesar Diniz Maciel
Manjunath N Mangalur
Monica Sanchez
International Technical Support Organization
July 2014
REDP-5065-00
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
This edition applies to the IBM Power 770 (machine type 9117-MMD), IBM Power 780 (machine type 9179-MHD), and IBM
Power ESE (machine type 8412-EAD) Power Systems servers, and to HMC V7R7.9.0.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .x
Chapter 2. Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.1 SR-IOV hardware requirements and planning introduction. . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Hardware requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3 Operating system requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4 System management requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Chapter 4. Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.1 Verify prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.1.1 Verify that the Power Systems server is SR-IOV Capable . . . . . . . . . . . . . . . . . . 22
4.1.2 Verify that the system has at least one SR-IOV capable adapter . . . . . . . . . . . . . 23
4.2 Adapter operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2.1 Switching adapter from dedicated mode to SR-IOV shared mode . . . . . . . . . . . . 23
4.2.2 Switching adapter from SR-IOV shared mode to dedicated mode . . . . . . . . . . . . 24
4.3 Physical port operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.3.1 Physical port properties: General . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.3.2 Physical port properties: Advanced. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.3.3 Physical port properties: Port Counters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
4.4 Logical port operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.4.1 Adding a logical port during partition creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.4.2 Adding a logical port using dynamic partitioning . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Chapter 5. Maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.1 Adapter firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.2 Problem determination and data collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.2.1 SR-IOV Platform Dump. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.3 Problem recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
5.4 Concurrent maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.4.1 Adapter concurrent and non-concurrent maintenance . . . . . . . . . . . . . . . . . . . . . 60
5.5 IBM i performance metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.6 HMC commands for SR-IOV handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Active Memory™ POWER7® Redbooks®
AIX® POWER7+™ Redpaper™
developerWorks® POWER8® Redbooks (logo) ®
Global Technology Services® PowerHA® RS/6000®
IBM® PowerVM® System Storage®
POWER® PureFlex® Tivoli®
Power Systems™ PureSystems®
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Peripheral Component Interconnect Express (PCIe) single root I/O virtualization (SR-IOV) is
a virtualization technology on IBM Power Systems™ servers. SR-IOV allows multiple logical
partitions (LPARs) to share a PCIe adapter with little or no run time involvement of a
hypervisor or other virtualization intermediary.
SR-IOV does not replace the existing virtualization capabilities that are offered as part of the
IBM PowerVM® offerings. Rather, SR-IOV complements them with additional capabilities.
This paper is directed to clients, IBM Business Partners, and system administrators who are
involved with planning, deploying, configuring, and maintaining key virtualization
technologies.
Authors
This paper was produced by a team of specialists from around the world working at the
International Technical Support Organization (ITSO), Austin Center.
Alexandre Bicas Caldeira works on the IBM Power Systems Advanced Technical Support
team for IBM Brazil. He holds a degree in computer science from the Universidade Estadual
Paulista (UNESP). Alexandre has more than 14 years of experience working with IBM and
IBM Business Partners on Power Systems hardware, IBM AIX®, and PowerVM virtualization
products. He is also skilled on IBM System Storage®, IBM Tivoli® Storage Manager, IBM
PureSystems, IBM System x, and VMware.
Chuck Graham is a Senior Technical Staff Member and is the Lead Architect for PowerVM
SR-IOV support on Power Systems. He joined IBM in 1982 after graduating from the
University of Iowa with a Bachelor of Science degree in Electrical and Computer
Engineering. He has spent the majority of his career as a Developer and Architect of physical
and virtual I/O solutions for IBM computer systems and currently works in the Power
Hypervisor area. Chuck is also a Master Inventor with numerous patents in the field of
computer I/O.
Volker Haug is an Open Group Certified IT Specialist within IBM Systems and Technology
Group in Germany, supporting Power Systems clients and IBM Business Partners. He holds a
diploma degree in Business Management from the University of Applied Studies in Stuttgart.
His career includes more than 27 years of experience with Power Systems, AIX, and
PowerVM virtualization; he has written several IBM Redbooks publications about Power
Systems and PowerVM. Volker is an IBM POWER8 Champion and a member of the German
Technical Expert Council, an affiliate of the IBM Academy of Technology.
Marc-Eric Kahle is an AIX Software Specialist at IBM Global Technology Services® in
Ehningen, Germany. He also has worked as a Power Systems Hardware Support Specialist
in the IBM RS/6000®, Power Systems, and AIX fields since 1993. He has worked at IBM
Germany since 1987. His area of expertise includes Power Systems hardware and he is
an AIX Certified Specialist. He has participated in the development of seven other IBM
Redbooks publications.
Cesar Diniz Maciel is an Executive IT Specialist with IBM in the United States. He joined IBM
in 1996 in Presales Technical Support for the IBM RS/6000 family of UNIX servers in Brazil,
and came to IBM United States in 2005. He is part of the Global Techline team, working on
presales consulting for Latin America. He holds a degree in Electrical Engineering from
Universidade Federal de Minas Gerais (UFMG) in Brazil. His areas of expertise include Power
Systems, AIX, and IBM Power Virtualization. He has written extensively about Power
Systems and related products. This is his eighth ITSO residency.
Manjunath N Mangalur is a Staff Software Engineer at the IBM Power Systems and
Technology Lab in IBM India. He holds a degree in Information Science from
Vishweshwaraiah Technological University. He has over eight years of experience with IBM
and has worked with the AIX operating system, IBM Power Systems and PureFlex® systems.
His areas of expertise include AIX security, kernel, Linux, PowerVM Enterprise, virtualization,
Power Systems, PureFlex systems, IBM Systems Director, and reliability, availability,
serviceability (RAS).
Monica Sanchez is an Advisory Software Engineer with more than 13 years of experience in
AIX and Power Systems support. Her areas of expertise include AIX, HMC, and networking.
She holds a degree in Computer Science from Texas A&M University and is currently part of
the Power HMC Product Engineering team, providing level 2 support for the IBM Power
Systems Hardware Management Console.
Scott Vetter
Executive Project Manager, PMP

Thanks to the following people for their contributions to this project:
Tamikia Barrow, Bill Brandmeyer, Charlie Burns, Medha D. Fox, Charles S. Graham,
Alexander Hames, Samuel Karunakaran, Kris Kendall, Stephen Lutz, Michael J. Mueller,
Kanisha Patel, Anitra Powell, Rajendra Patel, Vani Ramagiri, Woodrow Lemcke,
Tim Schimke, Jacobo Vargas, Bob Vidrick
IBM
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
This chapter introduces the SR-IOV architecture and its key benefits and features when
deployed on IBM Power Systems.
Initial SR-IOV deployment supports up to 48 logical ports per adapter, depending on the
adapter (for a list of available SR-IOV adapters, see Figure 2-1 on page 8). You can provide
additional fan-out for more partitions by assigning a logical port to a VIOS, and then using that
logical port as the physical device for a Shared Ethernet Adapter (SEA). VIOS clients can
then use that SEA through a traditional virtual Ethernet configuration.
Overall, SR-IOV provides integrated virtualization without VIOS and with greater server
efficiency as more of the virtualization work is done in the hardware and less in the software.
Generally, SR-IOV and PowerVM SEA address two separate use cases. The PowerVM SEA
technology scales well for connectivity and provides options for advanced functions such
as Live Partition Mobility. For a large-scale application that requires little bandwidth per
virtual adapter and no bandwidth control per virtual adapter, particularly in an existing
PowerVM installation, SEA can be the appropriate solution.
However, PowerVM SEA does not allow a hardware-direct connection, its extra context
switches require additional processing, and it has few options for quality of service (QoS).
SR-IOV can be used as a high-performance solution with varying bandwidth requirements per
logical port, and SR-IOV allows administrators to control bandwidth allocations per logical
port.
PowerVM SEA can be implemented using nearly every Ethernet adapter in the IBM Power
Systems portfolio; SR-IOV is limited to certain adapters. Also, SR-IOV supports only selected
Power Systems servers. PowerVM SEA is a feature of the VIOS. VIOS is not a requirement
for SR-IOV. Fewer system resources are required to use SR-IOV than PowerVM SEA overall.
PowerVM vNIC provides options for advanced functions such as Live Partition Mobility with
better performance and I/O efficiency when compared to PowerVM SEA. In addition,
PowerVM vNIC provides users with bandwidth control (QoS) by leveraging SR-IOV logical
ports as the physical interface to the network.
SR-IOV differs from Integrated Virtual Ethernet (IVE) in areas such as QoS and system
availability. With IVE, a logical port competes for bandwidth with all the other logical ports
defined on the same physical port. This does not prevent a logical port from reaching the
media speed if the other logical ports do not have an intensive I/O workload. The QoS feature
of SR-IOV lets you assign the minimum bandwidth percentage that you want per logical port.
A logical port can go above this percentage if no contention for bandwidth exists on the link.
With direct access I/O, SR-IOV capable adapters running in shared mode allow the operating
system to directly access the slice of the adapter that is assigned to its partition, without the
data passing through a virtualization intermediary.
The exact resource represented by the capacity value can vary based on the physical port
type and protocol. In the case of Ethernet physical ports, capacity determines the minimum
percentage of the physical port’s transmit bandwidth that the user desires for the logical port.
For example, consider Partitions A, B, and C, with logical ports on the same physical port. If
Partition A is assigned an Ethernet logical port with a capacity value of 20%, Partitions B and
C cannot use more than 80% of the physical port’s transmission bandwidth unless Partition A
is using less than 20%. Partition A can use more than 20% if bandwidth is available. This
ensures that, although the adapter is being shared, the partitions maintain their portion of the
physical port resources when needed.
In a single-partition deployment, the SR-IOV capable adapter in shared mode is wholly owned
by a single partition, and no adapter sharing takes place. This scenario offers no practical
benefit over traditional I/O adapter configuration, but the option is available.
In a more complex deployment scenario, an SR-IOV capable adapter could be shared by both
VIOS and non-VIOS partitions, and the VIOS partitions could further virtualize the logical
ports as shared Ethernet adapters for VIOS client partitions. This scenario leverages the
benefits of direct access I/O, adapter sharing, and QoS that SR-IOV provides, and also the
benefits of higher-level virtualization functions, such as Live Partition Mobility (for the VIOS
clients), that VIOS can offer.
For more deployment scenarios, see Chapter 3, “Deployment scenarios” on page 11.
At times, this paper uses the terms Virtual Function and logical port interchangeably. The
background of these terms is as follows:
Virtual Function (VF) A VF is a term used by the PCI Special Interest Group (SIG) to define
a specific type of PCI function. As a PCI function, a VF has
characteristics and capabilities that determine how the VF behaves on
a PCI bus. The definition of a VF does not include how a VF maps to
any particular parts of a PCI device (for example, network ports) other
than its PCI interface.
Logical port For PowerVM, a logical port is a configurable entity that defines
characteristics and capability for a portion of a physical port on an I/O
device. Platform firmware uses the logical port configuration
information to manage platform firmware resources and to configure
an I/O device. When an SR-IOV logical port is activated, either through
partition activation or through a DLPAR add operation, a VF is
associated with the logical port to allow the partition to access the PCI
device.
In SR-IOV shared mode, adapters partition their host interface using VFs. Power Systems
SR-IOV implements VFs as logical ports. Each logical port is associated with a physical port
of the adapter.
Figure 1-1 shows the relationship between an SR-IOV adapter and client partitions, including
a scenario in which logical ports are assigned to a virtual I/O server and then used as the
physical devices of Shared Ethernet Adapters (SEA).
Figure 1-1 An SR-IOV adapter shared by client partitions and a VIOS hosting an SEA
Chapter 2. Planning
Implementing the SR-IOV technology requires some planning, which starts with the
appropriate operating system levels, firmware level, adapter types, and adapter settings.
SR-IOV functions present several configuration options that must be considered when
planning. This section describes the server support requirements for implementing the SR-IOV
functions with IBM PowerVM.
To enable the SR-IOV adapter sharing function, a server that supports SR-IOV is required in
addition to an SR-IOV capable I/O adapter. You must be aware of the following hardware
considerations:
Not all I/O slots support SR-IOV
SR-IOV-capable I/O slots may have different capabilities
The capabilities of SR-IOV adapters may differ
PowerVM Standard or Enterprise Edition is required for using SR-IOV
Adapter                              Logical ports per     Low profile,  Full high,   Low profile,  Full high,
                                     adapter (per port)    multiple OS   multiple OS  Linux only    Linux only
PCIe3 4-port (10 Gb FCoE and 1 GbE)  48 (20/20/4/4)        #EN0J         #EN0H        #EL38         #EL56
SR optical fiber and RJ45
PCIe3 4-port (10 Gb FCoE and 1 GbE)  48 (20/20/4/4)        #EN0L         #EN0K        #EL3C         #EL57
SFP+ copper twinax and RJ45
PCIe3 4-port (10 Gb FCoE and 1 GbE)  48 (20/20/4/4)        #EN0N         #EN0M        N/A           N/A
LR optical fiber and RJ45
Figure 2-1 Available SR-IOV adapters
The minimum HMC code level for SR-IOV support is Version 7 Release 7.9.0 (HMC
V7R7.9.0).
Chapter 3. Deployment scenarios
One of the benefits of SR-IOV is enabling the virtualization of the network adapters without
the requirement of setting up and configuring a VIOS. This chapter highlights several
configuration scenarios that use this capability.
Figure 3-1 SR-IOV adapter assigned to a single partition
Although supported, this scenario does not offer any benefit over using a traditional adapter.
The ports are dedicated to a single partition, and other partitions cannot share them.
Unsupported: This scenario is not supported by IBM i unless the system is connected to
an HMC and the adapter is configured in shared mode.
Figure 3-2 illustrates two partitions that share a pair of SR-IOV adapters. Each partition has
one logical port associated with a physical port in two separate physical adapters. This is a
good configuration for availability, because the two logical ports can be used in a high
availability configuration, such as link aggregation.
Figure 3-2 Two partitions sharing a pair of SR-IOV adapters
Figure 3-3 Two VIOS partitions with SEAs and a client partition sharing SR-IOV adapters
SR-IOV logical ports can coexist with virtual adapters or physical dedicated adapters without
restrictions. This capability enables a partition to use the SR-IOV logical port as the primary
network interface, and to have a Virtual Ethernet Adapter as a backup interface. If the
network traffic through the logical port is interrupted, it is then routed through the
backup interface. This can be achieved as follows:
On AIX: By using the Network Interface Backup feature (Figure 3-4 on page 15).
On IBM i: By using Virtual IP Address (VIPA).
On Linux: By configuring active-backup channel bonding (see the sketch after Figure 3-4).
Figure 3-4 SR-IOV logical port with a Virtual Ethernet adapter as backup (Network Interface Backup)
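For illustration, a minimal sketch of the Linux approach, assuming eth0 is the SR-IOV logical
port, eth1 is the Virtual Ethernet adapter, and the IP address is an example value (your
distribution's network scripts are the usual way to make this persistent):

# Load the bonding driver in active-backup mode; prefer eth0 when it is up
modprobe bonding mode=active-backup miimon=100 primary=eth0
# Bring up the bond interface, then enslave the logical port and its backup
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1

With miimon=100, the driver checks the link state every 100 ms and fails over to eth1 when
eth0 loses its link.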
From a virtualization perspective, the SR-IOV logical ports are seen as physical adapters;
therefore, operations like Live Partition Mobility are not supported when an SR-IOV logical port
is configured on the partition. On AIX, you can use the Network Interface Backup feature with
a virtual adapter, together with a dynamic partition operation to remove the logical port from
the partition, in order to route the network traffic to the virtual adapter. You can then move the
partition to another server by using Live Partition Mobility, and at the destination server
reconfigure an SR-IOV logical port on the partition. Linux partitions can perform a similar
operation by using channel bonding to a virtual Ethernet adapter.
PowerVM vNIC combines many of the best features of SR-IOV and PowerVM SEA to
provide a network solution with options for advanced functions such as Live Partition Mobility,
along with better performance and I/O efficiency when compared to PowerVM SEA. In
addition, PowerVM vNIC provides users with bandwidth control (QoS) capability by leveraging
SR-IOV logical ports as the physical interface to the network.
Figure 3-5 on page 16 shows a logical diagram of the vNIC structure and illustrates how the
data flow is directly between LPAR memory and the SR-IOV adapter.
Figure 3-5 vNIC logical diagram: control flow passes through the vNIC servers in the VIOS and the
hypervisor, while Tx and Rx data flows directly between the LPAR and the SR-IOV adapter
(VF = Virtual Function)
The key element of the vNIC model is a one-to-one mapping between a vNIC virtual adapter
in the client LPAR and the backing SR-IOV logical port in the VIOS. With this model, packet
data for transmission (similarly for receive) is moved from the client LPAR memory to the
SR-IOV adapter directly, without being copied to the VIOS memory. The benefits of bypassing
the VIOS are the elimination of a memory copy (lower latency) and a reduction in CPU and
VIOS memory consumption (greater efficiency).
vNIC support can be added to a partition in a single step by adding a vNIC client virtual
adapter to the partition using the management console (HMC). The management console
creates all the necessary devices in the client LPAR as well as the VIOS. From the user
perspective there is no additional configuration of the VIOS components for vNIC beyond
configuration of the vNIC client adapter.
The minimum required PowerVM and operating system levels to support vNIC are as follows:
PowerVM 2.2.4
– VIOS Version 2.2.4
– System Firmware Release 840
– HMC Release 8 Version 8.4.0
Operating Systems
– AIX 7.1 TL4 or AIX 7.2
– IBM i 7.1 TR10 or IBM i 7.2 TR3
On AIX and IBM i, only SR-IOV logical ports can be members of the link aggregation. Linux also
supports link aggregation between an SR-IOV logical port and a Virtual Ethernet Adapter.
With SR-IOV, using the 802.3ad/802.1ax (LACP) standards is possible.
On AIX, LACP link aggregation has been part of the operating system since AIX V5.1. IBM i
introduced link aggregation in i V7R1 TR3, and Linux implements link aggregation
through the channel bonding driver.
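For illustration, a hedged sketch on AIX, assuming ent0 and ent1 are SR-IOV logical ports on
two separate adapters (smitty etherchannel provides the same function interactively):

# Create an 802.3ad (LACP) link aggregation over the two logical ports
mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0,ent1 -a mode=8023ad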
Figure 3-6 Valid LACP configuration with SR-IOV logical ports on two separate adapters
Figure 3-7 Invalid LACP configuration with many SR-IOV logical ports
Multiple primary SR-IOV logical ports are not allowed in an LACP configuration: an SR-IOV
logical port cannot be included as a primary adapter in an EtherChannel configuration with
more than one primary adapter. When an SR-IOV logical port is configured in an
active-passive configuration, it must be configured with the capability to detect when to fail
over from the primary to the backup adapter.
On AIX, configure a backup adapter and an IP address to ping (see the sketch after this
list).
On IBM i with VIPA, options for detecting network failures besides link failures include
Routing Information Protocol (RIP), Open Shortest Path First (OSPF), or a custom
monitoring script.
On Linux, use the bonding support to configure monitoring to detect network failures.
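For example, a hedged sketch of the AIX case, assuming ent0 is the SR-IOV logical port,
ent1 is the backup Virtual Ethernet adapter, and 10.1.1.1 is a reachable address to ping:

# Network Interface Backup: primary ent0, backup ent1, ping 10.1.1.1 to detect failures
mkdev -c adapter -s pseudo -t ibm_ech -a adapter_names=ent0 \
-a backup_adapter=ent1 -a netaddr=10.1.1.1 -a num_retries=3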
Chapter 4. Configuration
This chapter describes various aspects of configuring an SR-IOV adapter.
Verify that the SR-IOV Capable entry is set to True in the Value column.
If the SR-IOV Capable entry is not listed, the system most likely does not meet the hardware
or firmware requirements that are described in 2.1, “SR-IOV hardware requirements and
planning introduction” on page 8.
If the SR-IOV Capable value is False, you are running the PowerVM Express edition. With
PowerVM Express, the user can enable SR-IOV shared mode but can create logical ports
for only a single partition, and the adapter cannot be shared.
Verify that you have at least one adapter with a value of Yes in the SR-IOV Capable column.
The number in parentheses in the last column indicates the maximum number of logical ports
that the slot supports. However, if an SR-IOV capable adapter has been installed and set to
shared mode, the number indicates the maximum logical ports that the adapter supports.
If no SR-IOV capable adapters are installed, you will need to install one into an empty slot that
has a value of “Yes” in the SR-IOV Capable column.
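As a hedged alternative to the GUI (the managed system name is an example value, and the
output fields vary by HMC level), the HMC command line can list the adapters that are
already in SR-IOV shared mode:

# List SR-IOV adapters in shared mode on the managed system
lshwres -r sriov --rsubtype adapter -m MANAGEDSYSTEM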
Click the link of an SR-IOV capable adapter in the I/O table. The Physical I/O Properties panel
opens (Figure 4-3). Click the SR-IOV tab.
Figure 4-3 SR-IOV capable adapter properties: dedicated mode (shared mode not enabled)
Select the Enable SR-IOV Shared Mode check box, and then click OK. To see the change,
exit the server Properties panel and then reopen it.
As Figure 4-4 shows, the adapter is now owned by the hypervisor, and the SR-IOV Capable
column is updated to display the logical port limit of the adapter.
Figure 4-4 Server Properties panel: I/O tab with SR-IOV shared mode enabled
If active or inactive partitions are using the logical ports of the adapter, the mode change
operation will fail.
For logical ports in use by active partitions, select the active partition, click Dynamic
Partitioning → SR-IOV Logical Ports from the Tasks menu, and dynamically remove the
logical ports.
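A hedged command-line equivalent of the dynamic removal (names and IDs are example
values; verify the syntax for your HMC level, and see 5.6, “HMC commands for SR-IOV
handling” on page 66):

# Dynamically remove (-o r) an SR-IOV logical port from a running partition
chhwres -r sriov --rsubtype logport -m MANAGEDSYSTEM -o r -p LPARNAME \
-a "adapter_id=1,logical_port_id=27004001"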
The Physical Port Properties panel has three tabs: General, Advanced, and Port Counters.
4.3.1 Physical port properties: General
The General tab (Figure 4-8) has the following properties and settings.
Label and Sublabel These port labels are for your reference so that the physical ports are
easier to identify; they can be set to anything you choose. They will be
displayed in their respective columns on the Physical I/O Properties
panel (Figure 4-7 on page 25).
Capacity This section shows the available capacity for this physical port. The
total capacity is always 100.
Configurable Negotiated Properties
Here you can set configured properties, like port speed. However, the
actual speed will ultimately be determined by the adapter and will be
shown in the Negotiated column.
The Advanced tab also shows the maximum number of diagnostic logical ports and
promiscuous logical ports, and how many of each are configured.
Note: Port switch mode (VEB/VEPA) can be changed only when no logical ports are
configured on the physical port.
Permissions This specifies the maximum and configured number of logical ports that
are in the diagnostic or promiscuous mode.
Logical Port Limits This specifies the total, maximum, and available number of configured
logical ports that are supported by the system firmware for the selected
physical port.
Total Supported
The total number of logical ports supported for all protocol
types on the physical port.
Supported
The number of logical ports of the specified type (for example,
Ethernet) supported by the physical port. The physical port
may have more logical port limits based on the logical port
type.
Max The maximum number of logical ports, of that type, that can be
allocated to logical partitions. The value is either a default
value determined by the hypervisor or a user-specified value
that is no higher than the value in the respective “Supported”
column.
Note: The logical port limits Max value can be changed only when no logical ports are
configured on the adapter, including those on other physical ports.
Click Actions → Create Logical Port and then select the type of logical port you want to
configure. The HMC displays a panel that lists all available SR-IOV physical ports of that type
in the system (Figure 4-12).
Select the radio button of a physical port that has at least one available logical port (LP), then
click OK, which then takes you to the General tab of the Logical Port Properties panel.
Logical port properties: General
On the Logical Port Properties panel (Figure 4-13), define the capacity for this logical port and
whether the logical port should operate in diagnostic mode, promiscuous mode, or both
modes.
Figure 4-14 shows the error message when performing a dynamic addition of a logical port
that exceeds the maximum capacity.
Figure 4-14 Error message when dynamically adding a logical port, exceeding capacity
The Advanced tab shows additional properties of the logical port that you can modify.
Port VLAN ID Valid values are 0 and 2 - 4094.
VLAN Restrictions When promiscuous mode is enabled, the Allow All VLAN IDs setting
is the only option available. Otherwise, you can choose whether to
allow all, deny all, or allow a specified range of VLAN IDs.
Port VLAN ID (PVID) Priority
Valid values are 0 - 7.
Configuration ID This is similar to a virtual slot ID for virtual Ethernet and should
typically be kept at its default value, which is assigned by the HMC.
MAC Address By default, the MAC address is auto-assigned by the HMC. To define a
specific MAC address, select the Override check box, which enables
an additional field where you can enter your MAC address.
MAC Address Restrictions
When promiscuous mode is enabled, Allow all O/S Defined MAC
Addresses is the only option available. Otherwise, you can choose
whether to allow all, deny all, or allow specific operating system
defined MAC addresses.
After you set the attributes of your logical port and click OK, you are returned to the Create
LPAR Wizard, where you can create more logical ports or complete the profile creation.
Note: Logical ports that are defined during profile creation are not configured on the
adapter, and no validation of resource availability or conflict is done, until the profile is
activated. During activation, HMC performs validation checks to verify resource availability.
If any one of the logical port resources is not available, the activation fails. When the profile
has been activated, select the logical partition and then navigate to Properties → SR-IOV
Logical Ports in the Tasks menu to see the SR-IOV logical ports.
Note: For AIX, Linux, and VIOS partitions, the DLPAR add and remove operations require
a working Resource Monitoring and Control (RMC) connection between the partition and
the HMC.
From the SR-IOV Logical Ports panel, select Action → Add Logical Port and choose the
type of logical port you want to add. Next, you are prompted to select the physical port from
which you want to create the logical port. Click the appropriate radio button, and click OK.
Figure 4-17 Add Ethernet Logical Port panel: select physical port
The Logical Port Properties panel opens (described in section 4.4.1, “Adding a logical port
during partition creation” on page 29). Complete the configuration details from there.
When the DLPAR add operation is complete on the HMC, you might need to act on the
partition to make the operating system aware of the new device.
For an AIX partition, run cfgmgr.
On the VIOS, use the cfgdev command.
IBM i and Linux dynamically reconfigure and require no additional action.
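On AIX, for example, the sequence might look like this (the grep pattern relies on the VF
string in the device description, as described in 4.6.1, “AIX”):

# Discover the new logical port, then list the adapters that are Virtual Functions
cfgmgr
lsdev -Cc adapter | grep -i vf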
Note: Remember to save all DLPAR operations to the HMC partition profile if you want the
changes to remain for future activations.
Select the server, then from the Tasks menu, click Properties, and then click the I/O tab.
Next, click the link for the appropriate SR-IOV adapter, and click the SR-IOV tab. Click the
radio button by the appropriate physical port to see the associated logical ports
(Figure 4-18). Then click the link of the location for the logical port that you want to edit.
Figure 4-18 Server Physical I/O Properties panel: Configured Logical Ports
The Logical Port Properties panel will be similar to the panel in Figure 4-13 on page 30, but
some fields might be read-only. Figure 4-19 shows that Diagnostics can still be enabled or
disabled, but you can no longer modify the Capacity value.
Figure 4-19 Logical Port Properties: General tab from a running partition
Some properties on the Advanced tab (Figure 4-20 on page 35) have limitations also.
VLAN Restrictions and MAC Address Restrictions cannot be changed.
If the VLANs list is non-empty, you can add to the list, but you cannot remove VLAN IDs
from the list. The same rule applies to the MAC Addresses list.
Figure 4-20 Logical Port Properties: Advanced tab from a running partition
4.5 Device mapping
The SR-IOV End-to-End Mapping task has been added under the Hardware Information →
Adapters menu at the server level (Figure 4-21).
This task launches a panel that lists the SR-IOV physical ports of the system (Figure 4-22).
Select the radio button of a physical port to view the device mapping between the configured
logical ports and the operating system devices.
If the owner partition is running AIX or Linux, an RMC connection is required to see the
operating system device names. Without RMC, Device Name will show Unknown.
End-to-end mapping is similar in IBM i but does not require an RMC connection.
4.6.1 AIX
On AIX, the logical port is seen as a physical Ethernet device, as shown in Example 4-1. The
VF string at the end of the description of the adapter indicates that it is a logical port (Virtual
Function).
The lscfg command lists the physical adapter characteristics, as shown in Example 4-2. The
Hardware Location Code attribute shows the location of the physical adapter, and also the
physical port that the logical port is attached to.
Ethernet Adapter:
Network Address.............2E8409671802
ROM Level.(alterable).......0.0.9999.19068
Hardware Location Code......U2C4B.001.DBJD102-P2-C8-T4-S12
The lsattr command lists the attributes of the adapter. Example 4-3 shows some of the
attributes from the command output.
The media speed shows the speed that the physical port is configured to use. If your adapter
is set to auto negotiation, media_speed in the lsattr output indicates Auto_Negotiation. The
virtual port has its speed based on the physical port speed, and the capacity as configured
through the HMC.
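A hedged illustration of checking these attributes directly (ent1 is an example device name):

# Show the media speed and jumbo frame settings of the logical port
lsattr -El ent1 -a media_speed -a jumbo_frames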
By default, the HMC configures the MTU size for the adapter, and whether to use jumbo
frames. You can change the configuration by using the chdev command (Example 4-4).
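For example (ent1 and en1 are example device names; some attribute changes require the
device to be free, or the -P flag and a reboot):

# Enable jumbo frames on the adapter, then raise the interface MTU
chdev -l ent1 -a jumbo_frames=yes
chdev -l en1 -a mtu=9000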
The entstat command shows more information about the adapter. Example 4-5 highlights
some information from the command output that is related to the logical and physical port
configuration.
General Statistics:
-------------------
No mbuf Errors: 0
Adapter Reset Count: 4
Adapter Data Rate: 20000
Driver Flags: Up Broadcast Running
Simplex Promiscuous 64BitSupport
ChecksumOffload LargeSend DataRateSet
IPV6_LSO IPV6_CSO LARGE_RECEIVE
VIRTUAL_PORT PHYS_LINK_UP
10GbE SFP+ CU Integrated Multifunction CNA 10GbE GX++ Gen2 Converged Network
Adapter
-------------------------------------------------------------
Device ID: df1028e214103904
Version: 1
Device State: Open
Physical Port Link Status: Up
Logical Port Link Status: Up
Physical Port Speed: 10 Gbps Full Duplex
...
Physical Port Promiscuous Mode Changeable: No
Physical Port Promiscuous Mode: Disabled
Logical Port Promiscuous Mode: Enabled
Physical Port All Multicast Mode Changeable: Yes
Physical Port All Multicast Mode: Disabled
Logical Port All Multicast Mode: Disabled
...
With the information from the entstat command, you can calculate the percentage of the
physical port’s bandwidth that is designated to the logical port.
VF Minimum Bandwidth is the capacity assigned to the logical port. The example shows
this value as 28%.
Physical Port Speed is the actual speed that the port is running. The example shows this
as 10 Gbps.
Therefore, the bandwidth assigned to this port is 2.8 Gbps.
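A quick hedged sketch of that arithmetic from the AIX command line, assuming ent1 is the
logical port and that the entstat -d output contains the VF Minimum Bandwidth and Physical
Port Speed lines, as in Example 4-5:

# Show the two values that the calculation uses
entstat -d ent1 | grep -E "VF Minimum Bandwidth|Physical Port Speed"
# 10 Gbps x 28% = 2.8 Gbps of guaranteed transmit bandwidth
echo "scale=1; 10 * 28 / 100" | bc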
Along with AIX commands, more information about SR-IOV port configuration and statistics
is available on the HMC.
4.6.2 IBM i
On the IBM i, the logical ports can be identified by the specific codes. As shown in the
Example 4-6, the ports are reported with 2C4C type.
The resource type (CCIN) depends on the type of the physical SR-IOV adapter being used.
Table 4-1 lists CCIN codes for specific SR-IOV adapters.
Table 4-1 CCIN codes for specific SR-IOV adapters
Feature code  CCIN  Adapter description
EN0H          2B93  PCIe2 LP 4-port (10 Gb FCoE & 1 GbE) SRIOV SR&RJ45
EN10          2C4C  Integrated Multifunction Card w/ 10 GbE RJ45 & Copper Twinax
The ifconfig command can also be used to list the network interfaces on a system.
Example 4-8 shows the configured interface with the TCP/IP configuration.
The ethtool command shows information about an Ethernet device on Linux. It calculates
the adapter speed, based on the physical port speed and capacity that is configured at the
HMC, as Example 4-9 shows.
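A hedged illustration (eth0 is an example device name; for a logical port with 20% capacity
on a 10 Gbps physical port, the reported value would be 2000Mb/s):

# The reported speed reflects capacity x physical port speed
ethtool eth0 | grep -i speed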
The SR-IOV ports in Linux are supported by the be2net device driver when the adapter is in
either shared or dedicated mode.
4.6.4 Virtual I/O Server
To activate a VIOS with SR-IOV adapter logical ports, you must assign the adapter logical
ports to the VIOS partition by using a dynamic logical partition (DLPAR) operation at run time.
Alternatively, you can update the partition profile by adding SR-IOV ports and then activating
the VIOS partition.
The following steps add SR-IOV adapter ports to a partition profile. SR-IOV adapter ports can
also be added while you create a new profile.
1. Open the partition profile for modification and navigate to the SR-IOV logical ports tab in
the Logical Partition Profile properties panel (Figure 4-23).
2. Click the SR-IOV menu and select Add Logical Port (Figure 4-24).
3. Select the type of adapter to assign to the partition. For this scenario, we select Ethernet
Logical Port.
Note: Only the supported logical port types on your system are listed for selection of
the adapter type.
With the physical port selected, the HMC opens a panel (Figure 4-26) where you can view
and modify the properties of the logical port that is assigned to the partition profile.
4. Set the logical port properties on the General tab:
Capacity The desired minimum percentage of the physical port’s transmit
bandwidth for this logical port. Remaining bandwidth, whether unassigned
or unused on a physical port, will be shared equally among all logical
ports.
Diagnostic Select this check box to run the logical port in
diagnostic mode. Only one logical port per physical port is allowed to
have diagnostic mode set at any one time.
Promiscuous Select this check box so that the adapter allows
the partition to enable unicast promiscuous mode. Promiscuous mode
should be selected if the logical port will be a physical device for an SEA
(that is, if you want to use an SEA to further virtualize the logical port).
5. The next tab in the logical port properties panel shows advanced properties. The user can
set required properties on the logical port, as shown in Figure 4-27.
7. With the VIOS activated, the SR-IOV adapters are now available to the operating system.
You can verify the location codes from a command line, as shown in Example 4-10.
Ethernet Adapter:
Network Address.............2E84006E1900
ROM Level.(alterable).......0.0.9999.19068
Hardware Location Code......U2C4B.001.DBJD102-P2-C8-T1-S3
PLATFORM SPECIFIC
Name: ethernet
Node: ethernet@0
Device Type: network
Physical Location: U2C4B.001.DBJD102-P2-C8-T1-S3
Note: Restoring the partition data restores all partition data, not only the adapter
configuration.
The HMC attempts to restore the adapter configuration, but be aware of certain restrictions
because the HMC cannot switch the adapter mode and restore the physical port properties in
one operation.
If the adapter is in dedicated mode when the restore takes place, and the backup file has
the adapter in shared mode, the HMC restores only the adapter configuration (that is, it
switches the adapter to shared mode) and not the physical port properties.
To avoid this, you can either switch the adapter to shared mode before restoring partition
data or restore the partition data twice (first to switch the mode, then to restore physical
port properties).
If the adapter is in shared mode when the restore takes place, and the backup file has the
adapter in dedicated mode, the HMC makes no changes to the adapter.
If the adapter is in dedicated mode and the backup file has it in dedicated mode, the HMC
makes no changes to the adapter.
If the adapter is in shared mode, and the backup file has it in shared mode, the HMC
restores the physical port settings.
The HMC ignores all failures that are related to restoring the adapter or physical port
properties.
Two extra SR-IOV functions are available from the CLI only:
Force delete (hwdbg command): This function requires the hmcpe role and forces the
unconfiguring of logical ports on the adapter and switching the adapter to dedicated mode.
This command (Example 4-11) requires the owner partitions to be shut down. Otherwise,
it returns a list of active owner partitions.
Attention: This command removes all physical port properties for the adapter. If you
want to later switch the adapter back to shared mode, you must manually reconfigure
the physical and logical ports or restore partition data from a backup file.
Reallocate adapter: If an adapter is replaced with a new adapter having the same
capabilities, and the new adapter is plugged into the same slot as the original, the
hypervisor will automatically associate the old adapter’s configuration with the new
adapter. However, if the new adapter is plugged in to a different slot, the chhwres
command (Example 4-12) is needed to associate the original adapter configuration with
the new adapter.
Note: The system must be in standby mode to ensure partitions are powered off.
4.9 Miscellaneous notes
Consider the following information:
When a system is in Manufacturing Default Configuration (MDC) mode, all adapters are in
dedicated mode. Switching an adapter to shared mode will also switch the system out of
MDC mode.
Partitions with SR-IOV logical ports cannot be migrated, suspended, or remotely restarted.
You must use DLPAR to remove the logical ports before you perform such tasks on the
partition.
Shared mode adapters and configured SR-IOV logical ports are not included in I/O
Registry (IOR) data collection and sysplans.
Activating full system resource profiles fails if any adapter is in shared mode.
System profile validation or activation fails if SR-IOV logical port conflicts occur across
partition profiles.
The HMC currently limits the number of logical ports to 1024 per system because of save
area space constraints.
Chapter 5. Maintenance
This chapter describes how to maintain and troubleshoot your system. The SR-IOV adapter
generates system reference codes (SRCs) and error log entries that can help you determine
the cause of an error.
Maintenance can be performed concurrently or non-concurrently. Specific rules must be
followed to avoid any adapter problems after the replacement. Some command examples that
can be used to maintain the SR-IOV adapter are also shown in this section.
As soon as an adapter is transitioned to SR-IOV mode, the system firmware updates the
adapter firmware to the adapter firmware level that is built into the system firmware. This
happens independently of the level that is currently installed on the adapter; the system
firmware always installs its current available level.
Note: System firmware updates the adapter firmware when the adapter is used in SR-IOV
mode, regardless of what current level is installed on the adapter.
The update takes approximately 5 minutes, and the adapter stays in an initializing state during that time.
When new system firmware service packs are installed, they might contain new SR-IOV
adapter firmware. If the logical ports are in use, the adapter firmware update remains
deferred for the SR-IOV adapters. To ensure that all SR-IOV enabled adapters are updated,
be sure to reboot the server.
The current adapter firmware level can be obtained from the command line only for specific
operating systems. The lscfg -vl command (Example 5-1) shows the adapter firmware
on AIX.
Ethernet Adapter:
Network Address.............2E840A550200
ROM Level.(alterable).......0.0.9999.19068
Hardware Location Code......U2C4B.001.DBJD102-P2-C8-T1-S2
On Linux, the adapter firmware can be listed by the ethtool -i command (Example 5-2).
Note: Neither the HMC nor IBM i provide a method to check the adapter firmware.
Also, some errors affect the entire adapter. For example, if the adapter firmware detects an
internal error, recovery of the whole adapter might be necessary. In this case, the hypervisor
detects the error, as do all of the adapter logical ports. If the hypervisor can
successfully recover the adapter, the logical ports go through their EEH recovery steps and
recover. If the adapter cannot be recovered (for example, if the adapter is broken), then the
logical ports cannot recover either, which impacts the partitions that share the adapter
logical ports.
To debug possible problems, multiple files and data can be collected from the system, such
as these items:
PE debug information from the HMC.
Log in as the hscpe user and issue the pedbg command. Appropriate flags are provided by
the support function.
Nondisruptive platform resource dump to gather SR-IOV debug data.
Collect any LPADump that occurred around the time of the error or create a new
LPADump.
An LPADump can be initiated by the hypervisor, the operating system, the adapter itself or
a user, where the System Reference Code indicates the source:
– A2D03004: User-initiated
– A2D03010: Hidden LPAR, OS or adapter initiated
– B2D03004, B2ppF00F, B2ppF011, B2ppF012, B400F104: Hypervisor initiated
To get the MANAGEDSYSTEM name, use the lssyscfg -r sys -F name command to display
the managed system names. The location code Uxxx.001.xxxxxxx-Px-Cx can be
obtained from the HMC, AIX, or IBM i. For IBM i, use the System Service Tools (SST), as
shown in Example 5-4 on page 53.
Figure 5-1 shows the HMC task that allows an administrator to locate the relationships
between physical and logical ports. Figure 5-2 shows which logical ports on which LPARs
are assigned to specific physical ports.
Figure 5-1 The HMC option to find the relationship between logical and physical ports
Note: The Device Name will not be displayed if the partition is not running or, in the case
of AIX and Linux, if it does not have a Resource Monitoring and Control (RMC) connection.
For an AIX LPAR, use the lscfg -vl entX command, as shown in Example 5-3.
Ethernet Adapter:
Network Address.............2E840A550200
ROM Level.(alterable).......0.0.9999.19068
Hardware Location Code......U2C4B.001.DBJD102-P2-C8-T1-S2
Example 5-4 System Service Tools location of the logical SR-IOV port
Communication Hardware Resource Detail
With the restart command-line option, the dump will be disruptive. See Example 5-5 for the
complete command syntax.
When the restart option is used to perform a disruptive SR-IOV dump, the adapter is
restarted. An adapter dump will be included also.
All logical ports on the adapter will enter EEH recovery while the dump and reboot are
occurring. The logical ports will recover after the dump is complete.
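A hedged sketch of the command line (the startdump syntax shown here is an assumption;
confirm it on your HMC level, and see 5.6, “HMC commands for SR-IOV handling” on
page 66):

# Nondisruptive SR-IOV resource dump for the adapter in slot P2-C8
startdump -m MANAGEDSYSTEM -t resource -r "sriov U2C4B.001.DBJD102-P2-C8"
# Adding the restart keyword makes the dump disruptive and restarts the adapter
startdump -m MANAGEDSYSTEM -t resource -r "sriov U2C4B.001.DBJD102-P2-C8 restart"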
SR-IOV dump with ASMI
Another way to initiate the SR-IOV Platform dump is to access the Advanced System
Management Interface (ASMI) of the system and use the Resource Dump function
(Figure 5-3).
Figure 5-4 Initiating Resource Dump from HMC GUI for SR-IOV Adapter
When you select Initiate Resource Dump, you must specify the resource selector by using
the following syntax:
sriov <adapter_location_code> [restart]
The restart option is again optional. See Figure 5-5 for the HMC GUI window.
When done, the dump is off-loaded to the attached HMC and can be managed from the HMC.
This applies to all ways of gathering the dump. The file can then be downloaded to media
(when available), copied to a remote system, called home, or simply deleted. See Figure 5-6.
When the dump data has been transferred to the appropriate support team, the analysis
helps to find a cause for the observed problems with the SR-IOV Adapter.
When errors are encountered for the SR-IOV adapter, there are common areas to look for
details. Error logging is done in the OS error log, the HMC Serviceable Events, and the
service processor error logs, which can be accessed from the ASM interface. Adapter errors
have system reference codes (SRCs) in the B4xxxxxx range. These error logs contain a
location code that can be used to determine which adapter the errors are related to. If the
errors are serviceable, they may call out either the adapter or the system firmware. If other
parts of the hypervisor encounter an error with the adapter or with its configuration, they may
also log errors. The complete list of SRCs is available in the IBM Knowledge Center.
HSCLxxxx HMC codes also exist that give detailed information about possible problems with
the SR-IOV adapter.
Verifying the adapter status on the HMC
You can see the SR-IOV adapter status in the Physical I/O Properties window. When a
system is functional, the SR-IOV properties are displayed. If the system is not functional, then
you might see the Initializing, Failed, or Missing status, depending on the adapter status that
is returned by the hypervisor. For example, if the SR-IOV card stopped working and is not
powering on, then the status displays as Missing. To access the SR-IOV properties window,
select Systems Management → Servers. Then select your managed system, and select
Properties → I/O. Select the SR-IOV adapter, then select the SR-IOV tab. Figure 5-7 shows
the SR-IOV adapter properties window.
During a state change, for example, the adapter can show states like these:
Initializing (can take up to 5 minutes)
Dumping (an LPADump is generated either automatically or manually)
Powering off
If that applies, follow the normal service procedures to bring the adapter back to a normal
operational state.
For these cases, if you want to clean up the SR-IOV adapter configuration, you can try to
switch the adapter to dedicated mode by clearing the Shared Mode check box and clicking
OK. If there are configured logical ports, HMC suggests that you release them and try the
switching again.
If the switching mode operation does not work, then you might need to force a delete
operation, which must be performed on the HMC command line.
The force delete procedure requires the HMC command line and the hscpe user with the
hmcpe role. This procedure forces unconfiguring logical ports on the adapter and switching
the adapter to dedicated mode. This procedure requires the owner partitions to be shut down;
otherwise, it returns a list of active owner partitions and does not change the adapter.
hwdbg -m MANAGEDSYSTEM -r sriov -o r -a "slot_id=21010208"
If an adapter is replaced with a new adapter that has the same capabilities and the new adapter is plugged into the same slot, the hypervisor automatically associates the old adapter’s configuration with the new adapter. If the new adapter is plugged into a different slot, a reallocate adapter procedure is necessary: use the following command to re-associate the source adapter configuration with the new adapter. The system must be at standby to ensure that the partitions are powered off.
chhwres -m metsfsp1 -r sriov --rsubtype adapter -o m -a
"slot_id=21010208,target_slot_id=2101020A"
See 5.6, “HMC commands for SR-IOV handling” on page 66 for a more detailed HMC
command overview.
SR-IOV adapters can also be updated to a new firmware level concurrently. The firmware update does not affect the existing SR-IOV configurations on the server, thus minimizing system downtime.
5.4.1 Adapter concurrent and non-concurrent maintenance
Two kinds of SR-IOV capable adapters are available:
Integrated adapters: Can be added or replaced only non-concurrently.
PCIe I/O adapters: Can be added and replaced concurrently and non-concurrently.
Concurrent add
To add an SR-IOV capable adapter to the system, use the HMC GUI. To determine the
SR-IOV capable I/O slots, check the properties. Select the server, then select Properties; the
I/O tab then shows the PCIe slots (Figure 5-8).
Use Serviceability → Hardware → MES Tasks → Add FRU to start the concurrent add. See
Figure 5-9.
To identify the logical ports that use an adapter, select Hardware information → Adapters → SR-IOV End to End Mapping in the HMC GUI and then select the appropriate port. Figure 5-10 shows an example.
To replace the adapter, all the logical ports must be deconfigured. The Exchange FRU task determines the logical resources that must be deconfigured. Use the appropriate OS commands to deconfigure the logical ports before replacing the adapter.
Note: All logical ports (VFs) must be deconfigured to successfully replace the adapter.
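On an AIX partition, for example, a logical port appears as an entX Ethernet device. A minimal sketch of deconfiguring it with standard AIX commands (device names are illustrative; any child network interface must be detached first):
lsdev -c adapter
ifconfig en2 detach
rmdev -l ent2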
The adapter must be replaced with a card of the same type. The configuration is preserved
during this replacement. When the replacement procedure is done, the logical ports (VFs)
must be configured again with OS-specific commands (AIX uses cfgmgr, IBM i uses VRYCFG).
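A sketch of this reconfiguration step on each operating system (the IBM i line description name ETHLINE is illustrative):
On AIX:
cfgmgr
On IBM i:
VRYCFG CFGOBJ(ETHLINE) CFGTYPE(*LIN) STATUS(*ON)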
Concurrent removal
If a PCIe SR-IOV adapter must be removed completely, the logical ports (VFs) must first be removed from all active partitions. This can be done by taking the SR-IOV adapter out of shared mode. In the HMC GUI, select Serviceability → Hardware → MES Tasks → Remove FRU and remove the PCIe adapter. After the adapter is removed, its configuration is discarded.
Non-concurrent removal
When the adapter has to be removed completely from the system while the system is powered off, the adapter must be taken out of SR-IOV mode before the removal. If this is not done, the adapter appears with a Missing status in the HMC.
This situation can be resolved by taking the removed, but still configured, adapter out of SR-IOV mode in the HMC GUI. The orphaned configuration data is then deleted, and the missing SR-IOV adapter no longer appears.
Note: To prevent having to manually remove the configuration, take the adapter out of
SR-IOV mode before removing it from the system.
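On the HMC command line, taking the adapter out of SR-IOV mode corresponds to the dedicated-mode switch that is also shown in 5.6, “HMC commands for SR-IOV handling” (the slot ID is illustrative):
chhwres -r sriov -m sys1 --rsubtype adapter -o r -a "slot_id=21010202"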
Move and relocate an SR-IOV adapter
It is possible to move an SR-IOV capable adapter from one SR-IOV capable slot to another
SR-IOV capable slot inside the same system. The move can be done non-concurrently, and the configuration is preserved.
Diagnostics
Diagnostics on an SR-IOV adapter can be performed in either dedicated (non-SR-IOV) mode or in SR-IOV mode. In dedicated mode, the diagnostic tools are used just as for any other dedicated adapter. When the adapter is set to SR-IOV mode, a logical port must be configured for diagnostics. To display the current logical port settings, select a managed server in the Systems Management pane and select Dynamic partitioning → SR-IOV Logical ports (Figure 5-11).
This overview shows the Diagnostic column. Select the appropriate port, and then click
Action → Edit Logical Port and either select or clear the Diagnostic flag.
Note: Only one logical port per physical port can have the diagnostic permission set.
Figure 5-12 shows the Logical Port Properties, with the Diagnostic attribute selected.
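The corresponding diag_mode attribute is visible in the lshwres logical port output (see Example 5-10 on page 68). A hedged sketch, assuming that the attribute can also be set through chhwres in the same way as other logical port attributes (the partition name and logical port ID are illustrative):
chhwres -r sriov -m sys1 --rsubtype logport -o s -p lpar1 -a
"adapter_id=1,logical_port_id=27004001,diag_mode=1"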
The AIX diagnostics can be used in either normal mode or advanced mode. If the partition is running AIX, it might be necessary to put the SR-IOV adapter back into dedicated mode and then use the stand-alone AIX diagnostics to perform extended tests on the adapter. See Table 5-1 for an overview of which diagnostic tests are possible in each diagnostic mode.
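A sketch of invoking the AIX diagnostics against a specific device from the command line (the device name ent0 is illustrative; diag can also be run interactively from its menus):
diag -A -d ent0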
External loopback testing provides end-to-end testing of the adapter’s physical port. It requires a wrap plug that is connected to the physical port at the time of testing. Part numbers for the wrap plugs are provided during the diagnostic test.
IBM i
There are no facilities within IBM i for a user to perform detailed diagnostics. All advanced diagnostics and analysis must be coordinated with IBM technical support.
With IBM i V7.1 TR8, performance statistics are introduced for SR-IOV adapters. When an IBM i partition becomes eligible to collect internal performance information, it can collect and report the adapter’s physical port statistics for all traffic that flows through the port.
To enable the performance collections, select Allow performance information collection in the LPAR properties, as shown in Figure 5-13 on page 65.
The physical port metrics for Ethernet ports are stored in the QAPMETHP file; a query sketch follows the list below. The following metrics are stored in the file:
Port resource name
Frames transmitted without error
Frames received without error
CRC error
More than 16 retries
Out of window collisions
Alignment error
Carrier loss
Discarded inbound frames
Receive overruns
Memory error
Signal quality
More than 1 retry to transmit
Exactly one retry to transmit
Deferred conditions
Total MAC bytes received ok
Total MAC bytes transmitted ok
Transmit frames discarded
Unsupported protocol frames
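A minimal sketch of reading the file with SQL, assuming that the Collection Services data resides in the default performance library QPFRDATA (the library name is an assumption; the column layout is documented with the QAPMETHP file description):
SELECT * FROM QPFRDATA.QAPMETHP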
Example 5-6 shows performance data stored in QAPMETHP. The field MAC Bytes Transmitted
shows all data transferred by the physical port.
Example 5-6 Physical port performance data from the QAPMETHP file
Display Report
Report width . . . . . : 396
Position to line . . . . . Shift to column . . . . . .
Line +...33....+...34....+...35....+...36....+...37....+...38....+...39....+.
MAC Transmit Unsupported
Bytes Frames protocol
Transmitted Discarded frames
000001 25,368,484 0 0
000002 598 0 0
000003 876 0 0
000004 2,081 0 0
000005 4,942 0 0
000006 660,732,396 0 0
000007 2,087,810,081 0 0
000008 4,278 0 0
****** ******** End of report ********
Bottom
F3=Exit F12=Cancel F19=Left F20=Right F21=Split
List the converged Ethernet ports (Example 5-8 on page 67 shows the command output):
lshwres -m MANAGEDSYSTEM -r sriov --rsubtype physport --level ethc
List the physical Ethernet ports configured on the SR-IOV adapter (Example 5-9 shows
the Ethernet port command output):
lshwres -m MANAGEDSYSTEM -r sriov --rsubtype physport --level eth
List the logical Ethernet ports configured on the SR-IOV adapter (Example 5-10 on
page 68 shows the command output):
lshwres -m MANAGEDSYSTEM -r sriov --rsubtype logport --level eth
Example 5-10 Logical Ethernet ports
hscpe@slcb27a:~>lshwres -m Server1 -r sriov --rsubtype logport --level eth
config_id=0,lpar_name=VIOS2,lpar_id=4,lpar_state=Running,is_required=1,adapter_id=1,logi
cal_port_id=27004002,logical_port_type=eth,drc_name=PHB
4098,location_code=U2C4B.001.DBJD102-P2-C8-T1-S2,functional_state=1,phys_port_id=0,debug
_mode=0,diag_mode=0,huge_dma_window_mode=0,capacity=2.0,promisc_mode=0,mac_addr=2e840a55
0200,curr_mac_addr=5cf3fccf0a20,allowed_os_mac_addrs=all,allowed_vlan_ids=all,port_vlan_
id=0
config_id=1,lpar_name=VIOS2,lpar_id=4,lpar_state=Running,is_required=1,adapter_id=1,logi
cal_port_id=27004003,logical_port_type=eth,drc_name=PHB
4099,location_code=U2C4B.001.DBJD102-P2-C8-T1-S3,functional_state=1,phys_port_id=0,debug
_mode=0,diag_mode=0,huge_dma_window_mode=0,capacity=2.0,promisc_mode=0,mac_addr=2e840d60
e701,curr_mac_addr=5cf3fccf0a20,allowed_os_mac_addrs=all,allowed_vlan_ids=all,port_vlan_
id=0
...
List all unconfigured logical ports on the SR-IOV adapter (Example 5-11 shows the
command output):
lshwres -m MANAGEDSYSTEM -r sriov --rsubtype logport
See Table 5-2 for valid chhwres attribute names for changing an SR-IOV physical port.
Note: When an attribute for an SR-IOV physical port is changed, a short network
interruption might occur for all partitions that share the physical port.
Table 5-2   Valid chhwres attributes for an SR-IOV physical port
Attribute               Value
adapter_id              Required
phys_port_id            Required
phys_port_label         1 - 16 characters; specify none to clear the physical port label
phys_port_sub_label     1 - 8 characters; specify none to clear the physical port sublabel
recv_flow_control       0: disable, 1: enable
trans_flow_control      0: disable, 1: enable
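For example, the following sketch sets a physical port label by using the attributes from Table 5-2 (the label value prodlan is illustrative):
chhwres -r sriov -m sys1 --rsubtype physport -o s -a
"adapter_id=1,phys_port_id=0,phys_port_label=prodlan"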
Command examples for changing values
Examples of the chhwres command are shown in these tasks:
Switch an SR-IOV adapter to shared mode:
chhwres -r sriov -m sys1 --rsubtype adapter -o a -a "slot_id=21010202"
Switch an SR-IOV adapter to dedicated mode:
chhwres -r sriov -m sys1 --rsubtype adapter -o r -a "slot_id=21010202"
Set the connection speed for SR-IOV physical port 0 to 1 Gbps:
chhwres -r sriov -m sys1 --rsubtype physport -o s -a
"adapter_id=1,phys_port_id=0,conn_speed=1000"
Add an SR-IOV Ethernet logical port (using defaults) to partition lpar1:
chhwres -r sriov -m sys1 --rsubtype logport -o a -p lpar1 -a
"adapter_id=1,phys_port_id=1,logical_port_type=eth"
Remove an SR-IOV Ethernet logical port from partition lpar1:
chhwres -r sriov -m sys1 --rsubtype logport -o r -p lpar1 -a
"adapter_id=1,logical_port_id=27004001"
Change the port VLAN ID for an SR-IOV Ethernet logical port in partition lpar1:
chhwres -r sriov -m sys1 --rsubtype logport -o s -p lpar1 -a
"adapter_id=1,logical_port_id=27004001,port_vlan_id=2"
Reset the statistics for an SR-IOV physical port:
chhwres -r sriov -m sys1 --rsubtype physport -o rs -a
"adapter_id=1,phys_port_id=0"
For more details about the command syntax, values, and attributes, see the man pages.
Back cover

IBM Power Systems SR-IOV: Technical Overview and Introduction

This IBM Redpaper publication describes the adapter-based virtualization capabilities that are being deployed in high-end IBM POWER7+ processor-based servers.

Peripheral Component Interconnect Express (PCIe) single root I/O virtualization (SR-IOV) is a virtualization technology on IBM Power Systems servers. SR-IOV allows multiple logical partitions (LPARs) to share a PCIe adapter with little or no run time involvement of a hypervisor or other virtualization intermediary.

SR-IOV does not replace the existing virtualization capabilities that are offered as part of the IBM PowerVM offerings. Rather, SR-IOV complements them with additional capabilities.

This paper describes many aspects of the SR-IOV technology:
A comparison of SR-IOV with standard virtualization technology
Architectural overview of SR-IOV
Overall benefits of SR-IOV
Planning requirements
SR-IOV deployment models that use standard I/O virtualization
Configuring the adapter for dedicated or shared modes
Tips for maintaining and troubleshooting your system
Scenarios for configuring your system

This paper is directed to clients, IBM Business Partners, and system administrators who are involved with planning, deploying, configuring, and maintaining key virtualization technologies.

See how SR-IOV minimizes contention with CPU and memory resources. Explore powerful adapter-based virtualization for logical partitions. Learn about the industry-standard PCI specification.

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers, and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

REDP-5065-00