Oracle® VM Server for SPARC 3.3 Administration Guide
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual
property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software,
unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following
notice is applicable:
U.S. GOVERNMENT END USERS. Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or
documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and
agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system,
integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the
programs. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently
dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall
be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any
liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered
trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro
Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation
and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due
to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/
lookup?ctx=acc&id=docacc.
Access to Oracle Support
Oracle customers that have purchased support have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/
topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Contents
Preface ...................................................................................................................................................21
How to Run Logical Domains Manager in FIPS 140-2 Mode ............................................... 287
How to Revert the Logical Domains Manager to the Default Mode From FIPS 140-2
Mode ............................................................................................................................................ 288
Domain Migration Restrictions ....................................................................................................... 289
Version Restrictions for Migration .......................................................................................... 289
CPU Restrictions for Migration ............................................................................................... 290
Migration Restrictions for Setting perf-counters ............................................................... 290
Migrating a Domain .......................................................................................................................... 291
Performing a Dry Run ............................................................................................................... 291
Performing Non-Interactive Migrations ................................................................................. 292
Migrating an Active Domain ........................................................................................................... 292
Domain Migration Requirements for CPUs .......................................................................... 293
Migration Requirements for Memory ..................................................................................... 294
Migration Requirements for Physical I/O Devices ................................................................ 295
Migration Requirements for Virtual I/O Devices .................................................................. 295
Migration Requirements for PCIe Endpoint Devices ........................................................... 296
Migration Requirements for PCIe SR-IOV Virtual Functions ............................................. 296
Migration Requirements for NIU Hybrid I/O ........................................................................ 297
Migration Requirements for Cryptographic Units ................................................................ 297
Delayed Reconfiguration in an Active Domain ..................................................................... 297
Migrating While an Active Domain Has the Power Management Elastic Policy in Effect 297
Operations on Other Domains ................................................................................................. 298
Migrating a Domain From the OpenBoot PROM or a Domain That Is Running in the
Kernel Debugger ........................................................................................................................ 298
Migrating Bound or Inactive Domains ........................................................................................... 298
Migration Requirements for Virtual I/O Devices .................................................................. 299
Migration Requirements for PCIe Endpoint Devices ........................................................... 299
Migration Requirements for PCIe SR-IOV Virtual Functions ............................................. 299
Monitoring a Migration in Progress ............................................................................................... 300
Canceling a Migration in Progress .................................................................................................. 300
Recovering From a Failed Migration .............................................................................................. 301
Migration Examples .......................................................................................................................... 301
21 Using the Oracle VM Server for SPARC Management Information Base Software .................423
Oracle VM Server for SPARC Management Information Base Overview ................................. 423
Related Products and Features ................................................................................................. 424
Software Components ............................................................................................................... 424
System Management Agent ...................................................................................................... 425
Logical Domains Manager and the Oracle VM Server for SPARC MIB ............................. 426
Oracle VM Server for SPARC MIB Object Tree ..................................................................... 426
Installing and Configuring the Oracle VM Server for SPARC MIB Software ............................ 427
Installing and Configuring the Oracle VM Server for SPARC MIB Software ..................... 428
Managing Security ............................................................................................................................. 430
How to Create the Initial snmpv3 User ..................................................................................... 430
Monitoring Domains ........................................................................................................................ 431
Setting Environment Variables ................................................................................................ 431
Querying the Oracle VM Server for SPARC MIB .................................................................. 431
23 Using the XML Interface With the Logical Domains Manager ....................................................469
XML Transport .................................................................................................................................. 469
XMPP Server .............................................................................................................................. 470
Local Connections ..................................................................................................................... 470
XML Protocol .................................................................................................................................... 470
Request and Response Messages .............................................................................................. 471
Event Messages .................................................................................................................................. 475
Registration and Unregistration .............................................................................................. 475
<LDM_event> Messages .............................................................................................................. 476
Event Types ................................................................................................................................. 476
Logical Domains Manager Actions ................................................................................................. 479
Logical Domains Manager Resources and Properties .................................................................. 481
Domain Information (ldom_info) Resource ......................................................................... 481
CPU (cpu) Resource .................................................................................................................. 483
MAU (mau) Resource ................................................................................................................. 484
Memory (memory) Resource ..................................................................................................... 485
Virtual Disk Server (vds) Resource ......................................................................................... 485
Virtual Disk Server Volume (vds_volume) Resource ............................................................ 486
Disk (disk) Resource ................................................................................................................. 487
Virtual Switch (vsw) Resource .................................................................................................. 487
Network (network) Resource ................................................................................................... 488
Preface
Overview: Provides detailed information and procedures that describe the overview, security considerations, installation, configuration, modification, and execution of common tasks for the Oracle VM Server for SPARC 3.3 software on supported servers, blades, and server modules. See Supported Platforms in Oracle VM Server for SPARC 3.3 Installation Guide.
Note: The features that are described in this book can be used with all of the supported
system software and hardware platforms that are listed in Oracle VM Server for SPARC 3.3
Installation Guide. However, some features are only available on a subset of the supported
system software and hardware platforms. For information about these exceptions, see
What's New in This Release in Oracle VM Server for SPARC 3.3 Release Notes and What's
New in Oracle VM Server for SPARC Software (http://www.oracle.com/
technetwork/server-storage/vm/documentation/sparc-whatsnew-330281.html).
Feedback
Provide feedback about this documentation at http://www.oracle.com/goto/docfeedback.
P A R T I
Oracle VM Server for SPARC 3.3 Software
C H A P T E R 1
Overview of the Oracle VM Server for SPARC Software
This chapter provides an overview of the Oracle VM Server for SPARC software.
The version of the Oracle Solaris OS that runs on a guest domain is independent of the Oracle
Solaris OS version that runs on the primary domain. So, if you run the Oracle Solaris 11 OS in
the primary domain, you can still run the Oracle Solaris 10 OS in a guest domain.
Note: Some platforms no longer support the Oracle Solaris 10 OS in the primary domain. Check your platform documentation. You can continue to run the Oracle Solaris 10 OS in the other domains.
The only difference between running the Oracle Solaris 10 OS and the Oracle Solaris 11 OS on the primary domain is the feature set that each OS provides.
Hypervisor and Logical Domains
The SPARC hypervisor is a small firmware layer that provides a stable virtualized machine
architecture to which an operating system can be written. SPARC servers that use the
hypervisor provide hardware features to support the hypervisor's control over a logical
operating system's activities.
Each logical domain is only permitted to observe and interact with those server resources that
are made available to it by the hypervisor. The Logical Domains Manager enables you to specify
what the hypervisor should do through the control domain. Thus, the hypervisor enforces the
partitioning of the server's resources and provides limited subsets to multiple operating system
environments. This partitioning and provisioning is the fundamental mechanism for creating
logical domains. The following diagram shows the hypervisor supporting two logical domains.
It also shows the following layers that make up the Oracle VM Server for SPARC functionality:
- User/services (applications)
- Kernel (operating systems)
- Firmware (hypervisor)
- Hardware, including CPU, memory, and I/O
The number and capabilities of each logical domain that a specific SPARC hypervisor supports
are server-dependent features. The hypervisor can allocate subsets of the overall CPU, memory,
and I/O resources of a server to a given logical domain. This capability enables support of
multiple operating systems simultaneously, each within its own logical domain. Resources can
be rearranged between separate logical domains with an arbitrary granularity. For example,
CPUs are assignable to a logical domain with the granularity of a CPU thread.
Each logical domain can be managed as an entirely independent machine with its own
resources, such as:
- Kernel, patches, and tuning parameters
- User accounts and administrators
- Disks
- Network interfaces, MAC addresses, and IP addresses
Each logical domain can be stopped, started, and rebooted independently of the others without requiring you to perform a power cycle of the server.
The hypervisor software is responsible for maintaining the separation between logical domains.
The hypervisor software also provides logical domain channels (LDCs) that enable logical
domains to communicate with each other. LDCs enable domains to provide services to each
other, such as networking or disk services.
The service processor (SP), also known as the system controller (SC), monitors and runs the
physical machine, but it does not manage the logical domains. The Logical Domains Manager
manages the logical domains.
In addition to using the ldm command to manage the Oracle VM Server for SPARC software,
you can now use Oracle VM Manager.
Oracle VM Manager is a web-based user interface that you can use to manage the Oracle VM
environment. Earlier versions of this user interface only managed the Oracle VM Server x86
software, but starting with Oracle VM Manager 3.2 and Oracle VM Server for SPARC 3.0, you
can also manage the Oracle VM Server for SPARC software. For more information about Oracle
VM Manager, see Oracle VM Documentation (http://www.oracle.com/technetwork/
documentation/vm-096300.html).
A PCIe SR-IOV virtual function. See Chapter 7, Creating an I/O Domain by Using PCIe
SR-IOV Virtual Functions.
An I/O domain can share physical I/O devices with other domains in the form of virtual
devices when the I/O domain is also used as a service domain.
Root domain. A root domain has a PCIe root complex assigned to it. This domain owns the
PCIe fabric and provides all fabric-related services, such as fabric error handling. A root
domain is also an I/O domain, as it owns and has direct access to physical I/O devices.
The number of root domains that you can have depends on your platform architecture. For
example, if you are using an eight-socket Oracle SPARC T5-8 server, you can have up to 16
root domains.
The default root domain is the primary domain. You can use non-primary domains to act
as root domains.
Guest domain. A guest domain is a non-I/O domain that consumes virtual device services
that are provided by one or more service domains. A guest domain does not have any
physical I/O devices but only has virtual I/O devices, such as virtual disks and virtual
network interfaces.
You can install the Logical Domains Manager on an existing system that is not already
configured with Oracle VM Server for SPARC. In this case, the current instance of the OS
becomes the control domain. Also, the system is configured with only one domain, the control
domain. After configuring the control domain, you can balance the load of applications across
other domains to make the most efficient use of the entire system by adding domains and
moving those applications from the control domain to the new domains.
Command-Line Interface
The Logical Domains Manager uses a command-line interface (CLI) to create and configure
logical domains. The CLI is a single command, ldm, that has multiple subcommands. See the
ldm(1M) man page.
The Logical Domains Manager daemon, ldmd, must be running to use the Logical Domains
Manager CLI.
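For example, the following commands verify that the ldmd daemon is online and then list all domains. This is a minimal illustration; the output, which is omitted here, varies by system:
primary# svcs ldmd
primary# ldm list-domain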
Virtual Input/Output
In an Oracle VM Server for SPARC environment, you can provision up to 128 domains on a
system (up to 256 on a Fujitsu M10 server). Some servers, particularly single-processor and
some dual-processor systems, have a limited number of I/O buses and physical I/O slots. As a
result, you might be unable to provide exclusive access to a physical disk and network devices to
all domains on these systems. You can assign a PCIe bus or endpoint device to a domain to
provide it with access to a physical device. Note that this solution is insufficient to provide all
domains with exclusive device access. This limitation on the number of physical I/O devices
that can be directly accessed is addressed by implementing a virtualized I/O model. See
Chapter 5, Configuring I/O Domains.
Any logical domains that have no physical I/O access are configured with virtual I/O devices
that communicate with a service domain. The service domain runs a virtual device service to
provide access to a physical device or to its functions. In this client-server model, virtual I/O
devices either communicate with each other or with a service counterpart through interdomain
communication channels called logical domain channels (LDCs). The virtualized I/O
functionality includes support for virtual networking, storage, and consoles.
Virtual Network
Oracle VM Server for SPARC uses the virtual network device and virtual network switch device
to implement virtual networking. The virtual network (vnet) device emulates an Ethernet
device and communicates with other vnet devices in the system by using a point-to-point
channel. The virtual switch (vsw) device primarily functions as a multiplexor of all the virtual
network's incoming and outgoing packets. The vsw device interfaces directly with a physical
network adapter on a service domain, and sends and receives packets on behalf of a virtual
network. The vsw device also functions as a simple layer-2 switch and switches packets between
the vnet devices connected to it within the system.
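For example, the following sketch creates a virtual switch that is backed by the physical adapter net0 and then adds a virtual network device to a guest domain. The names primary-vsw0, vnet1, and ldg1 are illustrative; these commands are covered in detail in later chapters:
primary# ldm add-vsw net-dev=net0 primary-vsw0 primary
primary# ldm add-vnet vnet1 primary-vsw0 ldg1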
Virtual Storage
The virtual storage infrastructure uses a client-server model to enable logical domains to access
block-level storage that is not directly assigned to them. The model uses the following
components:
- Virtual disk client (vdc), which exports a block device interface
- Virtual disk service (vds), which processes disk requests on behalf of the virtual disk client and submits them to the back-end storage that resides on the service domain
Although the virtual disks appear as regular disks on the client domain, most disk operations
are forwarded to the virtual disk service and processed on the service domain.
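For example, the following sketch exports a back-end device from the service domain and presents it to a guest domain as a virtual disk. The device path and the names vol1, primary-vds0, vdisk1, and ldg1 are illustrative:
primary# ldm add-vdsdev /dev/dsk/c2t1d0s2 vol1@primary-vds0
primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1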
Virtual Console
In an Oracle VM Server for SPARC environment, console I/O from the primary domain is
directed to the service processor. The console I/O from all other domains is redirected to the
service domain that is running the virtual console concentrator (vcc). The domain that runs the
vcc is typically the primary domain. The virtual console concentrator service functions as a
concentrator for console traffic for all domains, and interfaces with the virtual network terminal
server daemon (vntsd) to provide access to each console through a UNIX socket.
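For example, the following sketch creates a virtual console concentrator service on the primary domain and then connects to the console of a domain that is bound to port 5000. The service name and port range are illustrative:
primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
primary# telnet localhost 5000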
Resource Configuration
A system that runs the Oracle VM Server for SPARC software can configure resources such as
virtual CPUs, virtual I/O devices, cryptographic units, and memory. Some resources can be
configured dynamically on a running domain, while others must be configured on a stopped
domain. If a resource cannot be dynamically configured on the control domain, you must first
initiate a delayed reconfiguration. The delayed reconfiguration postpones the configuration
activities until after the control domain has been rebooted. For more information, see
Resource Reconfiguration on page 305.
Persistent Configurations
You can use the ldm command to store the current configuration of a logical domain on the
service processor. You can add a configuration, specify a configuration to be used, remove a
configuration, and list the configurations. For details, see the ldm(1M) man page. You can also
specify a configuration to boot from the SP, as described in Using Oracle VM Server for SPARC
With the Service Processor on page 366.
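For example, the following sketch saves the current configuration under an illustrative name, lists the configurations that are stored on the SP, and then selects that configuration to be used on the next power cycle:
primary# ldm add-spconfig initial
primary# ldm list-spconfig
primary# ldm set-spconfig initial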
Note: You cannot use the P2V tool to convert an Oracle Solaris 11 physical system to a virtual
system.
For information about the tool and about installing it, see Chapter 19, Oracle VM Server for
SPARC Physical-to-Virtual Conversion Tool. For information about the ldmp2v command, see
the ldmp2v(1M) man page.
This chapter describes some security features that you can enable on your Oracle VM Server for
SPARC system.
Note: The examples in this book are shown as being performed by superuser. However, you can
use profiles instead to have users acquire more fine-grained permissions to perform
management tasks.
These rights profiles can be assigned directly to users or to a role that is then assigned to users.
When one of these profiles is assigned directly to a user, you must use the pfexec command or a
profile shell, such as pfbash or pfksh, to successfully use the ldm command to manage your
domains. Determine whether to use roles or rights profiles based on your rights configuration.
See System Administration Guide: Security Services or Securing Users and Processes in Oracle
Solaris 11.3 .
Delegating the Management of Logical Domains by Using Rights
Users, authorizations, rights profiles, and roles can be configured in the following ways:
- Locally on the system by using files
- Centrally in a naming service, such as LDAP
Installing the Logical Domains Manager adds the necessary rights profiles to the local files. To
configure profiles and roles in a naming service, see System Administration Guide: Naming and
Directory Services (DNS, NIS, and LDAP) . For an overview of the authorizations and execution
attributes delivered by the Logical Domains Manager package, see Logical Domains Manager
Profile Contents on page 36. All of the examples in this chapter assume that the rights
configuration uses local files.
Caution: Be careful when using the usermod and rolemod commands to add authorizations, rights profiles, or roles.
- For the Oracle Solaris 11 OS, add values by using the plus sign (+) for each authorization you add. For example, the usermod -A +auth username command grants the auth authorization to the username user; similarly for the rolemod command.
- For the Oracle Solaris 10 OS, the usermod or rolemod command replaces any existing values. To add values instead of replacing them, specify a comma-separated list of existing values and the new values.
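For example, the following illustrative commands grant the solaris.ldoms.read authorization to a hypothetical user, sam. On the Oracle Solaris 11 OS:
primary# usermod -A +solaris.ldoms.read sam
On the Oracle Solaris 10 OS, where existing-auths stands for the comma-separated list of authorizations that the user already has:
primary# usermod -A existing-auths,solaris.ldoms.read sam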
1 Become an administrator.
For Oracle Solaris 11.3, see Chapter 1, About Using Rights to Control Users and Processes, in
Securing Users and Processes in Oracle Solaris 11.3 .
The advantage of using this procedure is that only a user who has been assigned a specific role
can assume that role. When assuming a role, a password is required if the role has been assigned
a password. These two layers of security prevent a user who has not been assigned a role from
assuming that role even though he has the password.
1 Become an administrator.
For Oracle Solaris 11.3, see Chapter 1, About Using Rights to Control Users and Processes, in
Securing Users and Processes in Oracle Solaris 11.3 .
2 Create a role.
# roleadd -P "profile-name" role-name
The Logical Domains Manager package also adds the following execution attribute that is
associated with the LDoms Management profile and the LDoms Power Mgmt Observability
profile to the local execution profiles database:
LDoms Management:suser:cmd:::/usr/sbin/ldm:privs=file_dac_read,file_dac_search
LDoms Power Mgmt Observability:suser:cmd:::/usr/sbin/ldmpower:privs=file_dac_search
The following table lists the ldm subcommands with the corresponding user authorization that
is needed to perform the commands.
ldm Subcommand     User Authorization
add-* [1]          solaris.ldoms.write
bind-domain        solaris.ldoms.write
list               solaris.ldoms.read
list-* [1]         solaris.ldoms.read
panic-domain       solaris.ldoms.write
remove-* [1]       solaris.ldoms.write
set-* [1]          solaris.ldoms.write
start-domain       solaris.ldoms.write
stop-domain        solaris.ldoms.write
unbind-domain      solaris.ldoms.write

[1] Refers to all the resources you can add, list, remove, or set.
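Before attempting a subcommand, a user can check whether the required authorization is in effect by running the auths command. A hypothetical session:
$ auths | grep ldoms
$ ldm list-domain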
Caution: Do not configure the vntsd service to use a host other than localhost.
If you specify a host other than localhost, you are no longer restricted from connecting to
guest domain consoles from the control domain. If you use the telnet command to remotely
connect to a guest domain, the login credentials are passed as clear text over the network.
By default, an authorization to access all guest consoles is present in the local authorization
description database.
Use the usermod command to assign the required authorizations to users or roles in local files.
This command permits only the user or role who has the required authorizations to access a
given domain console or console group. To assign authorizations to users or roles in a naming
service, see System Administration Guide: Naming and Directory Services (DNS, NIS, and
LDAP) .
You can control the access to all domain consoles or to a single domain console.
- To control the access to all domain consoles, see How to Control Access to All Domain Consoles by Using Roles on page 38 and How to Control Access to All Domain Consoles by Using Rights Profiles on page 40.
- To control access to a single domain console, see How to Control Access to a Single Console by Using Roles on page 41 and How to Control Access to a Single Console by Using Rights Profiles on page 42.
2 Create a role that has the solaris.vntsd.consoles authorization, which permits access to all
domain consoles.
primary# roleadd -A solaris.vntsd.consoles role-name
primary# passwd role-name
The following example shows how to create the all_cons role with the
solaris.vntsd.consoles authorization, which permits access to all domain consoles.
User sam assumes the all_cons role and can access any console. For example:
$ id
uid=700299(sam) gid=1(other)
$ su all_cons
Password:
$ telnet localhost 5000
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
This example shows what happens when an unauthorized user, dana, attempts to access a
domain console:
$ id
uid=702048(dana) gid=1(other)
$ telnet localhost 5000
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
Connection to 0 closed by foreign host.
The following commands show how to verify that the user is sam and that the All, Basic
Solaris User, and LDoms Consoles rights profiles are in effect. The telnet command shows
how to access the ldg1 domain console.
$ id
uid=702048(sam) gid=1(other)
$ profiles
All
Basic Solaris User
LDoms Consoles
$ telnet localhost 5000
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.
3 Create a role with the new authorization to permit access only to the console of the domain.
primary# roleadd -A solaris.vntsd.console-domain role-name
primary# passwd role-name
New Password:
Re-enter new Password:
passwd: password successfully changed for role-name
First, add an authorization for a single domain, ldg1, to the authorization description database.
Then, create a role with the new authorization to permit access only to the console of the
domain.
Assign the ldg1cons role to user terry, assume the ldg1cons role, and access the domain
console.
The following example shows that the user terry cannot access the ldg2 domain console:
Note: In Oracle VM Server for SPARC, audit records are not generated for Logical Domains
Manager actions by default.
You can enable and disable the Oracle Solaris OS auditing feature by using the audit command. See the audit(1M) man page and Managing Auditing in Oracle Solaris 11.3.
Note: Pre-existing processes are not audited for the virtualization software (vs) class. Ensure
that you perform this step before regular users log in to the system.
a. Add the following entry to the audit_event file if not already present:
40700:AUE_ldoms:ldoms administration:vs
b. Add the following entry to the audit_class file if not already present:
0x10000000:vs:virtualization_software
c. (Optional) Log out of the system if you want to audit your processes, either as the
administrator or as the configurer.
If you do not want to log out, see How to Update the Preselection Mask of Logged In Users
in Managing Auditing in Oracle Solaris 11.3 .
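For example, assuming that the vs class should be preselected along with the lo (login/logout) class, the following Oracle Solaris 11 sketch sets the audit flags and refreshes the audit service:
primary# auditconfig -setflags lo,vs
primary# audit -s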
Service domains support console logging for logical domains. While the service domain must
run the Oracle Solaris 11 OS, the guest domain being logged can run either the Oracle Solaris 10
OS or the Oracle Solaris 11 OS.
The domain console log is saved on the service domain that provides the vcc service, in a file named /var/log/vntsd/domain/console-log. You can rotate console log files by using the logadm command. See the logadm(1M) and logadm.conf(4) man pages.
The Oracle VM Server for SPARC software enables you to selectively enable and disable console
logging for each logical domain. Console logging is enabled by default.
Note: Even if you enable console logging for a domain, the domain's virtual console is not logged if the required support is not available on the service domain.
This chapter describes the procedures necessary to set up default services and your control
domain.
This chapter covers the following topics:
- Output Messages on page 47
- Creating Default Services on page 48
- Initial Configuration of the Control Domain on page 49
- Rebooting to Use Domains on page 51
- Enabling Networking Between the Oracle Solaris 10 Service Domain and Other Domains on page 52
- Enabling the Virtual Network Terminal Server Daemon on page 53
- Verifying That the ILOM Interconnect Is Enabled on page 54
Output Messages
If a resource cannot be dynamically configured on the control domain, the recommended
practice is to first initiate a delayed reconfiguration. The delayed reconfiguration postpones the
configuration activities until after the control domain has been rebooted.
You receive the following message when you initiate a delayed reconfiguration on the primary
domain:
Creating Default Services
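1 Create a virtual console concentrator (vcc) service to enable console access for the logical domains.
For example, the following command adds a virtual console concentrator service (primary-vcc0) with the port range 5000-5100, which matches the verification output in Step 4, to the control domain (primary):
primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary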
2 Create a virtual disk server (vds) to allow importing virtual disks into a logical domain.
For example, the following command adds a virtual disk server (primary-vds0) to the control
domain (primary).
primary# ldm add-vds primary-vds0 primary
3 Create a virtual switch service (vsw) to enable networking between virtual network (vnet)
devices in logical domains.
Assign a GLDv3-compliant network adapter to the virtual switch if each logical domain must
communicate outside the box through the virtual switch.
Add a virtual switch service (primary-vsw0) on a network device that you want to use for guest
domain networking.
primary# ldm add-vsw net-dev=network-device vsw-service primary
For example, the following command adds a virtual switch service (primary-vsw0) on network
device net0 to the control domain (primary):
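primary# ldm add-vsw net-dev=net0 primary-vsw0 primary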
4 Verify that the services have been created by using the list-services subcommand.
Your output should look similar to the following:
primary# ldm list-services primary
VDS
NAME VOLUME OPTIONS DEVICE
primary-vds0
VCC
NAME PORT-RANGE
primary-vcc0 5000-5100
VSW
NAME MAC NET-DEV DEVICE MODE
primary-vsw0 02:04:4f:fb:9f:0d net0 switch@0 prog,promisc
For domain sizing recommendations, see Oracle VM Server for SPARC Best Practices
(http://www.oracle.com/
technetwork/server-storage/vm/ovmsparc-best-practices-2334546.pdf).
7 Reboot the control domain to make the reconfiguration changes take effect.
When in the factory-default configuration, the control domain owns all of the host system's
memory. The memory DR feature is not well suited for this purpose because an active domain is
not guaranteed to add or, more typically, give up, all of the requested memory. Rather, the OS
running in that domain makes a best effort to fulfill the request. In addition, memory removal
can be a long-running operation. These issues are amplified when large memory operations are
involved, as is the case for the initial decrease of the control domain's memory.
Note: When the Oracle Solaris OS is installed on a ZFS file system, it automatically sizes and
creates swap and dump areas as ZFS volumes in the ZFS root pool based on the amount of
physical memory that is present. If you change the domain's memory allocation, it might alter
the recommended size of these volumes. The allocations might be larger than needed after
reducing control domain memory. For swap space recommendations, see Planning for Swap
Space in Managing File Systems in Oracle Solaris 11.3. Before you free disk space, you can
optionally change the swap and dump space. See Managing ZFS Swap and Dump Devices in
Managing ZFS File Systems in Oracle Solaris 11.3.
How to Decrease the CPU and Memory Resources From the Control
Domain's Initial factory-default Configuration
This procedure shows how to decrease the CPU and memory resources from the control
domain's initial factory-default configuration. You first use CPU DR to decrease the number
of cores and then initiate a delayed reconfiguration before you decrease the amount of memory.
The example values are for CPU and memory sizes for a small control domain that has enough
resources to run the ldmd daemon and to perform migrations. However, if you want to use the
control domain for additional purposes, you can assign a larger number of cores and more
memory to the control domain as needed.
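For example, the following minimal sketch reduces the control domain to two cores and 16 Gbytes of memory; the values are illustrative:
primary# ldm set-core 2 primary
primary# ldm start-reconf primary
primary# ldm set-memory 16G primary
primary# shutdown -y -g0 -i6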
How to Reboot
Shut down and reboot the control domain.
primary# shutdown -y -g0 -i6
Note: Either a reboot or power cycle instantiates the new configuration. Only a power cycle
actually boots the configuration saved to the service processor (SP), which is then reflected in
the list-config output.
Guest domains can automatically communicate with the Oracle Solaris 10 service domain as
long as the corresponding network back-end device is configured in the same virtual LAN or
virtual network.
If necessary, you can configure the virtual switch as well as the physical network device. In this
case, create the virtual switch as in Step 2, and do not delete the physical device (skip Step 3).
You must then configure the virtual switch with either a static IP address or a dynamic IP
address. You can obtain a dynamic IP address from a DHCP server. For additional information
and an example of this case, see Configuring a Virtual Switch and the Service Domain for NAT
and Routing on page 244.
3 Remove the physical interface for the device that is assigned to the virtual switch (net-dev).
# ifconfig nxge0 down unplumb
4 To migrate properties of the physical network device (nxge0) to the virtual switch device (vsw0),
do one of the following:
- If networking is configured by using a static IP address, reuse the IP address and netmask of nxge0 for the virtual switch.
# ifconfig vsw0 IP-of-nxge0 netmask netmask-of-nxge0 broadcast + up
- If networking is configured by using DHCP, enable DHCP for the virtual switch.
# ifconfig vsw0 dhcp start
5 Make the required configuration file modifications to make this change permanent.
# mv /etc/hostname.nxge0 /etc/hostname.vsw0
# mv /etc/dhcp.nxge0 /etc/dhcp.vsw0
Note: Be sure that you have created the default service vconscon (vcc) on the control domain
before you enable vntsd. See Creating Default Services on page 48 for more information.
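To enable the daemon, use the svcadm command and verify the result with svcs. For example:
primary# svcadm enable vntsd
primary# svcs vntsd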
Note: Avoid disabling the ILOM interconnect on other SPARC T-Series systems, SPARC M5,
and SPARC M6 systems. However, if you do so, the ldmd daemon can still communicate with
the SP.
If an attempt to use the ldm command to manage domain configurations on SPARC T7 series
servers and SPARC M7 series servers fails because of an error communicating with the SP, check
the ILOM interconnect state and re-enable the ilomconfig-interconnect service if necessary.
See How to Verify the ILOM Interconnect Configuration on page 54 and How to Re-Enable
the ILOM Interconnect Service on page 55.
3 Verify that the ldmd daemon can communicate with the SP.
primary# ldm list-spconfig
Note: A guest domain that has been assigned more than 1024 CPUs cannot run the Oracle
Solaris 10 OS. In addition, you cannot use CPU DR to reduce the number of CPUs below 1024
to run the Oracle Solaris 10 OS.
To work around this problem, unbind the guest domain, remove CPUs until you have no more
than 1024 CPUs, and then rebind the guest domain. You can then run the Oracle Solaris 10 OS
on this guest domain.
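A sketch of this workaround for an illustrative guest domain, ldg1, that is currently assigned more than 1024 virtual CPUs:
primary# ldm stop-domain ldg1
primary# ldm unbind-domain ldg1
primary# ldm set-vcpu 1024 ldg1
primary# ldm bind-domain ldg1
primary# ldm start-domain ldg1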
Creating and Starting a Guest Domain
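The following command adds a virtual network device to the illustrative guest domain ldg1; its parameters are described next:
primary# ldm add-vnet vnet1 primary-vsw0 ldg1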
Where:
- vnet1 is a unique interface name to the logical domain, assigned to this virtual network device instance for reference on subsequent set-vnet or remove-vnet subcommands.
- primary-vsw0 is the name of an existing network service (virtual switch) to which to connect.
Note: Steps 5 and 6 are simplified instructions for adding a virtual disk server device (vdsdev)
to the primary domain and a virtual disk (vdisk) to the guest domain. To learn how ZFS
volumes and file systems can be used as virtual disks, see How to Export a ZFS Volume as a
Single-Slice Disk on page 181 and Using ZFS With Virtual Disks on page 192.
5 Specify the device to be exported by the virtual disk server as a virtual disk to the guest domain.
You can export a physical disk, a disk slice, a volume, or a file as a block device. The following examples show a physical disk and a file.
Physical Disk Example. This example adds a physical disk with these specifics:
primary# ldm add-vdsdev /dev/dsk/c2t1d0s2 vol1@primary-vds0
Where:
- /dev/dsk/c2t1d0s2 is the path name of the actual physical device. When adding a device, the path name must be paired with the device name.
- vol1 is a unique name you must specify for the device being added to the virtual disk server. The volume name must be unique to this virtual disk server instance because this name is exported by this virtual disk server to the clients for adding. When adding a device, the volume name must be paired with the path name of the actual device.
- primary-vds0 is the name of the virtual disk server to which to add this device.
File Example. This example exports a file as a block device.
primary# ldm add-vdsdev backend vol1@primary-vds0
Where:
- backend is the path name of the actual file exported as a block device. When adding a device, the back end must be paired with the device name.
- vol1 is a unique name you must specify for the device being added to the virtual disk server. The volume name must be unique to this virtual disk server instance because this name is exported by this virtual disk server to the clients for adding. When adding a device, the volume name must be paired with the path name of the actual device.
- primary-vds0 is the name of the virtual disk server to which to add this device.
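6 Add a virtual disk to the guest domain.
For example, the following command adds the exported volume to the guest domain ldg1 as a virtual disk:
primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1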
Where:
- vdisk1 is the name of the virtual disk.
- vol1 is the name of the existing volume to which to connect.
- primary-vds0 is the name of the existing virtual disk server to which to connect.
Note: The virtual disks are generic block devices that are associated with different types of
physical devices, volumes, or files. A virtual disk is not synonymous with a SCSI disk and,
therefore, excludes the target ID in the disk label. Virtual disks in a logical domain have the
following format: cNdNsN, where cN is the virtual controller, dN is the virtual disk number,
and sN is the slice.
7 Set the auto-boot? and boot-device variables for the guest domain.
The following example command sets auto-boot? to true for guest domain ldg1.
primary# ldm set-var auto-boot\?=true ldg1
The following example command sets boot-device to vdisk1 for guest domain ldg1.
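primary# ldm set-var boot-device=vdisk1 ldg1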
8 Bind resources to the guest domain ldg1 and then list the domain to verify that it is bound.
primary# ldm bind-domain ldg1
primary# ldm list-domain ldg1
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
ldg1 bound ----- 5000 8 2G
9 To find the console port of the guest domain, you can look at the output of the preceding
list-domain subcommand.
You can see under the heading CONS that logical domain guest 1 (ldg1) has its console output
bound to port 5000.
10 Connect to the console of a guest domain from another terminal by logging into the control
domain and connecting directly to the console port on the local host.
$ ssh hostname.domain-name
$ telnet localhost 5000
Caution: Do not disconnect from the virtual console during the installation of the Oracle Solaris
OS.
For Oracle Solaris 11 domains, use the DefaultFixed network configuration profile (NCP).
You can enable this profile during or after installation.
During the Oracle Solaris 11 installation, select the Manual networking configuration. After the
Oracle Solaris 11 installation, ensure that the DefaultFixed NCP is enabled by using the
netadm list command. See Chapters 2, 3, 5, and 6 of Configuring and Managing Network
Components in Oracle Solaris 11.3 .
The OpenBoot PROM has a minimum size restriction for a domain. Currently, that restriction
is 12 Mbytes. If you have a domain smaller than that size, the Logical Domains Manager will
automatically boost the size of the domain to 12 Mbytes. The minimum size restriction for a
Fujitsu M10 server is 256 Mbytes. Refer to the release notes for your system firmware for
information about memory size requirements.
The memory dynamic reconfiguration (DR) feature enforces 256-Mbyte alignment on the
address and size of the memory involved in a given operation. See Memory Alignment on
page 324.
4 Add the DVD with the DVD-ROM media as a secondary volume and virtual disk.
The following example uses c0t0d0s2 as the DVD drive in which the Oracle Solaris media
resides, dvd_vol@primary-vds0 as a secondary volume, and vdisk_cd_media as a virtual disk.
primary# ldm add-vdsdev options=ro /dev/dsk/c0t0d0s2 dvd_vol@primary-vds0
primary# ldm add-vdisk vdisk_cd_media dvd_vol@primary-vds0 ldg1
5 Verify that the DVD is added as a secondary volume and virtual disk.
primary# ldm list-bindings
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv SP 8 8G 0.2% 22h 45m
...
VDS
NAME VOLUME OPTIONS DEVICE
primary-vds0 vol1 /dev/dsk/c2t1d0s2
dvd_vol /dev/dsk/c0t0d0s2
....
------------------------------------------------------------------------------
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
ldg1 inactive ----- 60 6G
...
DISK
NAME VOLUME TOUT DEVICE SERVER
vdisk1 vol1@primary-vds0
vdisk_cd_media dvd_vol@primary-vds0
....
2 Add the Oracle Solaris ISO file as a secondary volume and virtual disk.
The following example uses solarisdvd.iso as the Oracle Solaris ISO file,
iso_vol@primary-vds0 as a secondary volume, and vdisk_iso as a virtual disk:
primary# ldm add-vdsdev /export/solarisdvd.iso iso_vol@primary-vds0
primary# ldm add-vdisk vdisk_iso iso_vol@primary-vds0 ldg1
3 Verify that the Oracle Solaris ISO file is added as a secondary volume and virtual disk.
primary# ldm list-bindings
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv SP 8 8G 0.2% 22h 45m
...
VDS
NAME VOLUME OPTIONS DEVICE
primary-vds0 vol1 /dev/dsk/c2t1d0s2
iso_vol /export/solarisdvd.iso
....
------------------------------------------------------------------------------
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
ldg1 inactive ----- 60 6G
...
DISK
NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
vdisk1 vol1@primary-vds0
vdisk_iso iso_vol@primary-vds0
....
To perform an automated installation of the Oracle Solaris 11 OS, you can use the Automated
Installer (AI) feature. See Transitioning From Oracle Solaris 10 to Oracle Solaris 11.3 .
Modify your JumpStart profile to reflect the different disk device name format for the guest
domain.
Virtual disk device names in a logical domain differ from physical disk device names. Virtual
disk device names do not contain a target ID (tN). Instead of the usual cNtNdNsN format,
virtual disk device names use the cNdNsN format. cN is the virtual controller, dN is the virtual
disk number, and sN is the slice number.
Note: A virtual disk can appear either as a full disk or as a single-slice disk. The Oracle Solaris OS can be installed on a full disk by using a regular JumpStart profile that specifies multiple partitions. A single-slice disk only has a single partition, s0, that uses the entire disk. To install the Oracle Solaris OS on a single-slice disk, you must use a profile that has a single partition (/) that uses the entire disk. You cannot define any other partitions, such as swap. For more information about full disks and single-slice disks, see Virtual Disk Appearance on page 173.
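For example, a minimal illustrative profile for installing on a single-slice virtual disk might contain lines such as the following; the device name c0d0s0 is hypothetical:
install_type initial_install
system_type standalone
partitioning explicit
filesys c0d0s0 free /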
This chapter describes I/O domains and how to configure them in an Oracle VM Server for
SPARC environment.
You might want to configure I/O domains for the following reasons:
- An I/O domain has direct access to a physical I/O device, which avoids the performance overhead that is associated with virtual I/O. As a result, the I/O performance on an I/O domain more closely matches the I/O performance on a bare-metal system.
- An I/O domain can host virtual I/O services to be used by guest domains.
For information about configuring I/O domains, see the following chapters:
- Chapter 6, Creating a Root Domain by Assigning PCIe Buses
- Chapter 8, Creating an I/O Domain by Using Direct I/O
- Chapter 7, Creating an I/O Domain by Using PCIe SR-IOV Virtual Functions
- Chapter 9, Using Non-primary Root Domains
General Guidelines for Creating an I/O Domain
Note: You cannot migrate a domain that has PCIe buses, PCIe endpoint devices, or SR-IOV
virtual functions. For information about other migration limitations, see Chapter 13,
Migrating Domains.
This type of direct access to I/O devices means that more I/O bandwidth is available to provide the following:
- Services to the applications in the I/O domain
- Virtual I/O services to guest domains
The following basic guidelines enable you to effectively use the I/O bandwidth:
- Assign CPU resources at the granularity of CPU cores. Assign one or more CPU cores based on the type of I/O device and the number of I/O devices in the I/O domain. For example, a 1-Gbps Ethernet device might require fewer CPU cores to use the full bandwidth compared to a 10-Gbps Ethernet device.
- Abide by memory requirements. Memory requirements depend on the type of I/O device that is assigned to the domain. A minimum of 4 Gbytes is recommended per I/O device. The more I/O devices you assign, the more memory you must allocate.
- When you use the PCIe SR-IOV feature, follow the same guidelines for each SR-IOV virtual function that you would use for other I/O devices. So, assign one or more CPU cores and memory (in Gbytes) to fully use the bandwidth that is available from the virtual function.
Note that creating and assigning a large number of virtual functions to a domain that does not
have sufficient CPU and memory resources is unlikely to produce an optimal configuration.
SPARC systems, up to and including the SPARC T5 and SPARC M6 platforms, provide a finite
number of interrupts, so Oracle Solaris limits the number of interrupts that each device can use.
The default limit should match the needs of a typical system configuration but you might need
to adjust this value for certain system configurations. For more information, see Adjusting the
Interrupt Limit on page 380.
This chapter describes how to create a root domain by assigning PCIe buses.
Creating a Root Domain by Assigning PCIe Buses on page 69
The following diagram shows a system that has three root complexes, pci_0, pci_1, and pci_2.
Creating a Root Domain by Assigning PCIe Buses
The maximum number of root domains that you can create with PCIe buses depends on the number of PCIe buses that are available on the server. Use the ldm list-io command to determine the number of PCIe buses available on your system.
When you assign a PCIe bus to a root domain, all devices on that bus are owned by that root domain. You can assign any of the PCIe endpoint devices on that bus to other domains.
While the root domain is stopped or in delayed reconfiguration, you can run one or more of the
ldm add-io and ldm remove-io commands before you reboot the root domain. To minimize
domain downtime, plan ahead before assigning or removing PCIe buses.
For root domains, both primary and non-primary, use delayed reconfiguration. After you
have added or removed the PCIe buses, reboot the root domain to make the changes take
effect.
primary# ldm start-reconf root-domain
Add or remove the PCIe bus by using the ldm add-io or ldm remove-io command
primary# ldm stop -r domain-name
Note that you can use delayed reconfiguration only if the domain already owns a PCIe bus.
For non-root domains, stop the domain and then add or remove the PCIe bus.
primary# ldm stop domain-name
Add or remove the PCIe bus by using the ldm add-io or ldm remove-io command
primary# ldm start domain-name
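For example, the following sketch uses a delayed reconfiguration to remove an illustrative bus, pci_1, from the primary domain:
primary# ldm start-reconf primary
primary# ldm remove-io pci_1 primary
primary# shutdown -y -g0 -i6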
The dynamic PCIe bus assignment feature is enabled when your system runs the required
firmware and software. See Dynamic PCIe Bus Assignment Requirements on page 71. If your
system does not run the required firmware and software, the ldm add-io and ldm remove-io
commands fail gracefully.
When enabled, you can run the ldm add-io and ldm remove-io commands without stopping
the root domain or putting the root domain in delayed reconfiguration.
To use the dynamic PCIe bus assignment feature, your system must run at least the 9.4.2
version of the system firmware, SPARC T7 series servers and SPARC M7 series servers must
run at least 9.4.3, and the Fujitsu M10 server must run at least XCP2240.
Caution All internal disks on the supported servers might be connected to a single PCIe bus. If a
domain is booted from an internal disk, do not remove that bus from the domain.
Ensure that you do not remove a bus that has devices that are used by a domain, such as
network ports or usbecm devices. If you remove the wrong bus, a domain might not be able to
access the required devices and could become unusable. To remove a bus that has devices that
are used by a domain, reconfigure that domain to use devices from other buses. For example,
you might have to reconfigure the domain to use a different on-board network port or a PCIe
card from a different PCIe slot.
On certain SPARC servers, you can remove a PCIe bus that contains USB, graphics controllers,
and other devices. However, you cannot add such a PCIe bus to any other domain. Such PCIe
buses can be added only to the primary domain.
In this example, the primary domain uses only a ZFS pool (rpool) and network interface
(igb0). If the primary domain uses more devices, repeat Steps 2-4 for each device to ensure that
none are located on the bus that will be removed.
You can add a bus to or remove a bus from a domain by using its device path (pci@nnn) or its
pseudonym (pci_n). The ldm list-bindings primary or ldm list -l -o physio primary
command shows the following:
pci@400 corresponds to pci_0
pci@500 corresponds to pci_1
pci@600 corresponds to pci_2
pci@700 corresponds to pci_3
1 Verify that the primary domain owns more than one PCIe bus.
primary# ldm list-io
NAME TYPE BUS DOMAIN STATUS
---- ---- --- ------ ------
2 Determine the device path of the boot disk that must be retained.
For UFS file systems, run the df / command to determine the device path of the boot disk.
primary# df /
/ (/dev/dsk/c0t5000CCA03C138904d0s0):22755742 blocks 2225374 files
For ZFS file systems, first run the df / command to determine the pool name. Then, run the
zpool status command to determine the device path of the boot disk.
primary# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
c0t5000CCA03C138904d0s0 ONLINE 0 0 0
For a disk that is managed with Solaris I/O multipathing, determine the PCIe bus under
which the boot disk is connected by using the mpathadm command.
Starting with the SPARC T3 servers, the internal disks are managed by Solaris I/O
multipathing.
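For example, you might run a command of the following form, where the logical unit name is illustrative and should be the boot disk that you identified in the previous step; the output would resemble the following listing:
primary# mpathadm show lu /dev/rdsk/c0t5000CCA03C138904d0s2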
Revision: A2B0
Name Type: unknown type
Name: 5000cca03c138904
Asymmetric: no
Current Load Balance: round-robin
Logical Unit Group ID: NA
Auto Failback: on
Auto Probing: NA
Paths:
Initiator Port Name: w50800200014100c8
Target Port Name: w5000cca03c138905
Override Path: NA
Path State: OK
Disabled: no
Target Ports:
Name: w5000cca03c138905
Relative ID: 0
For a disk that is not managed with Solaris I/O multipathing, determine the physical device
to which the block device is linked by using the ls -l command.
Use this command for a disk on an UltraSPARC T2 or UltraSPARC T2 Plus system that is
not managed with Solaris I/O multipathing.
The following example uses block device c0t1d0s0:
primary# ls -l /dev/dsk/c0t1d0s0
lrwxrwxrwx 1 root root 49 Oct 1 10:39 /dev/dsk/c0t1d0s0 ->
../../devices/pci@400/pci@0/pci@1/scsi@0/sd@1,0:a
In this example, the physical device for the primary domain's boot disk is connected to the
pci@400 bus.
6 Remove a bus that does not contain the boot disk or the network interface from the primary
domain.
In this example, the pci_2 bus is being removed from the primary domain.
Dynamic method:
Ensure that the devices in the pci_2 bus are not in use by the primary domain OS. If they
are, this command might fail to remove the bus. Use the static method to forcibly remove
the pci_2 bus.
primary# ldm remove-io pci_2 primary
Static method:
Before you remove the bus, you must initiate a delayed reconfiguration.
primary# ldm start-reconf primary
primary# ldm remove-io pci_2 primary
primary# shutdown -y -g0 -i6
The bus that the primary domain uses for the boot disk and the network device cannot be
assigned to other domains. You can assign any of the other buses to another domain. In this
example, the pci@600 bus (pci_2) is not used by the primary domain, so you can reassign it to another
domain.
Dynamic method:
primary# ldm add-io pci_2 ldg1
Static method:
Before you add the bus, you must stop the domain.
primary# ldm stop-domain ldg1
primary# ldm add-io pci_2 ldg1
primary# ldm start-domain ldg1
9 Confirm that the correct bus is still assigned to the primary domain and that the correct bus is
assigned to domain ldg1.
primary# ldm list-io
NAME TYPE BUS DOMAIN STATUS
---- ---- --- ------ ------
pci_0 BUS pci_0 primary
pci_1 BUS pci_1 primary
pci_2 BUS pci_2 ldg1
pci_3 BUS pci_3 primary
/SYS/MB/PCIE1 PCIE pci_0 primary EMP
/SYS/MB/SASHBA0 PCIE pci_0 primary OCC
/SYS/MB/NET0 PCIE pci_0 primary OCC
/SYS/MB/PCIE5 PCIE pci_1 primary EMP
/SYS/MB/PCIE6 PCIE pci_1 primary EMP
/SYS/MB/PCIE7 PCIE pci_1 primary EMP
/SYS/MB/PCIE2 PCIE pci_2 ldg1 EMP
/SYS/MB/PCIE3 PCIE pci_2 ldg1 EMP
/SYS/MB/PCIE4 PCIE pci_2 ldg1 EMP
/SYS/MB/PCIE8 PCIE pci_3 primary EMP
/SYS/MB/SASHBA1 PCIE pci_3 primary OCC
/SYS/MB/NET2 PCIE pci_3 primary OCC
/SYS/MB/NET0/IOVNET.PF0 PF pci_0 primary
/SYS/MB/NET0/IOVNET.PF1 PF pci_0 primary
/SYS/MB/NET2/IOVNET.PF0 PF pci_3 primary
/SYS/MB/NET2/IOVNET.PF1 PF pci_3 primary
This output confirms that PCIe buses pci_0, pci_1, and pci_3 and their devices are assigned to
the primary domain. It also confirms that PCIe bus pci_2 and its devices are assigned to the
ldg1 domain.
SR-IOV Overview
Note Because root domains cannot have dependencies on other root domains, a root domain
that owns a PCIe bus cannot have its PCIe endpoint devices or SR-IOV virtual functions
assigned to another root domain. However, you can assign a PCIe endpoint device or virtual
function from a PCIe bus to the root domain that owns that bus.
The Peripheral Component Interconnect Express (PCIe) single root I/O virtualization
(SR-IOV) implementation is based on version 1.1 of the standard as defined by the PCI-SIG.
The SR-IOV standard enables the efficient sharing of PCIe devices among virtual machines and
is implemented in the hardware to achieve I/O performance that is comparable to native
performance. The SR-IOV specification defines a standard mechanism by which newly created
virtual devices enable a virtual machine to be directly connected to the I/O device.
A single I/O resource, which is known as a physical function, can be shared by many virtual
machines. The shared devices provide dedicated resources and also use shared common
resources. In this way, each virtual machine has access to unique resources. Therefore, a PCIe
device, such as an Ethernet port, that is SR-IOV-enabled with appropriate hardware and OS
support can appear as multiple, separate physical devices, each with its own PCIe configuration
space.
For more information about SR-IOV, see the PCI-SIG web site (http://www.pcisig.com/).
The following figure shows the relationship between virtual functions and a physical function in
an I/O domain.
Each SR-IOV device can have a physical function and each physical function can have up to 256
virtual functions associated with it. This number is dependent on the particular SR-IOV device.
The virtual functions are created by the physical function.
After SR-IOV is enabled in the physical function, the PCI configuration space of each virtual
function can be accessed by the bus, device, and function number of the physical function. Each
virtual function has a PCI memory space, which is used to map its register set. The virtual
function device drivers operate on the register set to enable its functionality and the virtual
function appears as an actual PCI device. After creation, you can directly assign a virtual
function to an I/O domain. This capability enables the virtual function to share the physical
device and to perform I/O without CPU and hypervisor software overhead.
You might want to use the SR-IOV feature in your environment to reap the following benefits:
Higher performance and reduced latency: direct access to hardware from a virtual
machine environment
Cost reduction: capital and operational expenditure savings, which include:
Power savings
Reduced adapter count
Less cabling
Fewer switch ports
The Oracle VM Server for SPARC SR-IOV implementation includes both static and dynamic
configuration methods. For more information, see Static SR-IOV on page 84 and Dynamic
SR-IOV on page 85.
The Oracle VM Server for SPARC SR-IOV feature enables you to perform the following
operations:
Creating a virtual function on a specified physical function
Destroying a specified virtual function on a physical function
Assigning a virtual function to a domain
Removing a virtual function from a domain
To create and destroy virtual functions in the SR-IOV physical function devices, you must first
enable I/O virtualization on that PCIe bus. You can use the ldm set-io or ldm add-io
command to set the iov property to on. You can also use the ldm add-domain or ldm
set-domain command to set the rc-add-policy property to iov. See the ldm(1M) man page.
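For example, the following sketch enables I/O virtualization on a bus that is already assigned to a root domain and sets the policy on a domain; the bus name pci_0 and the domain name ldg1 are illustrative:
primary# ldm set-io iov=on pci_0
primary# ldm set-domain rc-add-policy=iov ldg1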
Note On SPARC M7 series servers, SPARC T7 series servers, and Fujitsu M10 servers, PCIe
buses are enabled for I/O virtualization by default.
Assigning an SR-IOV virtual function to a domain creates an implicit dependency on the domain
providing the SR-IOV physical function service. You can view these dependencies or view
domains that depend on this SR-IOV physical function by using the ldm list-dependencies
command. See Listing Domain I/O Dependencies on page 382.
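For example, the following illustrative command lists the domains on which the ldg1 domain depends:
primary# ldm list-dependencies ldg1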
Refer to your platform's hardware documentation to verify which cards can be used on your
platform. For an up-to-date list of supported PCIe cards, see
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=REFERENCE&id=1325454.1.
Ethernet SR-IOV. To use the SR-IOV feature, you can use on-board PCIe SR-IOV
devices as well as PCIe SR-IOV plug-in cards. All on-board SR-IOV devices in a given
platform are supported unless otherwise explicitly stated in the platform
documentation.
InfiniBand SR-IOV. InfiniBand devices are supported on the SPARC T4 server, SPARC
T5 server, SPARC T7 series server, SPARC M5 server, SPARC M6 server, SPARC M7
series server, and Fujitsu M10 server.
Fibre Channel SR-IOV. Fibre Channel devices are supported on the SPARC T4 server,
SPARC T5 server, SPARC T7 series server, SPARC M5 server, SPARC M6 server, SPARC
M7 series server, and Fujitsu M10 server.
For an up-to-date list of supported devices on Fujitsu M10 platforms, see Fujitsu
M10/SPARC M10 Systems PCI Card Installation Guide in the product notes for your
model at http://www.fujitsu.com/global/services/computing/server/sparc/downloads/manual/.
Firmware Requirements.
Ethernet SR-IOV. To use the dynamic SR-IOV feature, SPARC T4 servers must run at
least version 8.4.0.a of the system firmware. SPARC T5 servers, SPARC M5 servers, and
SPARC M6 servers must run at least version 9.1.0.a of the system firmware. SPARC T7
series servers and SPARC M7 series servers must run at least version 9.4.3 of the system
firmware. Fujitsu M10 servers must run at least version XCP2210 of the system
firmware. The SPARC T3 server supports only the static SR-IOV feature.
To use the SR-IOV feature, PCIe SR-IOV devices must run at least device firmware
version 3.01. Perform the following steps to update the firmware for the Sun Dual
10-Gigabit Ethernet SFP+ PCIe 2.0 network adapters:
1. Determine whether you need to upgrade the FCode version on the device.
Perform these commands from the ok prompt:
{0} ok cd path-to-device
{0} ok .properties
The version value in the output must be one of the following:
LP Sun Dual 10GbE SFP+ PCIe 2.0 LP FCode 3.01 4/2/2012
PEM Sun Dual 10GbE SFP+ PCIe 2.0 EM FCode 3.01 4/2/2012
FEM Sun Dual 10GbE SFP+ PCIe 2.0 FEM FCode 3.01 4/2/2012
2. Download patch ID 13932765 from My Oracle Support
(https://support.oracle.com/CSP/ui/flash.html#tab=PatchHomePage(page=PatchHomePage&id=h0wvdxy6())).
3. Install the patch.
The patch package includes a document that describes how to use the tool to perform
the upgrade.
InfiniBand SR-IOV. To use this feature, your system must run at least the following
version of the system firmware:
SPARC T4 Servers: 8.4
SPARC T5 Servers: 9.1.0.x
SPARC T7 Series Servers: 9.4.3
SPARC M5 and SPARC M6 Servers: 9.1.0.x
SPARC M7 Series Servers: 9.4.3
Fujitsu M10 Server: XCP2210
Use the Oracle Solaris 11.1 fwflash command to list and update the firmware in the
primary domain. To list the current firmware version, use the fwflash -lc IB
command. To update the firmware, use the fwflash -f firmware-file -d device
command. See the fwflash(1M) man page.
To use InfiniBand SR-IOV, ensure that InfiniBand switches have at least firmware
version 2.1.2. You can obtain this version of the firmware by installing the following
patches:
Sun Datacenter InfiniBand Switch 36 (X2821A-Z): Patch ID 16221424
Sun Network QDR InfiniBand Gateway Switch (X2826A-Z): Patch ID 16221538
For information about how to update the firmware, see your InfiniBand switch
documentation.
Fibre Channel SR-IOV. To use this feature, your system must run at least the following
version of the system firmware:
SPARC T4 Server: 8.4.2.c
SPARC T5 Server: 9.1.2.d
SPARC T7 Series Server: 9.4.3
SPARC M5 Server: 9.1.2.d
SPARC M6 Server: 9.1.2.d
SPARC M7 Series Server: 9.4.3
Fujitsu M10 Server: XCP2210
The firmware on the Sun Storage 16 Gb Fibre Channel Universal HBA, Emulex must be
at least revision 1.1.60.1 to enable the Fibre Channel SR-IOV feature. The installation
instructions are provided with the firmware.
Caution Perform the firmware update on the Fibre Channel card only if you plan to use
the Fibre Channel SR-IOV feature.
Software Requirements.
Ethernet SR-IOV. To use the SR-IOV feature, all domains must be running at least the
Oracle Solaris 11.1 SRU 10 OS.
InfiniBand SR-IOV. The following domains must run the supported Oracle Solaris OS:
The primary domain or a non-primary root domain must run at least the Oracle
Solaris 11.1 SRU 10 OS.
The I/O domains must run at least the Oracle Solaris 11.1 SRU 10 OS.
Update the /etc/system file on any root domain that has an InfiniBand SR-IOV
physical function from which you plan to configure virtual functions.
set ldc:ldc_maptable_entries = 0x20000
Update the /etc/system file on the I/O domain to which you add a virtual function.
set rdsv3:rdsv3_fmr_pool_size = 16384
Fibre Channel SR-IOV. To use the SR-IOV feature, all domains must be running at
least the Oracle Solaris 11.1 SRU 17 OS.
See the following for more information about static and dynamic SR-IOV software
requirements:
Static SR-IOV Software Requirements on page 85
Dynamic SR-IOV Software Requirements on page 86
See the following for more information about the class-specific SR-IOV hardware
requirements:
Ethernet SR-IOV Hardware Requirements on page 89
InfiniBand SR-IOV Hardware Requirements on page 108
Fibre Channel SR-IOV Hardware Requirements on page 121
PCIe resources, such as interrupt vectors for each PCIe bus, are divided among the root
domain and I/O domains. As a result, the number of devices that you can assign to a
particular I/O domain is also limited. Make sure that you do not assign a large number of
virtual functions to the same I/O domain. There is no interrupt limitation for the SPARC T7
series servers and SPARC M7 series servers. For a description of the problems related to
SR-IOV, see Oracle VM Server for SPARC 3.3 Release Notes.
The root domain is the owner of the PCIe bus and is responsible for initializing and
managing the bus. The root domain must be active and running a version of the Oracle
Solaris OS that supports the SR-IOV feature. Shutting down, halting, or rebooting the root
domain interrupts access to the PCIe bus. When the PCIe bus is unavailable, the PCIe
devices on that bus are affected and might become unavailable.
The behavior of I/O domains with PCIe SR-IOV virtual functions is unpredictable when the
root domain is rebooted while those I/O domains are running. For instance, I/O domains
with PCIe endpoint devices might panic during or after the reboot. Upon reboot of the root
domain, you would need to manually stop and start each domain.
If the I/O domain is resilient, it can continue to operate even if the root domain that is the
owner of the PCIe bus becomes unavailable. See I/O Domain Resiliency on page 134.
SPARC systems, up to and including the SPARC T5 and SPARC M6 platforms, provide a
finite number of interrupts, so Oracle Solaris limits the number of interrupts that each
device can use. The default limit should match the needs of a typical system configuration
but you might need to adjust this value for certain system configurations. For more
information, see Adjusting the Interrupt Limit on page 380.
Static SR-IOV
The static SR-IOV method requires that the root domain be in delayed reconfiguration or the
I/O domain be stopped while performing SR-IOV operations. After you complete the
configuration steps on the root domain, you must reboot it. You must use this method when the
Oracle VM Server for SPARC 3.1 firmware is not installed in the system or when the OS version
that is installed in the respective domain does not support dynamic SR-IOV.
To create or destroy an SR-IOV virtual function, you first must initiate a delayed
reconfiguration on the root domain. Then you can run one or more ldm create-vf and ldm
destroy-vf commands to configure the virtual functions. Finally, reboot the root domain. The
following commands show how to create a virtual function on a non-primary root domain:
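A sketch of this sequence, using illustrative domain and physical function names, might be:
primary# ldm start-reconf rootdom1
primary# ldm create-vf /SYS/MB/NET0/IOVNET.PF0
primary# ldm stop-domain -r rootdom1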
To statically add a virtual function to or remove one from a guest domain, you must first stop
the guest domain. Then perform the ldm add-io and ldm remove-io commands to configure
the virtual functions. After the changes are complete, start the domain. The following
commands show how to assign a virtual function in this way:
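A sketch of this sequence, using illustrative names, might be:
primary# ldm stop-domain ldg1
primary# ldm add-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldg1
primary# ldm start-domain ldg1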
You can also add a virtual function to or remove one from a root domain instead of a guest
domain. To add an SR-IOV virtual function to or remove one from a root domain, first initiate a
delayed reconfiguration on the root domain. Then, you can run one or more of the ldm add-io
and ldm remove-io commands. Finally, reboot the root domain.
Note InfiniBand SR-IOV devices are supported only with static SR-IOV.
You can use the ldm set-io or ldm add-io command to set the iov property to on. You can
also use the ldm add-domain or ldm set-domain command to set the rc-add-policy property
to iov. See the ldm(1M) man page.
Rebooting the root domain affects SR-IOV, so plan your direct I/O configuration changes
carefully: group the SR-IOV related changes to the root domain so that you perform them
together and minimize root domain reboots.
Dynamic SR-IOV
The dynamic SR-IOV feature removes the following static SR-IOV requirements:
Root domain. Initiate a delayed reconfiguration on the root domain, create or destroy a
virtual function, and reboot the root domain
I/O domain. Stop the I/O domain, add or remove a virtual function, and start the I/O
domain
With dynamic SR-IOV you can dynamically create or destroy a virtual function without having
to initiate a delayed reconfiguration on the root domain. A virtual function can also be
dynamically added to or removed from an I/O domain without having to stop the domain. The
Logical Domains Manager communicates with the Logical Domains agent and the Oracle
Solaris I/O virtualization framework to effect these changes dynamically.
Note If your system does not meet the dynamic SR-IOV software and firmware requirements,
you must use the static SR-IOV method to perform SR-IOV-related tasks. See Static SR-IOV
on page 84.
Note You cannot use aggregation for Ethernet SR-IOV virtual functions because the
current multipathing implementation does not support virtual functions.
Note On SPARC M7 series servers, SPARC T7 series servers, and Fujitsu M10 servers, PCIe
buses are enabled for I/O virtualization by default.
Enable I/O virtualization if the specified PCIe bus already is assigned to a root domain.
primary# ldm set-io iov=on bus
Enable I/O virtualization while you add a PCIe bus to a root domain.
primary# ldm add-io iov=on bus
If you have not yet enabled I/O virtualization, which requires using the static method, combine
this step with the steps to create virtual functions. By combining these steps, you need to reboot
the root domain only once.
Even when dynamic SR-IOV is available, the recommended practice is to create all the virtual
functions at once because you might not be able to create them dynamically after they have been
assigned to I/O domains.
In the static SR-IOV case, planning helps you to avoid performing multiple root domain
reboots, each of which might negatively affect I/O domains.
For information about I/O domains, see General Guidelines for Creating an I/O Domain on
page 68.
Use the following general steps to plan and perform SR-IOV virtual function configuration and
assignment:
1. Determine which PCIe SR-IOV physical functions are available on your system and which
ones are best suited to your needs.
Use the following commands to identify the required information:
ldm list-io: Identifies the available SR-IOV physical function devices.
prtdiag -v: Identifies which PCIe SR-IOV cards and on-board devices are available.
ldm list-io -l pf-name: Identifies additional information about a specified physical
function, such as the maximum number of virtual functions that are supported by the device.
ldm list-io -d pf-name: Identifies the device-specific properties that are supported by
the device. See Advanced SR-IOV Topics: Ethernet SR-IOV on page 101.
2. Enable I/O virtualization operations for a PCIe bus.
See How to Enable I/O Virtualization for a PCIe Bus on page 87.
3. Create the required number of virtual functions on the specified SR-IOV physical function.
Use the following command to create the virtual functions for the physical function:
primary# ldm create-vf -n max pf-name
For more information, see How to Create an Ethernet SR-IOV Virtual Function on
page 90, How to Create an InfiniBand Virtual Function on page 108, and How to Create a
Fibre Channel SR-IOV Virtual Function on page 124.
4. Assign the virtual functions that you created to their target I/O domains.
For more information, see How to Add an Ethernet SR-IOV Virtual Function to an I/O
Domain on page 99, How to Add an InfiniBand Virtual Function to an I/O Domain on
page 113, and How to Add a Fibre Channel SR-IOV Virtual Function to an I/O Domain on
page 131.
5. Use the ldm add-config command to save the configuration to the SP.
If you cannot use multipathing or must plumb the physical function, use the static method to
create the virtual functions. See Static SR-IOV on page 84.
Note that the mac-addr, alt-mac-addrs, and mtu network-specific properties can be changed
only when the virtual function is assigned to the primary domain while in a delayed
reconfiguration.
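For example, the following sketch changes the mtu property of an illustrative virtual function that is assigned to the primary domain:
primary# ldm start-reconf primary
primary# ldm set-io mtu=9000 /SYS/MB/NET0/IOVNET.PF0.VF0
primary# shutdown -i6 -g0 -y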
Attempts to change these properties fail when the virtual function is assigned as follows:
When the virtual function is assigned to an active I/O domain: A property change request is
rejected because the change must be made when the owning domain is in the inactive or
bound state.
When the virtual function is assigned to a non-primary domain and a delayed
reconfiguration is already in effect: A property change request fails with an error message.
The pvid and vid network-specific properties can be changed without restriction.
2 If I/O virtualization for the bus that has the physical function is not enabled already, enable it.
Perform this step only if I/O virtualization is not enabled already for the bus that has the
physical function.
See How to Enable I/O Virtualization for a PCIe Bus on page 87.
3 Create a single virtual function or multiple virtual functions from an Ethernet physical function
either dynamically or statically.
After you create one or more virtual functions, you can assign them to a guest domain.
Dynamic method:
To create multiple virtual functions from a physical function all at the same time, use the
following command:
primary# ldm create-vf -n number | max pf-name
Use the ldm create-vf -n max command to create all the virtual functions for that
physical function at one time.
Caution When your system uses an Intel 10-Gbit Ethernet card, maximize performance
by creating no more than 31 virtual functions from each physical function.
You can use either the path name or the pseudonym name to specify virtual functions.
However, the recommended practice is to use the pseudonym name.
To create one virtual function from a physical function, use the following command:
ldm create-vf [mac-addr=num] [alt-mac-addrs=[auto|num1,[auto|num2,...]]]
[pvid=pvid] [vid=vid1,vid2,...] [mtu=size] [name=value...] pf-name
Note If not explicitly assigned, the MAC address is automatically allocated for network
devices.
Use this command to create one virtual function for that physical function. You can also
manually specify Ethernet class-specific property values.
Note Sometimes a newly created virtual function is not available for immediate use while
the OS probes for IOV devices. Use the ldm list-io command to determine whether the
parent physical function and its child virtual functions have the INV value in the Status
column. If they have this value, wait until the ldm list-io output no longer shows the INV
value in the Status column (about 45 seconds) before you use that physical function or any
of its child virtual functions. If this status persists, there is a problem with the device.
A device status might be INV immediately following a root domain reboot (including that of
the primary) or immediately after you use the ldm create-vf or ldm destroy-vf
command.
Static method:
b. Create a single virtual function or multiple virtual functions from an Ethernet physical
function.
Use the same commands as shown previously to dynamically create the virtual
functions.
The following command shows more details about the specified physical function. The maxvfs
value indicates the maximum number of virtual functions that is supported by the device.
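Such a command might take the following form, using this example's physical function name:
primary# ldm list-io -l /SYS/MB/NET0/IOVNET.PF0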
Ensure that I/O virtualization is enabled on the pci_0 PCIe bus. See How to Enable I/O
Virtualization for a PCIe Bus on page 87.
Now, you can use the ldm create-vf command to create the virtual function from the
/SYS/MB/NET0/IOVNET.PF0 physical function.
Example 7-4 Dynamically Creating an Ethernet Virtual Function With Two Alternate MAC Addresses
This example dynamically creates a virtual function that has two alternate MAC addresses. One
MAC address is automatically allocated, and the other is explicitly specified as
00:14:2f:f9:14:c2.
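The command might take the following form, where the physical function name is illustrative:
primary# ldm create-vf alt-mac-addrs=auto,00:14:2f:f9:14:c2 /SYS/MB/NET0/IOVNET.PF0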
First you initiate a delayed reconfiguration on the primary domain and then enable I/O
virtualization on the pci_0 PCIe bus. Because the pci_0 bus has already been assigned to the
primary root domain, use the ldm set-io command to enable I/O virtualization.
Now, you can use the ldm create-vf command to create the virtual function from the
/SYS/MB/NET0/IOVNET.PF0 physical function.
Finally, reboot the primary root domain to make the changes take effect.
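A sketch of the complete static sequence, using this example's names, might be:
primary# ldm start-reconf primary
primary# ldm set-io iov=on pci_0
primary# ldm create-vf /SYS/MB/NET0/IOVNET.PF0
primary# shutdown -i6 -g0 -y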
Note that the ldm create-vf -n command creates multiple virtual functions that are set with
default property values, if appropriate. You can later specify non-default property values by
using the ldm set-io command.
2 Destroy a single virtual function or multiple virtual functions either dynamically or statically.
Dynamic method:
To destroy some or all of the virtual functions from a physical function at one time, use
the following command:
primary# ldm destroy-vf -n number | max pf-name
Use the ldm destroy-vf -n max command to destroy all the virtual functions for that
physical function at one time.
If you specify number as an argument to the -n option, the last number of virtual
functions are destroyed. Use this method because it performs the operation with only one
physical function device driver state transition.
You can use either the path name or the pseudonym name to specify virtual functions.
However, the recommended practice is to use the pseudonym name.
A device status might be INV immediately following a root domain reboot (including that of
the primary) or immediately after you use the ldm create-vf or ldm destroy-vf
command.
Static method:
To destroy all of the virtual functions from the specified physical function at the same
time, use the following command:
primary# ldm destroy-vf -n number | max pf-name
You can use either the path name or the pseudonym name to specify virtual
functions. However, the recommended practice is to use the pseudonym name.
If you cannot use this dynamic method, use the static method instead. See Static SR-IOV on
page 84.
You can use the ldm set-io command to modify the following properties:
mac-addr, alt-mac-addrs, and mtu
To change these virtual function properties, stop the domain that owns the virtual function,
use the ldm set-io command to change the property values, and start the domain.
pvid and vid
You can dynamically change these properties while the virtual functions are assigned to a
domain. Note that doing so might result in a change to the network traffic of an active virtual
function; setting the pvid property enables a transparent VLAN. Setting the vid property to
specify VLAN IDs permits VLAN traffic to those specified VLANs.
Device-specific properties
Use the ldm list-io -d pf-name command to view the list of valid device-specific
properties. You can modify these properties for both the physical function and the virtual
function. You must use the static method to modify device-specific properties. See Static
SR-IOV on page 84. For more information about device-specific properties, see Advanced
SR-IOV Topics: Ethernet SR-IOV on page 101.
Note that this command dynamically changes the VLAN association for a virtual function.
To use these VLANs, the VLAN interfaces in the I/O domains must be configured by using
the appropriate Oracle Solaris OS networking commands.
The following example sets the pvid property value to 2 for the
/SYS/MB/NET0/IOVNET.PF0.VF0 virtual function, which transparently makes the virtual
function part of VLAN 2. That is, the virtual function does not see any tagged VLAN
traffic.
primary# ldm set-io pvid=2 /SYS/MB/NET0/IOVNET.PF0.VF0
The following example assigns three automatically allocated alternate MAC addresses to a
virtual function. The alternate addresses enable the creation of Oracle Solaris 11 virtual
network interface cards (VNICs) on top of a virtual function. Note that to use VNICs, you
must run the Oracle Solaris 11 OS in the domain.
Note Before you run this command, stop the domain that owns the virtual function.
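The command might take the following form, where the virtual function name is illustrative:
primary# ldm set-io alt-mac-addrs=auto,auto,auto /SYS/MB/NET0/IOVNET.PF0.VF0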
The following example sets the device-specific unicast-slots property to 12 for the
specified virtual function. To find the device-specific properties that are valid for a physical
function, use the ldm list-io -d pf-name command.
primary# ldm set-io unicast-slots=12 /SYS/MB/NET0/IOVNET.PF0.VF0
All configuration changes for other domains are disabled until the primary
domain reboots, at which time the new configuration for the primary domain
will also take effect.
1 Identify the virtual function that you want to add to an I/O domain.
primary# ldm list-io
The device path name for the virtual function in the domain is the path shown in the
list-io -l output.
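The dynamic assignment itself is a single command of the following form, where the virtual function and domain names are illustrative:
primary# ldm add-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldg1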
If you cannot add the virtual function dynamically, use the static method:
Caution Before removing the virtual function from the domain, ensure that it is not critical for
booting that domain.
1 Identify the virtual function that you want to remove from an I/O domain.
primary# ldm list-io
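Then, remove it with a command of the following form, where the virtual function name is illustrative:
primary# ldm remove-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldg1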
If the command succeeds, the virtual function is removed from the ldg1 domain. When ldg1 is
restarted, the specified virtual function no longer appears in that domain.
If you cannot remove the virtual function dynamically, use the static method:
You can create virtual I/O services and assign them to I/O domains. These virtual I/O
services can be created on the same physical function from which virtual functions are also
created. For example, you can use an on-board 1-Gbps network device (net0 or igb0) as a
network back-end device for a virtual switch and also statically create virtual functions from
the same physical function device.
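For example, the following sketch creates a virtual switch that uses the on-board device as its back end; the back-end device net0 and the switch name primary-vsw0 are illustrative:
primary# ldm add-vsw net-dev=net0 primary-vsw0 primary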
Note When booting the Oracle Solaris OS from a virtual function device, verify that the Oracle
Solaris OS that is being loaded has virtual function device support. If so, you can continue with
the rest of the installation as planned.
The ldm list-io -d command shows device-specific properties that are exported by the
specified physical function device driver. The information for each property includes its name,
brief description, default value, maximum values, and one or more of the following flags:
P: Applies to a physical function
V: Applies to a virtual function
R: Read-only or informative parameter only
Use the ldm create-vf or ldm set-io command to set the read-write properties for a physical
function or a virtual function. Note that to set a device-specific property, you must use the static
method. See Static SR-IOV on page 84.
The following example shows the device-specific properties that are exported by the on-board
Intel 1-Gbps SR-IOV device:
max-config-vfs
Flags = PR
Default = 7
Descr = Max number of configurable VFs
max-vf-mtu
Flags = VR
Default = 9216
Descr = Max MTU supported for a VF
max-vlans
Flags = VR
Default = 32
Descr = Max number of VLAN filters supported
pvid-exclusive
Flags = VR
Default = 1
Descr = Exclusive configuration of pvid required
unicast-slots
Flags = PV
Default = 0 Min = 0 Max = 24
Descr = Number of unicast mac-address slots
The following example shows the creation of four VNICs on an SR-IOV virtual function. The
first command assigns alternate MAC addresses to the virtual function device. This command
uses the automatic allocation method to allocate four alternate MAC addresses to the
/SYS/MB/NET0/IOVNET.PF0.VF0 virtual function device:
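The command might take the following form:
primary# ldm set-io alt-mac-addrs=auto,auto,auto,auto /SYS/MB/NET0/IOVNET.PF0.VF0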
The next command starts the ldg1 I/O domain. Because the auto-boot? property is set to true
in this example, the Oracle Solaris 11 OS is also booted in the I/O domain.
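For example:
primary# ldm start-domain ldg1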
The following command uses the Oracle Solaris 11 dladm command in the guest domain to
show the virtual function that has alternate MAC addresses. This output shows that the net30
virtual function has four alternate MAC addresses.
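A representative command, run in the guest domain, might be the following; the guest prompt ldg1# is illustrative, and the -m option shows the MAC addresses that are associated with the link:
ldg1# dladm show-phys -m net30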
The following commands create four VNICs. Note that attempts to create more VNICs than are
specified by using alternate MAC addresses will fail.
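A sketch of those commands, with illustrative VNIC names, might be:
ldg1# dladm create-vnic -l net30 vnic0
ldg1# dladm create-vnic -l net30 vnic1
ldg1# dladm create-vnic -l net30 vnic2
ldg1# dladm create-vnic -l net30 vnic3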
Before You Begin Before you begin, ensure that you have enabled I/O virtualization for the PCIe bus that is the
parent of the physical function from which you create virtual functions. See How to Enable I/O
Virtualization for a PCIe Bus on page 87.
1 Identify an SR-IOV physical function to share with an I/O domain that uses the SR-IOV feature.
primary# ldm list-io
You can run this command for each virtual function that you want to create. You can also use
the -n option to create more than one virtual function from the same physical function in a
single command. See Example 7-6 and the ldm(1M) man page.
Note This command fails if other virtual functions have already been created from the
associated physical function and if any of those virtual functions are bound to another domain.
4 Assign the virtual function that you created in Step 2 to its target I/O domain.
primary# ldm add-io vf-name domain-name
Note If the OS in the target I/O domain does not support dynamic SR-IOV, you must use the
static method. See Static SR-IOV on page 84.
Example 7-12 Dynamically Creating an I/O Domain by Assigning an SR-IOV Virtual Function to It
The following dynamic example shows how to create a virtual function,
/SYS/MB/NET0/IOVNET.PF0.VF0, for a physical function, /SYS/MB/NET0/IOVNET.PF0, and
assign the virtual function to the ldg1 I/O domain.
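A sketch of the command sequence, using this example's names, might be:
primary# ldm create-vf /SYS/MB/NET0/IOVNET.PF0
Created new vf: /SYS/MB/NET0/IOVNET.PF0.VF0
primary# ldm add-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldg1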
The following command shows that the virtual function has been added to the ldg1 domain.
Example 7-13 Statically Creating an I/O Domain by Assigning an SR-IOV Virtual Function to It
The following static example shows how to create a virtual function,
/SYS/MB/NET0/IOVNET.PF0.VF0, for a physical function, /SYS/MB/NET0/IOVNET.PF0, and
assign the virtual function to the ldg1 I/O domain.
First, initiate a delayed reconfiguration on the primary domain, enable I/O virtualization, and
create the virtual function from the /SYS/MB/NET0/IOVNET.PF0 physical function.
Stop the ldg1 domain, add the virtual function, and start the domain.
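A sketch of the complete sequence, using this example's names, might be:
primary# ldm start-reconf primary
primary# ldm set-io iov=on pci_0
primary# ldm create-vf /SYS/MB/NET0/IOVNET.PF0
primary# shutdown -i6 -g0 -y
primary# ldm stop-domain ldg1
primary# ldm add-io /SYS/MB/NET0/IOVNET.PF0.VF0 ldg1
primary# ldm start-domain ldg1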
The following command shows that the virtual function has been added to the ldg1 domain.
Using InfiniBand SR-IOV Virtual Functions
To minimize downtime, run all of the SR-IOV commands as a group while the root domain is
in delayed reconfiguration or a guest domain is stopped. The SR-IOV commands that are
limited in this way are the ldm create-vf, ldm destroy-vf, ldm add-io, and ldm remove-io
commands.
Typically, virtual functions are assigned to more than one guest domain. A reboot of the root
domain affects all of the guest domains that have been assigned the root domain's virtual
functions.
Because an unused InfiniBand virtual function has very little overhead, you can avoid
downtime by creating the necessary virtual functions ahead of time, even if they are not used
immediately.
For InfiniBand SR-IOV support, the root domain must be running at least the Oracle Solaris
11.1 SRU 10 OS. The I/O domains must run at least the Oracle Solaris 11.1 SRU 10 OS.
3 Create one or more virtual functions that are associated with the physical functions from that
root domain.
primary# ldm create-vf pf-name
You can run this command for each virtual function that you want to create. You can also use
the -n option to create more than one virtual function from the same physical function in a
single command. See Example 7-6 and the ldm(1M) man page.
The following command shows more details about the specified physical function. The maxvfs
value indicates the maximum number of virtual functions that are supported by the device.
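Such a command might take the following form, using this example's physical function name:
primary# ldm list-io -l /SYS/MB/RISER1/PCIE4/IOVIB.PF0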
The following example shows how to create a static virtual function. First, initiate a delayed
reconfiguration on the primary domain and enable I/O virtualization on the pci_0 PCIe bus.
Because the pci_0 bus has been assigned already to the primary root domain, use the ldm
set-io command to enable I/O virtualization.
Now, use the ldm create-vf command to create a virtual function from the
/SYS/MB/RISER1/PCIE4/IOVIB.PF0 physical function.
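The command might take the following form:
primary# ldm create-vf /SYS/MB/RISER1/PCIE4/IOVIB.PF0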
Note that you can create more than one virtual function during the same delayed
reconfiguration. The following command creates a second virtual function:
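Repeating the same command creates the next virtual function in sequence:
primary# ldm create-vf /SYS/MB/RISER1/PCIE4/IOVIB.PF0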
Finally, reboot the primary root domain to make the changes take effect.
A virtual function can be destroyed if it is not currently assigned to a domain. A virtual function
can be destroyed only in the reverse sequential order of creation, so only the last virtual function
that was created can be destroyed. The resulting configuration is validated by the physical
function driver.
2 Destroy one or more virtual functions that are associated with the physical functions from that
root domain.
primary# ldm destroy-vf vf-name
You can run this command for each virtual function that you want to destroy. You can also use
the -n option to destroy more than one virtual function from the same physical function in a
single command. See Example 7-8 and the ldm(1M) man page.
The ldm list-io command shows information about the buses, physical functions, and virtual
functions.
You can obtain more details about the physical function and related virtual functions by using
the ldm list-io -l command.
A virtual function can be destroyed only if it is not assigned to a domain. The DOMAIN column
of the ldm list-io -l output shows the name of any domain to which a virtual function is
assigned. Also, virtual functions must be destroyed in the reverse order of their creation.
Therefore, in this example, you must destroy the /SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF1
virtual function before you can destroy the /SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF0 virtual
function.
After you identify the proper virtual function, you can destroy it. First, initiate a delayed
reconfiguration.
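A sketch of those commands, using this example's names, might be:
primary# ldm start-reconf primary
primary# ldm destroy-vf /SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF1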
You can issue more than one ldm destroy-vf command while in delayed reconfiguration.
Thus, you could also destroy the /SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF0.
Finally, reboot the primary root domain to make the changes take effect.
To add a virtual function to an I/O domain, the virtual function must be unassigned. The
DOMAIN column indicates the name of the domain to which a virtual function is assigned. In
this case, the /SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF2 virtual function is not assigned to a domain.
To add a virtual function to a domain, the domain must be in the inactive or bound state.
The ldm list-domain output shows that the iodom1 I/O domain is active, so it must be
stopped.
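For example:
primary# ldm stop-domain iodom1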
Now you can add the virtual function to the I/O domain.
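The command might take the following form, using this example's names:
primary# ldm add-io /SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF2 iodom1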
Note that you can add more than one virtual function while an I/O domain is stopped. For
example, you might add other unassigned virtual functions such as
/SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF3 to iodom1. After you add the virtual functions, you
can restart the I/O domain.
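For example:
primary# ldm start-domain iodom1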
Note Before removing the virtual function from the I/O domain, ensure that it is not critical for
booting that domain.
The DOMAIN column shows the name of the domain to which the virtual function is assigned.
The /SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF2 virtual function is assigned to iodom1.
To remove a virtual function from an I/O domain, the domain must be in the inactive or bound state.
Use the ldm list-domain command to determine the state of the domain.
Note that the DOMAIN column for the virtual function is now empty.
You can remove more than one virtual function while an I/O domain is stopped. In this
example, you could also remove the /SYS/MB/RISER1/PCIE4/IOVIB.PF0.VF3 virtual function.
After you remove the virtual functions, you can restart the I/O domain.
The ldm list-io -l command provides more detailed information about the specified
physical function device, /SYS/MB/RISER1/PCIE4/IOVIB.PF0. The maxvfs value shows that the
maximum number of virtual functions supported by the physical device is 64. For each virtual
function that is associated with the physical function, the output shows the following:
Function name
Function type
Bus name
Domain name
Optional status of the function
Device path
This ldm list-io -l output shows that VF0 and VF1 are assigned to the primary domain and
that VF2 and VF3 are assigned to the iodom1 I/O domain.
Use the ldm list-io -l command to show the Oracle Solaris device path name that is
associated with each physical function and virtual function.
Use the dladm show-phys -L command to match each IP over InfiniBand (IPoIB) instance to
its physical card. For example, the following command shows which IPoIB instances use the
card in slot PCIE4, which is the same card shown in the previous ldm list-io -l example.
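That command might be run as follows:
primary# dladm show-phys -L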
Each InfiniBand host channel adapter (HCA) device has a globally unique ID (GUID). There
are also GUIDs for each port (typically there are two ports to an HCA). An InfiniBand HCA
GUID uniquely identifies the adapter. The port GUID uniquely identifies each HCA port and
plays a role similar to a network device's MAC address. These 16-hexadecimal digit GUIDs are
used by InfiniBand management tools and diagnostic tools.
Use the dladm show-ib command to obtain GUID information about the InfiniBand SR-IOV
devices. Physical functions and virtual functions for the same device have related HCA GUID
values. The 11th hexadecimal digit of the HCA GUID shows the relationship between a physical
function and its virtual functions. Note that leading zeros are suppressed in the HCAGUID and
PORTGUID columns.
For example, physical function PF0 has two virtual functions, VF0 and VF1, which are assigned
to the primary domain. The 11th hexadecimal digit of each virtual function is incremented by
one from the related physical function. So, if the GUID for the PF0 is 8, the GUIDs for VF0 and
VF1 will be 9 and A, respectively.
The following dladm show-ib command output shows that the net5 and net6 links belong to
the physical function PF0. The net19 and net9 links belong to VF0 of the same device while the
net18 and net11 links belong to VF1.
The device in the following dladm show-phys output shows the relationship between the links
and the underlying InfiniBand port devices (ibpX).
Use the ls -l command to show the actual InfiniBand port (IB port) device paths. An IB port
device is a child of a device path that is shown in the ldm list-io -l output. A physical
function has a one-part unit address such as pciex15b3,673c@0 while virtual functions have a
two-part unit address, pciex15b3,1002@0,2. The second part of the unit address is one higher
than the virtual function number. (In this case, the second component is 2, so this device is
virtual function 1.) The following output shows that /dev/ibp0 is a physical function and
/dev/ibp5 is a virtual function.
primary# ls -l /dev/ibp0
lrwxrwxrwx 1 root root 83 Apr 18 12:02 /dev/ibp0 ->
../devices/pci@400/pci@1/pci@0/pci@0/pciex15b3,673c@0/hermon@0/ibport@1,0,ipib:ibp0
primary# ls -l /dev/ibp5
lrwxrwxrwx 1 root root 85 Apr 22 23:29 /dev/ibp5 ->
../devices/pci@400/pci@1/pci@0/pci@0/pciex15b3,1002@0,2/hermon@3/ibport@2,0,ipib:ibp5
You can use the OpenFabrics ibv_devices command to view the OpenFabrics device name
and the node (HCA) GUID. When virtual functions are present, the Type column indicates
whether the function is physical or virtual.
primary# ibv_devices
device node GUID type
------ ---------------- ----
mlx4_4 0002c90300a38910 PF
mlx4_5 0021280001a17f56 PF
mlx4_0 0002cb0300a38910 VF
mlx4_1 0002ca0300a38910 VF
mlx4_2 00212a0001a17f56 VF
mlx4_3 0021290001a17f56 VF
Using Fibre Channel SR-IOV Virtual Functions
Each Fibre Channel physical function has unique port and node world-wide name (WWN)
values that are provided by the card manufacturer. When you create virtual functions from a
Fibre Channel physical function, the virtual functions behave like a Fibre Channel HBA device.
Each virtual function must have a unique identity that is specified by the port WWN and node
WWN of the SAN fabric. You can use the Logical Domains Manager to automatically or
manually assign the port and node WWNs. By assigning your own values, you can fully control
the identity of any virtual function.
The Fibre Channel HBA virtual functions use the N_Port ID Virtualization (NPIV) method to
log in to the SAN fabric. Because of this NPIV requirement, you must connect the Fibre
Channel HBA port to an NPIV-capable Fibre Channel switch. The virtual functions are
managed entirely by the hardware or the firmware of the SR-IOV card. Other than these
exceptions, Fibre Channel virtual functions work and behave the same way as a non-SR-IOV
Fibre Channel HBA device. The SR-IOV virtual functions have the same capabilities as the
non-SR-IOV devices, so all types of SAN storage devices are supported in either configuration.
The virtual functions' unique port and node WWN values enable a SAN administrator to assign
storage to the virtual functions in the same way as for any non-SR-IOV Fibre Channel
HBA port. This management includes zoning, LUN masking, and quality of service (QoS).
You can configure the storage so that it is accessible exclusively to a specific logical domain
without being visible to the physical function in the root domain.
You can use both the static and dynamic SR-IOV methods to manage Fibre Channel SR-IOV
devices.
To use the Fibre Channel SR-IOV feature, the following minimum Oracle Solaris OS versions are required in the I/O domain:
QLogic cards. At least the Oracle Solaris 11.2 OS
Emulex cards. At least the Oracle Solaris 11.1 SRU 17 OS
You cannot modify the node-wwn or the port-wwn property values while the Fibre Channel
virtual function is in use. However, you can modify the bw-percent property value dynamically
even when the Fibre Channel virtual function is in use.
This automatic allocation method produces unique WWNs when the control domains of all
systems that are connected to the same Fibre Channel fabric are also connected by Ethernet
and are part of the same multicast domain. If you cannot meet this requirement, you must
manually assign WWNs that are guaranteed to be unique on the SAN.
Manual World-Wide Name Allocation
You can construct unique WWNs by using any method. This section describes how to create
WWNs from the Logical Domains Manager manual MAC address allocation pool. You must
guarantee the uniqueness of the WWNs you allocate.
Logical Domains Manager has a pool of 256,000 MAC addresses that are available for
manual allocation in the 00:14:4F:FC:00:00 - 00:14:4F:FF:FF:FF range.
The following example shows the port-wwn and node-wwn property values based on the
00:14:4F:FC:00:01 MAC address:
port-wwn = 10:00:00:14:4F:FC:00:01
node-wwn = 20:00:00:14:4F:FC:00:01
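For example, the following sketch manually assigns those values to an illustrative virtual function:
primary# ldm set-io port-wwn=10:00:00:14:4F:FC:00:01 node-wwn=20:00:00:14:4F:FC:00:01 /SYS/MB/PCIE7/IOVFC.PF0.VF0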
Note It is best to manually assign the WWNs to ensure predictable configuration of the SAN
storage.
You must use the manual WWN allocation method when all systems are not connected to
the same multicast domain by Ethernet. You can also use this method to guarantee that the
same WWNs are used when Fibre Channel virtual functions are destroyed and re-created.
2 If I/O virtualization for the bus that has the physical function is not enabled already, enable it.
Perform this step only if I/O virtualization is not enabled already for the bus that has the
physical function.
See How to Enable I/O Virtualization for a PCIe Bus on page 87.
3 Create a single virtual function or multiple virtual functions from a physical function either
dynamically or statically.
After you create one or more virtual functions, you can assign them to a guest domain.
Dynamic method:
To create multiple virtual functions from a physical function all at the same time, use the
following command:
primary# ldm create-vf -n number | max pf-name
Use the ldm create-vf -n max command to create all the virtual functions for that
physical function at one time. This command automatically allocates the port and node
WWNs for each virtual function and sets the bw-percent property to the default value,
which is 0. This value specifies that a fair share of the bandwidth is allocated to all virtual
functions.
Tip Create all virtual functions for the physical function at once. If you want to
manually assign WWNs, first create all of the virtual functions and then use the ldm
set-io command to manually assign your WWN values for each virtual function. This
technique minimizes the number of state transitions when creating virtual functions
from a physical function.
You can use either the path name or the pseudonym name to specify virtual functions.
However, the recommended practice is to use the pseudonym name.
To create one virtual function from a physical function, use the following command:
ldm create-vf [bw-percent=value] [port-wwn=value node-wwn=value] pf-name
You can also manually specify Fibre Channel class-specific property values.
Note Sometimes a newly created virtual function is not available for immediate use while
the OS probes for IOV devices. Use the ldm list-io command to determine whether the
parent physical function and its child virtual functions have the INV value in the Status
column. If they have this value, wait until the ldm list-io output no longer shows the INV
value in the Status column (about 45 seconds) before you use that physical function or any
of its child virtual functions. If this status persists, there is a problem with the device.
A device status might be INV immediately following a root domain reboot (including that of
the primary) or immediately after you use the ldm create-vf or ldm destroy-vf
command.
Static method:
b. Create a single virtual function or multiple virtual functions from a physical function.
Use the same commands as shown previously to dynamically create the virtual
functions.
Example 7-18 Displaying Information About the Fibre Channel Physical Function
This example shows information about the /SYS/MB/PCIE7/IOVFC.PF0 physical function:
This physical function is from a board in a PCIe slot, PCIE7.
The IOVFC string indicates that the physical function is a Fibre Channel SR-IOV device.
The following command shows more details about the specified physical function. The maxvfs
value indicates the maximum number of virtual functions that is supported by the device.
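Such a command might take the following form:
primary# ldm list-io -l /SYS/MB/PCIE7/IOVFC.PF0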
Example 7-19 Dynamically Creating a Fibre Channel Virtual Function Without Setting Optional Properties
This example dynamically creates a virtual function without setting any optional properties. In
this case, the ldm create-vf command automatically allocates the default bandwidth
percentage, port world-wide name (WWN), and node WWN values.
Ensure that I/O virtualization is enabled on the pci_1 PCIe bus. See How to Enable I/O
Virtualization for a PCIe Bus on page 87.
You can use the ldm create-vf command to create all the virtual functions from the
/SYS/MB/PCIE7/IOVFC.PF0 physical function.
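The command might take the following form:
primary# ldm create-vf -n max /SYS/MB/PCIE7/IOVFC.PF0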
Example 7-20 Dynamically Creating a Fibre Channel Virtual Function and Setting Properties
This example dynamically creates a virtual function while setting the bw-percent property
value to 25 and specifies the port and node WWNs.
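The command might take the following form, where the WWN values are illustrative:
primary# ldm create-vf bw-percent=25 port-wwn=10:00:00:14:4F:FC:00:01 node-wwn=20:00:00:14:4F:FC:00:01 /SYS/MB/PCIE7/IOVFC.PF0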
Example 7-21 Statically Creating a Fibre Channel Virtual Function Without Setting Optional Properties
This example statically creates a virtual function without setting any optional properties. In this
case, the ldm create-vf command automatically allocates the default bandwidth percentage,
port world-wide name (WWN), and node WWN values.
First you initiate a delayed reconfiguration on the rootdom1 domain. Then, enable I/O
virtualization on the pci_1 PCIe bus. Because the pci_1 bus has already been assigned to the
rootdom1 root domain, use the ldm set-io command to enable I/O virtualization.
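A sketch of these two commands:
primary# ldm start-reconf rootdom1
primary# ldm set-io iov=on pci_1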
Now, you can use the ldm create-vf command to create all the virtual functions from the
/SYS/MB/PCIE7/IOVFC.PF0 physical function.
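For example, a sketch that uses the -n max option to create all the virtual functions at one time:
primary# ldm create-vf -n max /SYS/MB/PCIE7/IOVFC.PF0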
------------------------------------------------------------------------------
Notice: The rootdom1 domain is in the process of a delayed reconfiguration.
Any changes made to the rootdom1 domain will only take effect after it reboots.
------------------------------------------------------------------------------
Created new vf: /SYS/MB/PCIE7/IOVFC.PF0.VF0
Created new vf: /SYS/MB/PCIE7/IOVFC.PF0.VF1
Created new vf: /SYS/MB/PCIE7/IOVFC.PF0.VF2
Created new vf: /SYS/MB/PCIE7/IOVFC.PF0.VF3
Created new vf: /SYS/MB/PCIE7/IOVFC.PF0.VF4
Created new vf: /SYS/MB/PCIE7/IOVFC.PF0.VF5
Created new vf: /SYS/MB/PCIE7/IOVFC.PF0.VF6
Created new vf: /SYS/MB/PCIE7/IOVFC.PF0.VF7
Finally, reboot the rootdom1 root domain to make the changes take effect, in one of the
following ways:
If rootdom1 is a non-primary root domain:
primary# ldm stop-domain -r rootdom1
If rootdom1 is the primary domain:
primary# shutdown -i6 -g0 -y
2 Destroy a single virtual function or multiple virtual functions either dynamically or statically.
Dynamic method:
To destroy all of the virtual functions from a physical function at one time, use the
following command:
primary# ldm destroy-vf -n number | max pf-name
You can use either the path name or the pseudonym name to specify virtual functions.
However, the recommended practice is to use the pseudonym name.
Use the ldm destroy-vf -n max command to destroy all the virtual functions for that
physical function at one time.
If you specify number as an argument to the -n option, the last number virtual functions
are destroyed. This method is preferred because it performs the operation with only one
physical function device driver state transition.
Static method:
To destroy all of the virtual functions from the specified physical function at the same
time, use the following command:
primary# ldm destroy-vf -n number | max pf-name
You can use either the path name or the pseudonym name to specify virtual
functions. However, the recommended practice is to use the pseudonym name.
Example 7-22 Dynamically Destroying Multiple Fibre Channel SR-IOV Virtual Functions
This example shows the results of destroying all the virtual functions from the
/SYS/MB/PCIE5/IOVFC.PF1 physical function. The ldm list-io output shows that the physical
function has eight virtual functions. The ldm destroy-vf -n max command destroys all the
virtual functions, and the final ldm list-io output shows that none of the virtual functions
remain.
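A condensed sketch of the destroy command (the surrounding ldm list-io output is omitted):
primary# ldm destroy-vf -n max /SYS/MB/PCIE5/IOVFC.PF1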
If you cannot use this dynamic method, use the static method instead. See Static SR-IOV on
page 84.
You can use the ldm set-io command to modify the bw-percent, port-wwn, and node-wwn
properties.
You can dynamically change only the bw-percent property while the virtual functions are
assigned to a domain.
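For example, a sketch that changes the bandwidth percentage of a virtual function (the virtual
function name is illustrative):
primary# ldm set-io bw-percent=50 /SYS/MB/PCIE7/IOVFC.PF0.VF0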
1 Identify the virtual function that you want to add to an I/O domain.
primary# ldm list-io
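The dynamic assignment itself might look like the following sketch (the virtual function and
domain names are placeholders):
primary# ldm add-io /SYS/MB/PCIE7/IOVFC.PF0.VF0 ldg2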
The device path name for the virtual function in the domain is the path shown in the
list-io -l output.
If you cannot add the virtual function dynamically, use the static method:
Caution Before removing the virtual function from the domain, ensure that it is not critical for
booting that domain.
1 Identify the virtual function that you want to remove from an I/O domain.
primary# ldm list-io
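The dynamic removal might look like the following sketch (the virtual function name is a
placeholder):
primary# ldm remove-io /SYS/MB/PCIE7/IOVFC.PF0.VF0 ldg2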
If the command succeeds, the virtual function is removed from the ldg2 domain. When ldg2 is
restarted, the specified virtual function no longer appears in that domain.
If you cannot remove the virtual function dynamically, use the static method:
ldg2# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c2d0 <Unknown-Unknown-0001-25.00GB>
/virtual-devices@100/channel-devices@200/disk@0
1. c3t21000024FF4C4BF8d0 <SUN-COMSTAR-1.0-10.00GB>
/pci@340/pci@1/pci@0/pci@6/SUNW,emlxs@0,2/fp@0,0/ssd@w21000024ff4c4bf8,0
Specify disk (enter its number): ^D
ldg2#
I/O Domain Resiliency
The following diagrams show and describe what happens when one of the configured root
domains fails and what happens when the root domain returns to service.
In this configuration, the virtual function could be a virtual network device or a virtual storage
device, which means that the I/O domain can be configured with any combination of virtual
functions or virtual devices.
You can create a configuration where you have both resilient and non-resilient I/O domains.
For an example, see Example Using Resilient and Non-Resilient Configurations on
page 140.
Note The Oracle Solaris 10 OS does not provide I/O domain resiliency.
Ensure that the I/O domain, root domain, service domain, and primary domain run at least the
Oracle Solaris 11.2 SRU 8 OS and the Logical Domains Manager 3.2 software.
Note If you add any devices to the I/O domain that are not supported for resiliency, that
domain is no longer resilient. So, reset the failure-policy property value to stop, reset, or
panic.
For information about domain dependencies, see Configuring Domain Dependencies on
page 367.
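For example, a sketch of setting the failure policy on the root domain (the domain name is a
placeholder):
primary# ldm set-domain failure-policy=reset root-domain-name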
2 On the I/O domain, set the master property to the name of the root domain.
primary# ldm set-domain master=root-domain-name I/O-domain-name
Example 7-27 Using IPMP to Configure Multipathing With Ethernet SR-IOV Functions
This example shows how to use IPMP to configure network virtual-function devices for a
resilient I/O domain. For more information, see Administering TCP/IP Networks, IPMP, and IP
Tunnels in Oracle Solaris 11.2.
1. Identify two Ethernet SR-IOV physical functions that are assigned to different root
domains.
In this example, the root-1 and root-2 root domains have Ethernet SR-IOV physical
functions.
primary# ldm list-io | grep root-1 | grep PF
/SYS/PCI-EM8/IOVNET.PF0 PF pci_1 root-1
primary# ldm list-io | grep root-2 | grep PF
/SYS/RIO/NET2/IOVNET.PF0 PF pci_2 root-2
2. Create two Ethernet virtual functions on each of the specified physical functions.
primary# ldm create-vf /SYS/PCI-EM8/IOVNET.PF0
Created new vf: /SYS/PCI-EM8/IOVNET.PF0.VF0
primary# ldm create-vf /SYS/RIO/NET2/IOVNET.PF0
Created new vf: /SYS/RIO/NET2/IOVNET.PF0.VF0
3. Assign the Ethernet virtual functions to the io-1 I/O domain.
primary# ldm add-io /SYS/PCI-EM8/IOVNET.PF0.VF0 io-1
primary# ldm add-io /SYS/RIO/NET2/IOVNET.PF0.VF0 io-1
4. Configure the Ethernet virtual functions into an IPMP group on the I/O domain.
a. Identify the newly added network devices, net1 and net2, on the I/O domain.
io-1# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet up 0 unknown vnet0
net1 Ethernet up 1000 full igbvf0
net2 Ethernet up 1000 full igbvf1
b. Create IP interfaces for the newly added network devices.
io-1# ipadm create-ip net1
io-1# ipadm create-ip net2
c. Create the ipmp0 IPMP group for the two network interfaces.
io-1# ipadm create-ipmp -i net1 -i net2 ipmp0
d. Assign an IP address to the IPMP group.
This example configures the DHCP option.
io-1# ipadm create-addr -T dhcp ipmp0/v4
e. Check the status of the IPMP group interface.
io-1# ipmpstat -g
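The output might resemble the following sketch (the values shown are illustrative):
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       ipmp0       ok        --        net1 net2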
Example 7-28 Using MPxIO to Configure Multipathing With Fibre Channel SR-IOV Functions
This example shows how to use MPxIO to configure Fibre Channel virtual-function devices for
a resilient I/O domain. For more information, see Managing SAN Devices and Multipathing in
Oracle Solaris 11.2.
1. Identify two Fibre Channel SR-IOV physical functions that are assigned to different root
domains.
In this example, the root-1 and root-2 root domains have Fibre Channel SR-IOV physical
functions.
primary# ldm list-io | grep root-1 | grep PF
/SYS/PCI-EM4/IOVFC.PF0 PF pci_1 root-1
primary# ldm list-io | grep root-2 | grep PF
/SYS/PCI-EM15/IOVFC.PF0 PF pci_2 root-2
2. Create two virtual functions on each of the specified physical functions.
For more information, see How to Create a Fibre Channel SR-IOV Virtual Function on
page 124.
primary# ldm create-vf port-wwn=10:00:00:14:4f:fc:60:00 \
node-wwn=20:00:00:14:4f:fc:60:00 /SYS/PCI-EM4/IOVFC.PF0
Created new vf: /SYS/PCI-EM4/IOVFC.PF0.VF0
primary# ldm create-vf port-wwn=10:00:00:14:4f:fc:70:00 \
node-wwn=20:00:00:14:4f:fc:70:00 /SYS/PCI-EM15/IOVFC.PF0
Created new vf: /SYS/PCI-EM15/IOVFC.PF0.VF0
3. Add the newly created virtual functions to the io-1 I/O domain.
primary# ldm add-io /SYS/PCI-EM4/IOVFC.PF0.VF0 io-1
primary# ldm add-io /SYS/PCI-EM15/IOVFC.PF0.VF0 io-1
4. Determine whether MPxIO is enabled on the I/O domain by using the prtconf -v
command.
If the output for the fp device includes the following device property setting, MPxIO is
enabled:
mpxio-disable="no"
If the mpxio-disable property is set to yes, update the property value to no in the
/etc/driver/drv/fp.conf file and then reboot the I/O domain.
If the mpxio-disable device property does not appear in the prtconf -v output, add the
mpxio-disable="no" entry to the /etc/driver/drv/fp.conf file and then reboot the I/O
domain.
5. Check the status of the MPxIO group.
io-1# mpathadm show LU
Paths:
Initiator Port Name: 100000144ffc6000
Target Port Name: 201700a0b82a3846
Override Path: NA
Path State: OK
Disabled: no
The following figure shows that I/O domain A and I/O domain C are not resilient because
neither uses multipathing. I/O domain A has a virtual function and I/O domain C has a direct
I/O device.
I/O domain B and I/O domain D are resilient. I/O domains A, B, and D depend on root domain
A. I/O domains B and D depend on root domain B. I/O domain C depends on root domain C.
If root domain A is interrupted, I/O domain A is interrupted as well. I/O domains B and D,
however, fail over to alternate paths and continue to run applications. If root domain C is
interrupted, I/O domain C fails in the way specified by the failure-policy property value of
root domain C.
Rebooting the Root Domain With Non-Resilient I/O Domains Configured
As with PCIe slots in the I/O domain, the concerns that are described in Rebooting the Root
Domain With PCIe Endpoints Configured on page 149 also pertain to the virtual functions that
are assigned to an I/O domain.
Note An I/O domain cannot start if the associated root domain is not running.
C H A P T E R 8
Creating an I/O Domain by Assigning PCIe Endpoint Devices
The direct I/O (DIO) feature enables you to create more I/O domains than the number of PCIe buses in a
system. The possible number of I/O domains is now limited only by the number of PCIe
endpoint devices.
Note Because root domains cannot have dependencies on other root domains, a root domain
that owns a PCIe bus cannot have its PCIe endpoint devices or SR-IOV virtual functions
assigned to another root domain. However, you can assign a PCIe endpoint device or virtual
function from a PCIe bus to the root domain that owns that bus.
The following diagram shows that the PCIe endpoint device, PCIE3, is assigned to an I/O
domain. Both bus pci_0 and the switch in the I/O domain are virtual. The PCIE3 endpoint
device is no longer accessible in the primary domain.
In the I/O domain, the pci_0 block and the switch are a virtual root complex and a virtual PCIe
switch, respectively. This block and switch are similar to the pci_0 block and the switch in the
primary domain. In the primary domain, the devices in slot PCIE3 are a shadow form of the
original devices and are identified as SUNW,assigned.
Caution You cannot use Oracle Solaris hot-plug operations to hot-remove a PCIe endpoint
device after that device is removed from the primary domain by using the ldm remove-io
command. For information about replacing or removing a PCIe endpoint device, see Making
PCIe Hardware Changes on page 150.
Use the ldm list-io command to list the PCIe endpoint devices.
Though the DIO feature permits any PCIe card in a slot to be assigned to an I/O domain, only
certain PCIe cards are supported. See Direct I/O Hardware and Software Requirements on
page 145.
Caution PCIe cards that have a bridge are not supported. PCIe function-level assignment is also
not supported. Assigning an unsupported PCIe card to an I/O domain might result in
unpredictable behavior.
The following items describe important details about the DIO feature:
This feature is enabled only when all the software requirements are met. See Direct I/O
Hardware and Software Requirements on page 145.
Only PCIe endpoints that are connected to a PCIe bus assigned to a root domain can be
assigned to another domain with the DIO feature.
I/O domains that use DIO have access to the PCIe endpoint devices only when the root
domain is running.
Rebooting the root domain affects I/O domains that have PCIe endpoint devices. See
Rebooting the Root Domain With PCIe Endpoints Configured on page 149. The root
domain also performs the following tasks:
Initializes and manages the PCIe bus.
Handles all bus errors that are triggered by the PCIe endpoint devices that are assigned
to I/O domains. Note that the root domain receives all PCIe bus-related errors.
Note The SPARC T7 series server and SPARC M7 series server have an I/O controller that
provides several PCIe buses and you can assign PCIe cards to different domains by using the
PCIe bus assignment feature. For information, see Chapter 6, Creating a Root Domain by
Assigning PCIe Buses.
Software Requirements. To use the DIO feature, the following domains must run the
supported OS:
Root domain. At least the Oracle Solaris 11.3 OS.
The recommended practice is for all domains to run at least the Oracle Solaris 10 1/13
OS plus the required patches in Fully Qualified Oracle Solaris OS Versions in Oracle
VM Server for SPARC 3.3 Installation Guide or the Oracle Solaris 11.3 OS.
I/O domain. At least the Oracle Solaris 11 OS. Note that additional feature support is
included in more recent Oracle Solaris 11 releases.
Note All PCIe cards that are supported on a platform are supported in the root domains. See
the documentation for your platform for the list of supported PCIe cards. However, only direct
I/O-supported PCIe cards can be assigned to I/O domains.
To add or remove PCIe endpoint devices by using the direct I/O feature, you must first enable
I/O virtualization on the PCIe bus itself.
You can use the ldm set-io or ldm add-io command to set the iov property to on. You can
also use the ldm add-domain or ldm set-domain command to set the rc-add-policy property
to iov. See the ldm(1M) man page.
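For example (the bus and domain names are placeholders):
primary# ldm set-io iov=on pci_1
primary# ldm set-domain rc-add-policy=iov domain-name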
Rebooting the root domain affects direct I/O, so plan your direct I/O configuration
changes carefully: group the direct I/O-related changes to the root domain together so that
you minimize the number of root domain reboots.
Assignment or removal of a PCIe endpoint device to any non-root domain is permitted only
when that domain is either stopped or inactive.
Note The Fujitsu M10 server supports the dynamic reconfiguration of PCIe endpoint devices.
You can assign or remove PCIe endpoint devices without rebooting the root domain or
stopping the I/O domain.
For up-to-date information about this feature, see Fujitsu M10/SPARC M10 Systems System
Operation and Administration Guide for your model at http://www.fujitsu.com/
global/services/computing/server/sparc/downloads/manual/.
Note The direct I/O feature is not supported on the SPARC M7 series server and SPARC T7
series server. Instead, use the PCIe bus assignment feature. See Chapter 6, Creating a Root
Domain by Assigning PCIe Buses.
SPARC systems, up to and including the SPARC T5 and SPARC M6 platforms, provide a finite
number of interrupts, so Oracle Solaris limits the number of interrupts that each device can use.
The default limit should match the needs of a typical system configuration but you might need
to adjust this value for certain system configurations. For more information, see Adjusting the
Interrupt Limit on page 380.
When in a delayed reconfiguration, you can continue to add or remove more devices and then
reboot the root domain only one time to make all the changes take effect.
For an example, see How to Create an I/O Domain by Assigning a PCIe Endpoint Device on
page 152.
You must take the following general steps to plan and perform a DIO device configuration:
1. Understand and record your system hardware configuration.
Specifically, record information about the part numbers and other details of the PCIe cards
in the system.
Use the ldm list-io -l and prtdiag -v commands to obtain the information and save it
for future reference.
2. Determine which PCIe endpoint devices are required to be in the primary domain.
For example, determine the PCIe endpoint devices that provide access to the following:
Boot disk device
Network device
Other devices that the primary domain offers as services
3. Remove all PCIe endpoint devices that you might use in I/O domains.
This step helps you to avoid performing subsequent reboot operations on the root domain,
because reboots affect I/O domains.
Use the ldm remove-io command to remove the PCIe endpoint devices. Use pseudonyms
rather than device paths to specify the devices to the remove-io and add-io subcommands.
Note After you have removed all the devices you want during a delayed reconfiguration,
you need to reboot the root domain only one time to make all the changes take effect.
Rebooting the Root Domain With PCIe Endpoints Configured
The behavior of I/O domains with PCIe endpoint devices is unpredictable when the root
domain is rebooted while those I/O domains are running. For instance, I/O domains with PCIe
endpoint devices might panic during or after the reboot. Upon reboot of the root domain, you
would need to manually stop and start each domain.
Note that if the I/O domain is resilient, it can continue to operate even if the root domain that is
the owner of the PCIe bus becomes unavailable. See I/O Domain Resiliency on page 134.
Note An I/O domain cannot start if the associated root domain is not running.
EXAMPLE 8-1 Configuring Failure Policy Dependencies for a Configuration With a Non-primary Root
Domain and I/O Domains
The following example describes how you can configure failure policy dependencies in a
configuration that has a non-primary root domain and I/O domains.
In this example, ldg1 is a non-primary root domain. ldg2 is an I/O domain that has either PCIe
SR-IOV virtual functions or PCIe endpoint devices assigned from a root complex that is owned
by the ldg1 domain.
Making PCIe Hardware Changes
If so, no action is required to automatically assign the new card to the current I/O domain.
If not, first remove that PCIe card from the I/O domain by using the ldm remove-io
command. Next, use the ldm add-io command to reassign that PCIe card to the root
domain. Then, physically replace the PCIe card you assigned to the root domain with a
different PCIe card. These steps enable you to avoid a configuration that is unsupported by
the DIO feature.
Note This procedure does not apply when the PCIe card is on a root complex owned by a
non-primary root domain. Instead, see How to Replace PCIe Direct I/O Cards Assigned to an
Oracle VM Server for SPARC Guest Domain (Doc ID 1684273.1) (https://
support.oracle.com/
epmos/faces/
DocumentDisplay?_afrLoop=226878266536565&id=1684273.1&_adf.ctrl-state=bo9fbmr1n_49).
1 Stop the guest domain that has the PCIe slot assigned to it.
primary# ldm stop domain-name
3 Stop the guest domains that have PCIe slots and SR-IOV virtual functions assigned to them.
primary# ldm stop domain-name
Note You do not need to stop guest domains that have PCIe buses assigned to them because
they might be providing alternate paths to network and disk devices to the guest domains.
4 Initiate a delayed reconfiguration on the primary domain so that you can assign this slot to it.
primary# ldm start-reconf primary
8 After the card is replaced, perform the following steps if you must reassign this same PCIe slot to
the guest domain:
c. Reboot the primary domain to cause the removal of the PCIe slot to take effect.
primary# shutdown -i6 -g0 -y
e. Start the guest domains to which you want to assign PCIe slots and SR-IOV virtual functions.
primary# ldm start domain-name
Creating an I/O Domain by Assigning a PCIe Endpoint Device
Caution The primary domain loses access to the on-board DVD device if you assign the
/SYS/MB/SASHBA1 slot on a SPARC T3-1 or a SPARC T4-1 system to a DIO domain.
The SPARC T3-1 and SPARC T4-1 systems include two DIO slots for on-board storage, which
are represented by the /SYS/MB/SASHBA0 and /SYS/MB/SASHBA1 paths. In addition to hosting
multiheaded on-board disks, the /SYS/MB/SASHBA1 slot hosts the on-board DVD device. So, if
you assign /SYS/MB/SASHBA1 to a DIO domain, the primary domain loses access to the
on-board DVD device.
The SPARC T3-2 and SPARC T4-2 systems have a single SASHBA slot that hosts all on-board
disks as well as the on-board DVD device. So, if you assign SASHBA to a DIO domain, the
on-board disks and the on-board DVD device are loaned to the DIO domain and unavailable to
the primary domain.
For an example of adding a PCIe endpoint device to create an I/O domain, see Planning PCIe
Endpoint Device Configuration on page 147.
Note In this release, use the DefaultFixed NCP to configure datalinks and network interfaces
on Oracle Solaris 11 systems.
Ensure that the DefaultFixed NCP is enabled by using the netadm list command. See
Chapter 7, Using Datalink and Interface Configuration Commands on Profiles, in Oracle
Solaris Administration: Network Interfaces and Network Virtualization .
1 Identify and archive the devices that are currently installed on the system.
The output of the ldm list-io -l command shows how the I/O devices are currently
configured. You can obtain more detailed information by using the prtdiag -v command.
Note After the devices are assigned to I/O domains, the identity of the devices can be
determined only in the I/O domains.
[pci@500/pci@1/pci@0/pci@6]
/SYS/MB/PCIE9 PCIE pci_1 primary EMP
[pci@500/pci@1/pci@0/pci@0]
/SYS/MB/NET2 PCIE pci_1 primary OCC
[pci@500/pci@1/pci@0/pci@5]
network@0
network@0,1
ethernet@0,80
/SYS/MB/NET0/IOVNET.PF0 PF pci_0 primary
[pci@400/pci@1/pci@0/pci@4/network@0]
maxvfs = 7
/SYS/MB/NET0/IOVNET.PF1 PF pci_0 primary
[pci@400/pci@1/pci@0/pci@4/network@0,1]
maxvfs = 7
/SYS/MB/PCIE5/IOVNET.PF0 PF pci_1 primary
[pci@500/pci@2/pci@0/pci@0/network@0]
maxvfs = 63
/SYS/MB/PCIE5/IOVNET.PF1 PF pci_1 primary
[pci@500/pci@2/pci@0/pci@0/network@0,1]
maxvfs = 63
/SYS/MB/NET2/IOVNET.PF0 PF pci_1 primary
[pci@500/pci@1/pci@0/pci@5/network@0]
maxvfs = 7
/SYS/MB/NET2/IOVNET.PF1 PF pci_1 primary
[pci@500/pci@1/pci@0/pci@5/network@0,1]
maxvfs = 7
2 Determine the device path of the boot disk that must be retained.
See Step 2 in How to Create a Root Domain by Assigning a PCIe Bus on page 72.
6 Remove the PCIe endpoint devices that you might use in I/O domains.
In this example, you can remove the PCIE2, PCIE3, PCIE4, and PCIE5 endpoint devices because
they are not being used by the primary domain.
Caution Do not remove the devices that are used or required by the primary domain. Do
not remove a bus that has devices that are used by a domain, such as network ports or
usbecm devices.
If you mistakenly remove the wrong devices, use the ldm cancel-reconf primary
command to cancel the delayed reconfiguration on the primary domain.
You can remove multiple devices at one time to avoid multiple reboots.
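The removal within a delayed reconfiguration might look like the following sketch (the slot
names follow this example):
primary# ldm start-reconf primary
primary# ldm remove-io /SYS/MB/PCIE2 primary
primary# ldm remove-io /SYS/MB/PCIE3 primary
primary# ldm remove-io /SYS/MB/PCIE4 primary
primary# ldm remove-io /SYS/MB/PCIE5 primary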
c. Reboot the system to reflect the removal of the PCIe endpoint devices.
primary# shutdown -i6 -g0 -y
7 Log in to the primary domain and verify that the PCIe endpoint devices are no longer assigned
to the domain.
primary# ldm list-io
NAME TYPE BUS DOMAIN STATUS
---- ---- --- ------ ------
niu_0 NIU niu_0 primary
Note The ldm list-io -l output might show SUNW,assigned-device for the PCIe endpoint
devices that were removed. Actual information is no longer available from the primary domain,
but the domain to which the device is assigned has this information.
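The assignment that precedes this verification might look like the following sketch (the slot
name is illustrative; the target domain must be stopped or inactive when you assign the device):
primary# ldm stop ldg1
primary# ldm add-io /SYS/MB/PCIE3 ldg1
primary# ldm start ldg1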
9 Log in to the ldg1 domain and verify that the device is available for use.
Verify that the network device is available and then configure the network device for use in the
domain.
ldg1# dladm show-phys
LINK MEDIA STATE SPEED DUPLEX DEVICE
net0 Ethernet unknown 0 unknown nxge0
net1 Ethernet unknown 0 unknown nxge1
net2 Ethernet unknown 0 unknown nxge2
net3 Ethernet unknown 0 unknown nxge3
You can perform direct I/O and SR-IOV operations on PCIe buses that are assigned to any root
domain. You can now perform the following operations for all root domains, including
non-primary root domains:
Show the status of PCIe slots
Show the SR-IOV physical functions that are present
Assign a PCIe slot to an I/O domain or a root domain
Remove a PCIe slot from an I/O domain or a root domain
Create a virtual function from its physical function
Destroy a virtual function
Assign a virtual function to another domain
Remove a virtual function from another domain
The Logical Domains Manager obtains the PCIe endpoint devices and SR-IOV physical
function devices from the Logical Domains agents that run in the non-primary root domains.
This information is cached after it is first discovered and remains available while the root
domain is down, but only until the root domain is booted again.
Non-primary Root Domain Requirements
You can perform direct I/O and SR-IOV operations only when the root domain is active.
Logical Domains Manager operates on the actual devices that are present at that time. The
cached data might be refreshed when the following operations occur:
The Logical Domains agent is restarted in the specified root domain
A hardware change, such as a hot-plug operation, is performed in the specified root domain
Use the ldm list-io command to view the PCIe endpoint device status. The output also shows
the sub-devices and physical function devices from the root complexes that are owned by each
non-primary root domain.
You can apply the following commands to any root domain:
ldm add-io
ldm remove-io
ldm set-io
ldm create-vf
ldm destroy-vf
ldm start-reconf
ldm cancel-reconf
Delayed reconfiguration support has been extended to include non-primary root domains.
However, it can be used only to run the ldm add-io, ldm remove-io, ldm set-io, ldm
create-vf, and ldm destroy-vf commands. The delayed reconfiguration can be used for any
operation that cannot be completed by using dynamic operations, such as the following:
Performing direct I/O operations
Creating and destroying virtual functions from a physical function that does not meet the
dynamic SR-IOV configuration requirements
Caution Plan ahead to minimize the number of reboots of the root domain, which minimizes
downtime.
Non-primary Root Domain Limitations
Hardware Requirements. Some PCIe cards can be used, but not for DIO and SR-IOV. To
determine which cards you can use on your platform, see your platform's hardware
documentation.
Firmware Requirements.
SPARC T4 platforms must run at least version 8.4.0.a of the system firmware.
SPARC T5 servers, SPARC M5 servers, and SPARC M6 servers must run at least version
9.1.0.x of the system firmware.
SPARC T7 series servers and SPARC M7 series servers must run at least version 9.4.3 of the
system firmware.
Fujitsu M10 servers must run at least version XCP2210 of the system firmware.
Software Requirements.
Non-primary domains must run at least the Oracle Solaris 11.2 OS.
The reboot of a root domain affects any I/O domain that has a device from the PCIe buses
that the root domain owns. See Rebooting the Root Domain With PCIe Endpoints
Configured on page 149.
You cannot assign an SR-IOV virtual function or a PCIe slot from one root domain to
another root domain. This limitation prevents circular dependencies.
The following SPARC T4-2 I/O configuration shows that bus pci_1 already has been removed
from the primary domain.
The following listing shows that the guest domains are in the bound state:
Non-primary Root Domain Examples
The following ldm add-io command adds the pci_1 bus to the rootdom1 domain with I/O
virtualization enabled for that bus. The ldm start command starts the rootdom1 domain.
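A sketch of these commands:
primary# ldm add-io iov=on pci_1 rootdom1
primary# ldm start rootdom1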
If a specified PCIe bus is assigned already to a root domain, use the ldm set-io command to
enable I/O virtualization.
The root domain must be running its OS before you can configure the I/O devices. Connect to
the console of the rootdom1 guest domain and then boot the OS of the rootdom1 root domain if
your guest domains are not already set to autoboot.
The following command shows that the pci_1 PCIe bus and its children are now owned by the
rootdom1 root domain.
The following command produces an error because it attempts to remove a slot from the root
domain while it is still active:
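A sketch of the failing command; it is rejected with an error stating that a delayed
reconfiguration is required:
primary# ldm remove-io /SYS/MB/PCIE7 rootdom1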
The following command shows the correct method of removing a slot by first initiating a
delayed reconfiguration on the root domain.
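A sketch of the sequence:
primary# ldm start-reconf rootdom1
primary# ldm remove-io /SYS/MB/PCIE7 rootdom1
primary# ldm stop-domain -r rootdom1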
The following ldm list-io command verifies that the /SYS/MB/PCIE7 slot is no longer on the
root domain.
The following commands assign the /SYS/MB/PCIE7 slot to the ldg2 domain. The ldm start
command starts the ldg2 domain.
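A sketch of these commands:
primary# ldm add-io /SYS/MB/PCIE7 ldg2
primary# ldm start ldg2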
You can also use the -n option to create the two virtual functions at one time.
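A sketch, assuming the NET2 physical function that appears in the listing below:
primary# ldm create-vf -n 2 /SYS/MB/NET2/IOVNET.PF1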
If you were unable to dynamically create the virtual functions on a given physical function,
initiate a delayed reconfiguration to create them statically.
Connect to the console of the ldg3 domain and then boot its OS.
The following output shows that all the assignments appear as expected. One virtual function is
unassigned so it can be assigned dynamically to the ldg1, ldg2, or ldg3 domain.
# ldm list-io
NAME TYPE BUS DOMAIN STATUS
---- ---- --- ------ ------
pci_0 BUS pci_0 primary IOV
pci_1 BUS pci_1 ldg1 IOV
niu_0 NIU niu_0 primary
niu_1 NIU niu_1 primary
/SYS/MB/PCIE0 PCIE pci_0 primary OCC
/SYS/MB/PCIE2 PCIE pci_0 primary OCC
/SYS/MB/PCIE4 PCIE pci_0 primary OCC
/SYS/MB/PCIE6 PCIE pci_0 primary EMP
/SYS/MB/PCIE8 PCIE pci_0 primary EMP
/SYS/MB/SASHBA PCIE pci_0 primary OCC
/SYS/MB/NET0 PCIE pci_0 primary OCC
/SYS/MB/PCIE1 PCIE pci_1 ldg1 OCC
/SYS/MB/PCIE3 PCIE pci_1 ldg1 OCC
/SYS/MB/PCIE5 PCIE pci_1 ldg1 OCC
/SYS/MB/PCIE7 PCIE pci_1 ldg2 OCC
/SYS/MB/PCIE9 PCIE pci_1 ldg1 EMP
/SYS/MB/NET2 PCIE pci_1 ldg1 OCC
/SYS/MB/NET0/IOVNET.PF0 PF pci_0 primary
/SYS/MB/NET0/IOVNET.PF1 PF pci_0 primary
/SYS/MB/PCIE5/IOVNET.PF0 PF pci_1 ldg1
/SYS/MB/PCIE5/IOVNET.PF1 PF pci_1 ldg1
/SYS/MB/NET2/IOVNET.PF0 PF pci_1 ldg1
/SYS/MB/NET2/IOVNET.PF1 PF pci_1 ldg1
/SYS/MB/PCIE5/IOVNET.PF0.VF0 VF pci_1
/SYS/MB/PCIE5/IOVNET.PF0.VF1 VF pci_1 ldg1
/SYS/MB/NET2/IOVNET.PF1.VF0 VF pci_1 ldg2
/SYS/MB/NET2/IOVNET.PF1.VF1 VF pci_1 ldg3
C H A P T E R 1 0
Using Virtual Disks
This chapter describes how to use virtual disks with Oracle VM Server for SPARC software.
Introduction to Virtual Disks
Note You can refer to a disk either by using /dev/dsk or /dev/rdsk as part of the disk path
name. Either reference produces the same result.
Caution Do not use the d0 device to represent the entire disk. This device represents the entire
disk only when the disk has an EFI label and not a VTOC label. Using the d0 device results in
the virtual disk being a single-slice disk, which might cause you to corrupt the disk label if you
write the beginning of the disk.
Instead, use the s2 slice to virtualize the entire disk. The s2 slice is independent of the label.
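For example, a sketch that exports an entire disk through its s2 slice (the device path and
volume name are illustrative):
primary# ldm add-vdsdev /dev/dsk/c1t48d0s2 vol1@primary-vds0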
The virtual disk back end can be physical or logical. Physical devices can include the following:
Physical disk or disk logical unit number (LUN)
Physical disk slice
Managing Virtual Disks
Each virtual disk of a domain has a unique device number that is assigned when the domain is
bound. If a virtual disk is added with an explicit device number (by setting the id property), the
specified device number is used. Otherwise, the system automatically assigns the lowest device
number available. In that case, the device number assigned depends on how virtual disks were
added to the domain. The device number eventually assigned to a virtual disk is visible in the
output of the ldm list-bindings command when a domain is bound.
When a domain with virtual disks is running the Oracle Solaris OS, each virtual disk appears in
the domain as a c0dn disk device, where n is the device number of the virtual disk.
In the following example, the ldg1 domain has two virtual disks: rootdisk and pdisk.
rootdisk has a device number of 0 (disk@0) and appears in the domain as the disk device c0d0.
pdisk has a device number of 1 (disk@1) and appears in the domain as the disk device c0d1.
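Such a configuration might have been created with explicit id values, as in this sketch (the
volume names are placeholders):
primary# ldm add-vdisk id=0 rootdisk rootdisk@primary-vds0 ldg1
primary# ldm add-vdisk id=1 pdisk pdisk@primary-vds0 ldg1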
Caution If a device number is not explicitly assigned to a virtual disk, its device number can
change when the domain is unbound and is later bound again. In that case, the device name
assigned by the OS running in the domain can also change and break the existing configuration
of the system. This might happen, for example, when a virtual disk is removed from the
configuration of the domain.
A virtual disk back end can be exported multiple times either through the same or different
virtual disk servers. Each exported instance of the virtual disk back end can then be assigned to
either the same or different guest domains.
When a virtual disk back end is exported multiple times, it should not be exported with the
exclusive (excl) option. Specifying the excl option allows the back end to be exported only once.
The back end can be safely exported multiple times as a read-only device with the ro option.
Assigning a virtual disk device to a domain creates an implicit dependency on the domain
providing the virtual disk service. You can view these dependencies or view domains that
depend on the virtual disk service by using the ldm list-dependencies command. See Listing
Domain I/O Dependencies on page 382.
Note A back end is actually exported from the service domain and assigned to the guest
domain when the guest domain (domain-name) is bound.
The following example describes how to add the same virtual disk to two different guest
domains through the same virtual disk service.
1 Export the virtual disk back end two times from a service domain.
primary# ldm add-vdsdev [options={ro,slice}] backend volume1@service-name
primary# ldm add-vdsdev -f [options={ro,slice}] backend volume2@service-name
Note that the second ldm add-vdsdev command uses the -f option to force the second export
of the back end. Use this option when using the same back-end path for both commands and
when the virtual disk servers are located on the same service domain.
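The second step, assigning each exported instance to a guest domain, might look like this
sketch (the disk and domain names are placeholders):
primary# ldm add-vdisk vdisk1 volume1@service-name ldg1
primary# ldm add-vdisk vdisk2 volume2@service-name ldg2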
Virtual Disk Appearance
After a back end is exported from the service domain, you can change the virtual disk options.
primary# ldm set-vdsdev options=[{ro,slice,excl}] volume-name@service-name
After a virtual disk is assigned to a guest domain, you can change the timeout of the virtual disk.
primary# ldm set-vdisk timeout=seconds disk-name domain-name
2 Stop exporting the corresponding back end from the service domain.
primary# ldm rm-vdsdev volume-name@service-name
Caution The SPARC T7 series server and SPARC M7 series server introduce Non-Volatile
Memory Express (NVMe) storage, which can be a disk drive or a Flash Accelerator F160 PCIe
card. This disk type cannot be used as a full disk to build a virtual disk back end.
When using such a disk, use a slice of the disk or the slice option when creating the vdsdev.
For example, after you partition the disk, use one of the following commands.
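A sketch of the two alternatives (the device path and volume name are illustrative):
primary# ldm add-vdsdev /dev/dsk/c2t1d0s0 nvme-vol@primary-vds0
Or:
primary# ldm add-vdsdev options=slice /dev/dsk/c2t1d0s2 nvme-vol@primary-vds0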
Caution Single-slice disks do not have device IDs. If a device ID is required, use a full physical
disk backend.
Full Disk
When a back end is exported to a domain as a full disk, it appears in that domain as a regular
disk with eight slices (s0 to s7). This type of disk is visible with the format(1M) command. The
disk's partition table can be changed using either the fmthard or format command.
A full disk is also visible to the OS installation software and can be selected as a disk onto which
the OS can be installed.
Any back end can be exported as a full disk except physical disk slices, which can be exported
only as single-slice disks.
Single-Slice Disk
When a back end is exported to a domain as a single-slice disk, it appears in that domain as a
regular disk with eight slices (s0 to s7). However, only the first slice (s0) is usable. This type of
disk is visible with the format(1M) command, but the disk's partition table cannot be changed.
A single-slice disk is also visible from the OS installation software and can be selected as a disk
onto which you can install the OS. In that case, if you install the OS using the UNIX File System
(UFS), then only the root partition (/) must be defined, and this partition must use all the disk
space.
Any back end can be exported as a single-slice disk except physical disks, which can be
exported only as full disks.
Virtual Disk Back End Options
Note Prior to the Oracle Solaris 10 10/08 OS release, a single-slice disk appeared as a disk with a
single partition (s0). This type of disk was not visible with the format command. The disk also
was not visible from the OS installation software and could not be selected as a disk device onto
which the OS could be installed.
Note Some drivers do not honor the excl option and will disallow some virtual disk back ends
from being opened exclusively. The excl option is known to work with physical disks and
slices, but the option does not work with files. It might work with pseudo devices, such as disk
volumes. If the driver of the back end does not honor the exclusive open, the back end excl
option is ignored, and the back end is not opened exclusively.
Because the excl option prevents applications running in the service domain from accessing a
back end exported to a guest domain, do not set the excl option in the following situations:
When guest domains are running, if you want to be able to use commands such as format or
luxadm to manage physical disks, then do not export these disks with the excl option.
When you export a Solaris Volume Manager volume, such as a RAID or a mirrored volume,
do not set the excl option. Otherwise, this can prevent Solaris Volume Manager from
starting some recovery operation in case a component of the RAID or mirrored volume
fails. See Using Virtual Disks With Solaris Volume Manager on page 197 for more
information.
If the Veritas Volume Manager (VxVM) is installed in the service domain and Veritas
Dynamic Multipathing (VxDMP) is enabled for physical disks, then physical disks must be
exported without the excl option. Otherwise, the export fails because the
virtual disk server (vds) is unable to open the physical disk device. See Using Virtual Disks
When VxVM Is Installed on page 197 for more information.
If you are exporting the same virtual disk back end multiple times from the same virtual disk
service, see How to Export a Virtual Disk Back End Multiple Times on page 172 for more
information.
By default, the back end is opened non-exclusively. That way the back end still can be used by
applications running in the service domain while it is exported to another domain.
This option is useful when you want to export the raw content of a back end. For example, if you
have a ZFS or Solaris Volume Manager volume where you have already stored data and you
want your guest domain to access this data, then you should export the ZFS or Solaris Volume
Manager volume using the slice option.
For more information about this option, see Virtual Disk Back End on page 176.
Virtual Disk Back End
A physical disk or disk LUN is exported from a service domain by exporting the device that
corresponds to the slice 2 (s2) of that disk without setting the slice option. If you export the
slice 2 of a disk with the slice option, only this slice is exported and not the entire disk.
Caution When configuring virtual disks, ensure that each virtual disk references a distinct
physical (back-end) resource, such as a physical disk, a disk slice, a file, or a volume.
Some disks, such as FibreChannel and SAS, have a dual-ported nature, which means that the
same disk can be referenced by two different paths. Ensure that the paths you assign to different
domains do not refer to the same physical disk.
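The export and assignment steps of this procedure might look like the following sketch (the
device path is illustrative):
primary# ldm add-vdsdev /dev/dsk/c1t48d0s2 pdisk@primary-vds0
primary# ldm add-vdisk pdisk pdisk@primary-vds0 ldg1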
3 After the guest domain is started and running the Oracle Solaris OS, verify that the disk is
accessible and is a full disk.
A full disk is a regular disk that has eight (8) slices.
For example, the disk being checked is c0d1.
ldg1# ls -1 /dev/dsk/c0d1s*
/dev/dsk/c0d1s0
/dev/dsk/c0d1s1
/dev/dsk/c0d1s2
/dev/dsk/c0d1s3
/dev/dsk/c0d1s4
/dev/dsk/c0d1s5
/dev/dsk/c0d1s6
/dev/dsk/c0d1s7
Caution The SPARC T7 series server and SPARC M7 series server introduce Non-Volatile
Memory Express (NVMe) storage, which can be a disk drive or a Flash Accelerator F160 PCIe
card. This disk type cannot be used as a full disk to build a virtual disk back end.
When using such a disk, use a slice of the disk or the slice option when creating the vdsdev.
3 After the guest domain is started and running the Oracle Solaris OS, you can list the disk (c0d13,
for example) and see that the disk is accessible.
ldg1# ls -1 /dev/dsk/c0d13s*
/dev/dsk/c0d13s0
/dev/dsk/c0d13s1
/dev/dsk/c0d13s2
/dev/dsk/c0d13s3
/dev/dsk/c0d13s4
/dev/dsk/c0d13s5
/dev/dsk/c0d13s6
/dev/dsk/c0d13s7
Although there are eight devices, because the disk is a single-slice disk, only the first slice (s0) is
usable.
When a blank file or volume is exported as a full disk, it appears in the guest domain as an
unformatted disk; that is, a disk with no partition. Then you need to run the format command
in the guest domain to define usable partitions and to write a valid disk label. Any I/O to the
virtual disk fails while the disk is unformatted.
Note You must run the format command in the guest domain to create partitions.
1 From the service domain, create a file (fdisk0 for example) to use as the virtual disk.
service# mkfile 100m /ldoms/domain/test/fdisk0
The size of the file defines the size of the virtual disk. This example creates a 100-Mbyte blank
file to get a 100-Mbyte virtual disk.
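The export and assignment steps that follow might look like this sketch (the volume and
service names are placeholders):
primary# ldm add-vdsdev /ldoms/domain/test/fdisk0 fdisk0@primary-vds0
primary# ldm add-vdisk fdisk0 fdisk0@primary-vds0 ldg1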
4 After the guest domain is started and running the Oracle Solaris OS, verify that the disk is
accessible and is a full disk.
A full disk is a regular disk with eight slices.
The following example shows how to list the disk, c0d5, and verify that it is accessible and is a
full disk.
ldg1# ls -1 /dev/dsk/c0d5s*
/dev/dsk/c0d5s0
/dev/dsk/c0d5s1
/dev/dsk/c0d5s2
/dev/dsk/c0d5s3
/dev/dsk/c0d5s4
/dev/dsk/c0d5s5
/dev/dsk/c0d5s6
/dev/dsk/c0d5s7
2 From the control domain, export the corresponding device to that ZFS volume.
primary# ldm add-vdsdev /dev/zvol/dsk/ldoms/domain/test/zdisk0 \
zdisk0@primary-vds0
In this example, the slice option is not set, so the volume is exported as a full disk.
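The next step assigns the virtual disk to the guest domain, for example:
primary# ldm add-vdisk zdisk0 zdisk0@primary-vds0 ldg1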
4 After the guest domain is started and running the Oracle Solaris OS, verify that the disk is
accessible and is a full disk.
A full disk is a regular disk with eight slices.
The following example shows how to list the disk, c0d9, and verify that it is accessible and is a
full disk:
ldg1# ls -1 /dev/dsk/c0d9s*
/dev/dsk/c0d9s0
/dev/dsk/c0d9s1
/dev/dsk/c0d9s2
/dev/dsk/c0d9s3
/dev/dsk/c0d9s4
/dev/dsk/c0d9s5
/dev/dsk/c0d9s6
/dev/dsk/c0d9s7
When a file or volume is exported as a single-slice disk, the system simulates disk partitioning
so that the file or volume appears as a single disk slice. Because the disk partitioning is
simulated, you do not create partitions for that disk.
2 From the control domain, export the corresponding device to that ZFS volume, and set the
slice option so that the volume is exported as a single-slice disk.
primary# ldm add-vdsdev options=slice /dev/zvol/dsk/ldoms/domain/test/zdisk0 \
zdisk0@primary-vds0
4 After the guest domain is started and running the Oracle Solaris OS, you can list the disk (c0d9,
for example) and see that the disk is accessible and is a single-slice disk (s0).
ldg1# ls -1 /dev/dsk/c0d9s*
/dev/dsk/c0d9s0
/dev/dsk/c0d9s1
/dev/dsk/c0d9s2
/dev/dsk/c0d9s3
/dev/dsk/c0d9s4
/dev/dsk/c0d9s5
/dev/dsk/c0d9s6
/dev/dsk/c0d9s7
For information about correctly creating or updating /etc/system property values, see
Updating Property Values in the /etc/system File on page 362.
Note Setting this tunable forces the export of all volumes as single-slice disks, and you
cannot export any volume as a full disk.
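The entry in the /etc/system file would resemble the following line (a sketch, assuming the
vd_volume_force_slice tunable of the vds driver that this section refers to):
set vds:vd_volume_force_slice = 1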
Configuring Virtual Disk Multipathing
If you directly or indirectly export a disk slice which starts on the first block of a physical disk,
you might overwrite the partition table of the physical disk and make all partitions of that disk
inaccessible.
Note that the virtual disk multipathing feature can detect when the service domain cannot
access the back-end storage. In such an instance, the virtual disk driver attempts to access the
back-end storage by another path.
To enable virtual disk multipathing, you must export a virtual disk back end from each service
domain and add the virtual disk to the same multipathing group (mpgroup). The mpgroup is
identified by a name and is configured when you export the virtual disk back end.
The following figure shows a virtual disk multipathing configuration that is used as an example
in the procedure How to Configure Virtual Disk Multipathing on page 184. In this example, a
multipathing group named mpgroup1 is used to create a virtual disk, whose back end is
accessible from two service domains: primary and alternate.
The virtual disk timeout property specifies the amount of time after which an I/O fails when no
service domain is available to process the I/O. This timeout applies to all virtual disks, even
those that use virtual disk multipathing.
As a consequence, setting a virtual disk timeout when virtual disk multipathing is configured
can prevent multipathing from working correctly, especially with a small timeout value. So,
avoid setting a virtual disk timeout for virtual disks that are part of a multipathing group.
1 Export the virtual disk back end from the primary service domain.
primary# ldm add-vdsdev mpgroup=mpgroup1 backend-path1 volume@primary-vds0
backend-path1 is the path to the virtual disk back end from the primary domain.
2 Export the same virtual disk back end from the alternate service domain.
primary# ldm add-vdsdev mpgroup=mpgroup1 backend-path2 volume@alternate-vds0
backend-path2 is the path to the virtual disk back end from the alternate domain.
Note backend-path1 and backend-path2 are paths to the same virtual disk back end, but from
two different domains (primary and alternate). These paths might be the same or different,
depending on the configuration of the primary and alternate domains. The volume name is a
user choice. It might be the same or different for both commands.
Note Although the virtual disk back end is exported several times through different service
domains, you assign only one virtual disk to the guest domain and associate it with the virtual
disk back end through any of the service domains.
Example 10-1 Using an Mpgroup to Add a LUN to the Virtual Disk Service of Both Primary and
Alternate Domains
The following shows how to create a LUN and add it to the virtual disk service for both primary
and alternate domains by using the same mpgroup:
To determine which domain to use first when accessing the LUN, specify the associated path
when adding the disk to the domain.
Create the virtual disk devices:
primary# ldm add-vdsdev mpgroup=ha lun1@primary-vds0
primary# ldm add-vdsdev mpgroup=ha lun1@alternate-vds0
To use the LUN from primary-vds0 first, perform the following command:
primary# ldm add-vdisk disk1 lun1@primary-vds0 gd0
To use the LUN from alternate-vds0 first, perform the following command:
primary# ldm add-vdisk disk1 lun1@alternate-vds0 gd0
Caution When defining a multipathing group (mpgroup), ensure that the virtual disk back ends
that are part of the same mpgroup are effectively the same virtual disk back end. If you add
different back ends into the same mpgroup, you might see some unexpected behavior, and you
can potentially lose or corrupt data stored on the back ends.
Dynamic path selection occurs when the first path in an mpgroup disk is changed by using the
ldm set-vdisk command to set the volume property to a value in the form
volume-name@service-name. An active domain that supports dynamic path selection can switch
to only the selected path. If the updated drivers are not running, this path is selected when the
Oracle Solaris OS reloads the disk instance or at the next domain reboot.
The dynamic path selection feature enables you to dynamically perform the following steps
while the disk is in use:
Specify the disk path to be tried first by the guest domain when attaching the disk
Change the currently active path to the one that is indicated for already attached
multipathing disks
Using the ldm add-vdisk command with an mpgroup disk now specifies the path indicated by
volume-name@service-name as the selected path with which to access the disk.
The selected disk path is listed first in the set of paths provided to the guest domain regardless
of its rank when the associated mpgroup was created.
You can use the ldm set-vdisk command on bound, inactive, and active domains. When used
on active domains, this command permits you to choose only the selected path of the mpgroup
disk.
CD, DVD and ISO Images
The following example shows that the selected path is vol-ldg2@opath-ldg2 and that the
currently used active path is going through the ldg1 domain. You might see this situation if
the selected path could not be used and the second possible path was used instead. Even if
the selected path comes online, the non-selected path continues to be used. To make the first
path active again, re-issue the ldm set-vdisk command to set the volume property to the
name of the path you want.
DISK
NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
disk disk-ldg4@primary-vds0 0 disk@0 primary
tdiskgroup vol-ldg2@opath-ldg2 1 disk@1 ldg2 testdiskgroup
PORT MPGROUP VOLUME MPGROUP SERVER STATE
2 vol-ldg2@opath-ldg2 ldg2 standby
0 vol-ldg1@opath-vds ldg1 active
1 vol-prim@primary-vds0 primary standby
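Re-issuing the selection might look like this sketch (the disk and volume names follow the
listing above; the domain name is a placeholder):
primary# ldm set-vdisk volume=vol-ldg2@opath-ldg2 tdiskgroup domain-name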
If you use the ldm set-vdisk command on an mpgroup disk of a bound domain that does not
run at least the Oracle Solaris 11.2 SRU 1 OS, the operation changes the order of the path
priorities, and the new path can be used first during the next disk attach or reboot, or when the
OBP needs to access the disk.
Note You cannot export the CD or DVD drive itself. You can export only the CD or DVD that
is inside the CD or DVD drive. Therefore, a CD or DVD must be present inside the drive before
you can export it. Also, to be able to export a CD or DVD, that CD or DVD cannot be in use in
the service domain. In particular, the Volume Management file system (volfs) service must not
use the CD or DVD. See How to Export a CD or DVD From the Service Domain to the Guest
Domain on page 188 for instructions on how to remove the device from use by volfs.
When you export a CD, DVD, or an ISO image, it automatically appears as a read-only device in
the guest domain. However, you cannot perform any CD control operations from the guest
domain; that is, you cannot start, stop, or eject the CD from the guest domain. If the exported
CD, DVD, or ISO image is bootable, the guest domain can be booted on the corresponding
virtual disk.
For example, if you export an Oracle Solaris OS installation DVD, you can boot the guest domain
on the virtual disk that corresponds to that DVD and install the guest domain from that DVD.
To do so, when the guest domain reaches the ok prompt, use the following command.
ok boot /virtual-devices@100/channel-devices@200/disk@n:f
Where n is the index of the virtual disk representing the exported DVD.
Note If you export an Oracle Solaris OS installation DVD and boot a guest domain on the virtual
disk that corresponds to that DVD to install the guest domain, then you cannot change the
DVD during the installation. So, you might need to skip any installation step that requests a
different CD/DVD, or you must provide an alternate path to access the requested media.
2 If the volume management daemon is running and online, as in the example in Step 1, do the
following:
a. In the /etc/vold.conf file, comment out the line starting with the following words:
use cdrom drive....
See the vold.conf(4) man page.
c. From the service domain, restart the volume management file system service.
service# svcadm refresh volfs
service# svcadm restart volfs
3 From the service domain, find the disk path for the CD-ROM device.
service# cdrw -l
Looking for CD devices...
Node Connected Device Device type
-----------+----------------+--------
/dev/rdsk/c1t0d0s2 | MATSHITA CD-RW CW-8124 DZ13 | CD Reader/Writer
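The export and assignment might then look like the following sketch (the device path is taken
from the cdrw output above; options=ro makes the export read-only):
primary# ldm add-vdsdev options=ro /dev/dsk/c1t0d0s2 cdrom@primary-vds0
primary# ldm add-vdisk cdrom cdrom@primary-vds0 ldom1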
For example, the following ldm list shows that both the primary and ldom1 domains are
configured:
3 Add the virtual disk for the ISO image to the logical domain.
In this example, the logical domain is ldom1.
primary# ldm add-vdisk s10-dvd dvd-iso@primary-vds0 ldom1
However, in some cases, file systems or applications might not want the I/O operation to block
but rather to fail and report an error if the service domain is down for too long. You can now set
a connection timeout period for each virtual disk, which can then be used to establish a
connection between the virtual disk client on a guest domain and the virtual disk server on the
service domain. When that timeout period is reached, any pending I/O and any new I/O will fail
as long as the service domain is down and the connection between the virtual disk client and
server is not reestablished.
Virtual Disk and SCSI
For information about correctly creating or updating /etc/system property values, see
Updating Property Values in the /etc/system File on page 362.
Note If this tunable is set, it overrides any timeout setting that was made by using the ldm CLI.
Also, the tunable sets the timeout for all virtual disks in the guest domain.
Specify the timeout in seconds. If the timeout is set to 0, the timeout is disabled and I/O is
blocked while the service domain is down (this is the default setting and behavior).
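For example, the following /etc/system line is a sketch that sets a 180-second timeout for all
virtual disks in a guest domain, assuming that the tunable in question is vdc_timeout in the vdc
virtual disk client module:
set vdc:vdc_timeout = 180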
Note SCSI operations are effectively executed by the service domain, which manages the
physical SCSI disk or LUN used as a virtual disk back end. In particular, SCSI reservations are
done by the service domain. Therefore, applications running in the service domain and in guest
domains should not issue SCSI commands to the same physical SCSI disks. Doing so can lead to
an unexpected disk state.
Virtual disks whose back ends are SCSI disks support all format(1M) subcommands. Virtual
disks whose back ends are not SCSI disks do not support some format(1M) subcommands,
such as repair and defect. In that case, the behavior of format(1M) is similar to the behavior
of Integrated Drive Electronics (IDE) disks.
Refer to Oracle Solaris ZFS Administration Guide for more information about using ZFS.
In the following descriptions and examples, the primary domain is also the service domain
where disk images are stored.
Disk images can be stored on ZFS volumes or ZFS files. Creating a ZFS volume, whatever its
size, is quick using the zfs create -V command. On the other hand, ZFS files have to be
created by using the mkfile command. This command can take some time to complete,
especially if the file to be created is quite large, which is often the case when creating a disk
image.
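For example, the following commands are a sketch that creates a 20-Gbyte ZFS volume and a
20-Gbyte ZFS file, respectively; the ldompool pool and file names are illustrative:
primary# zfs create -V 20G ldompool/ldg1/disk0
primary# mkfile 20g /ldompool/ldg1/disk0.img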
Both ZFS volumes and ZFS files can take advantage of ZFS features such as the snapshot and
clone features, but a ZFS volume is a pseudo device while a ZFS file is a regular file.
If the disk image is to be used as a virtual disk onto which an OS is installed, the disk image must
be large enough to accommodate the OS installation requirements. This size depends on the
version of the OS and on the type of installation performed. If you install the Oracle Solaris OS,
you can use a disk size of 20 Gbytes to accommodate any type of installation of any version of
the Oracle Solaris OS.
When the guest domain is started, the ZFS volume or file appears as a virtual disk on which the
Oracle Solaris OS can be installed.
Before you create a snapshot of the disk image, ensure that the disk is not currently in use in the
guest domain so that the data stored on the disk image is consistent. You can ensure that a disk
is not in use in a guest domain in one of the following ways:
Stop and unbind the guest domain. This solution is the safest, and is the only solution
available if you want to create a snapshot of a disk image used as the boot disk of a guest
domain.
Unmount any slices of the disk you want to snapshot that are used in the guest domain, and
ensure that no slice is in use in the guest domain.
In this example, because of the ZFS layout, the command to create a snapshot of the disk image
is the same whether the disk image is stored on a ZFS volume or on a ZFS file.
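For example, the following command is a sketch that creates a snapshot of the disk image of
domain ldg1; the ldompool/ldg1/disk0 dataset name and the version_1 snapshot name are
illustrative:
primary# zfs snapshot ldompool/ldg1/disk0@version_1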
For example, if the disk0 created was the boot disk of domain ldg1, do the following to clone
that disk to create a boot disk for domain ldg2.
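A minimal sketch, assuming the snapshot created in the previous example:
primary# zfs clone ldompool/ldg1/disk0@version_1 ldompool/ldg2/disk0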
Then ldompool/ldg2/disk0 can be exported as a virtual disk and assigned to the new ldg2
domain. The domain ldg2 can directly boot from that virtual disk without having to go through
the OS installation process.
Note The host ID of a domain is not stored on the boot disk, but rather is assigned by the
Logical Domains Manager when you create a domain. Therefore, when you clone a disk image,
the new domain does not keep the host ID of the original domain.
At this point, you have the snapshot of the boot disk image of an unconfigured system.
5 Clone this image to create a new domain that, when first booted, prompts for the system
configuration.
Note The remainder of this section uses a Solaris Volume Manager volume as an example.
However, the discussion also applies to ZFS and VxVM volumes.
The virtual disk in the guest domain (for example, /dev/dsk/c0d2s0) is directly mapped to the
associated volume (for example, /dev/md/dsk/d0), and data stored on the virtual disk from the
guest domain is stored directly on the associated volume with no extra metadata. Data stored
on the virtual disk from the guest domain can therefore also be accessed directly from the
service domain through the associated volume.
Examples
If the Solaris Volume Manager volume d0 is exported from the primary domain to domain1,
then the configuration of domain1 requires some extra steps.
primary# metainit d0 3 1 c2t70d0s6 1 c2t80d0s6 1 c2t90d0s6
primary# ldm add-vdsdev options=slice /dev/md/dsk/d0 vol3@primary-vds0
primary# ldm add-vdisk vdisk3 vol3@primary-vds0 domain1
After domain1 has been bound and started, the exported volume appears as
/dev/dsk/c0d2s0, for example, and you can use it.
domain1# newfs /dev/rdsk/c0d2s0
domain1# mount /dev/dsk/c0d2s0 /mnt
domain1# echo test-domain1 > /mnt/file
After domain1 has been stopped and unbound, data stored on the virtual disk from domain1
can be directly accessed from the primary domain through Solaris Volume Manager volume
d0.
primary# mount /dev/md/dsk/d0 /mnt
primary# cat /mnt/file
test-domain1
For example, suppose that /dev/md/dsk/d0 is a Solaris Volume Manager RAID volume that is
exported as a virtual disk with the excl option to another domain, and that d0 is configured
with some hot-spare devices. If a component of d0 fails, Solaris Volume Manager replaces the
failing component with a hot spare and resynchronizes the volume. However, the
resynchronization does not actually start: the volume is reported as resynchronizing, but no
progress is made.
primary# metastat d0
d0: RAID
State: Resyncing
Hot spare pool: hsp000
Interlace: 32 blocks
Size: 20097600 blocks (9.6 GB)
Original device:
Size: 20100992 blocks (9.6 GB)
Device Start Block Dbase State Reloc
c2t2d0s1 330 No Okay Yes
c4t12d0s1 330 No Okay Yes
/dev/dsk/c10t600C0FF0000000000015153295A4B100d0s1 330 No Resyncing Yes
In such a situation, the domain using the Solaris Volume Manager volume as a virtual disk has
to be stopped and unbound to complete the resynchronization. Then the Solaris Volume
Manager volume can be resynchronized using the metasync command.
# metasync d0
This removal of the device ID of the virtual disk can cause problems for applications that
attempt to reference the device ID of virtual disks. In particular, Solaris Volume Manager might
be unable to find its configuration or to access its metadevices.
Workaround: After upgrading a service domain to Oracle Solaris 10 1/13 OS, if a guest domain
is unable to find its Solaris Volume Manager configuration or its metadevices, perform the
following procedure.
2 Disable the devid feature of Solaris Volume Manager by adding the following lines to the
/kernel/drv/md.conf file:
md_devid_destroy=1;
md_keep_repl_state=1;
4 Check the Solaris Volume Manager configuration and ensure that it is correct.
5 Re-enable the Solaris Volume Manager devid feature by removing from the
/kernel/drv/md.conf file the two lines that you added in Step 2.
The following servers cannot boot from a disk that has an EFI GPT disk label:
UltraSPARC T2, UltraSPARC T2 Plus, and SPARC T3 servers no matter which system
firmware version is used
SPARC T4 servers that run system firmware versions prior to 8.4.0
SPARC T5 servers, SPARC M5 servers, and SPARC M6 servers that run system firmware
versions prior to 9.1.0
SPARC T7 series servers and SPARC M7 series servers that run system firmware versions
prior to 9.4.3
So, an Oracle Solaris 11.1 boot disk that is created on an up-to-date SPARC T4 server, SPARC
T5 server, SPARC T7 series server, SPARC M5 server, SPARC M6 server, or SPARC M7 series
server cannot be used on older servers or on servers that run older firmware.
This limitation restricts the ability to use either cold or live migration to move a domain from a
recent server to an older server. This limitation also prevents you from using an EFI GPT boot
disk image on an older server.
To ensure that an Oracle Solaris 11.1 boot disk remains compatible with your server and its
firmware, install the Oracle Solaris 11.1 OS on a disk that is configured with an SMI VTOC disk
label.
To maintain backward compatibility with systems that run older firmware, use one of the
following procedures. Otherwise, the boot disk uses the EFI GPT disk label by default. These
procedures show how to ensure that the Oracle Solaris 11.1 OS is installed on a boot disk with
an SMI VTOC disk label on a SPARC T4 server with at least system firmware version 8.4.0, on a
SPARC T5 server, SPARC M5 server, or SPARC M6 server with at least system firmware version
9.1.0, and on a SPARC T7 series server or SPARC M7 series server with at least system firmware
version 9.4.3.
Solution 1: Remove the gpt property so that the firmware does not report that it supports
EFI.
1. From the OpenBoot PROM prompt, disable automatic booting and reset the system to
be installed.
ok setenv auto-boot? false
ok reset-all
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
3. Re-write the SMI VTOC disk label.
partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? y
4. Configure your Oracle Solaris Automatic Installer (AI) to install the Oracle Solaris OS
on slice 0 of the boot disk.
Change the <disk> excerpt in the AI manifest as follows:
<target>
<disk whole_disk="true">
<disk_keyword key="boot_disk"/>
<slice name="0" in_zpool="rpool"/>
</disk>
[...]
</target>
5. Perform the installation of the Oracle Solaris 11.1 OS.
Chapter 11
Using Virtual SCSI Host Bus Adapters
This chapter describes how to use virtual SCSI Host Bus Adapters (HBAs) with Oracle VM
Server for SPARC software.
A vHBA instance provides access to all SCSI devices that are reachable by a specific vSAN
instance. A vHBA can recognize any SCSI device type such as disk, CD, DVD, or tape. The set of
reachable SCSI devices changes based on the set of physical SCSI devices that is currently
known to the virtual SAN's associated physical HBA driver. The identity and number of SCSI
devices known to a specific vHBA are not known until runtime, which is also the case for a
physical HBA driver.
The vHBA has virtual LUNs (vLUNs) as its child devices and they behave the same way as
physical LUNs. For example, you can use the native Oracle Solaris multipathing solution
(MPxIO) with a vHBA instance and its vLUNs. The device path for a vLUN uses the full
cXtYdZsN notation: /dev/[r]dsk/cXtYdZsN. The tY part of the device name indicates the
SCSI target device.
After you configure the virtual SAN and virtual SCSI HBA, you can perform operations such as
booting a virtual LUN from the OpenBoot prompt or viewing all the virtual LUNs by using the
format command.
FIGURE 11-1 Virtual SCSI HBAs With Oracle VM Server for SPARC
A virtual SAN exists in a service domain and is implemented by the vsan kernel module,
whereas a virtual SCSI HBA exists in a guest domain and is implemented by the vhba module. A
virtual SAN is associated with a specific physical SCSI HBA initiator port, whereas a virtual
SCSI HBA is associated with a specific virtual SAN.
The vhba module exports a SCSA-compliant interface to receive I/O requests from any
SCSA-compliant SCSI target driver. The vhba module translates the I/O requests into virtual
I/O protocol messages that are sent through an LDC to the service domain.
The vsan module translates the virtual I/O messages sent by vhba into I/O requests. These
requests are sent to a SCSA-compliant physical SCSI HBA driver. The vsan module returns the
I/O payload and status to vhba through the LDC. Finally, the vhba forwards this I/O response to
the originator of the I/O request.
Although you use the ldm command to name a virtual disk explicitly, a virtual LUN in a guest
domain derives its identity from the identity of the associated physical LUN in the service
domain. See the ldm(1M) man page.
For example, the physical LUN and the virtual LUN share the text shown in bold in the
following device paths:
Physical LUN in the service domain:
/pci@0/pci@0/pci@8/pci@0/pci@2/SUNW,qlc@0/fp@0,0/ssd@w216000c0ff8089d5,0
Virtual LUN in the guest domain:
/virtual-devices@100/channel-devices@200/scsi@1/iport@0/disk@w216000c0ff8089d5,0
Note The guest domain device path is present only when MPxIO is disabled in the guest
domain. If MPxIO is enabled, the scsi_vhci module creates the device path in the guest
domain and it has a different syntax.
Note that the scsi@1 component in the virtual LUN's device path denotes the virtual SCSI HBA
instance of which this virtual LUN is a member.
Because a virtual SCSI HBA's set of virtual LUNs is derived from the service domain at runtime,
virtual LUNs cannot be added or removed explicitly from the guest domain. Instead, you must
add or remove the underlying physical LUNs so that the guest domain's virtual LUN
membership can be altered. An event such as a domain reboot or a domain migration might
cause the guest domain's virtual LUN membership to change. This change occurs because the
virtual LUNs are rediscovered automatically whenever the virtual SCSI HBA's LDC connection
is reset. If a virtual LUN's underlying physical LUN is not found on a future discovery, the
virtual LUN is marked as unavailable and accesses to the virtual LUN return an error similar to
the following:
A virtual SCSI HBA instance is managed by the vhba driver, but a virtual LUN is managed by a
SCSI target driver based on the device type of the underlying physical LUN. The following
output confirms that the vhba driver manages the virtual SCSI HBA instance and that the sd
SCSI disk driver manages the virtual LUN:
# prtconf -a -D /dev/dsk/c2t216000C0FF8089D5d0
SUNW,SPARC-Enterprise-T5220 (driver name: rootnex)
virtual-devices, instance #0 (driver name: vnex)
channel-devices, instance #0 (driver name: cnex)
scsi, instance #0 (driver name: vhba)
iport, instance #3 (driver name: vhba)
disk, instance #30 (driver name: sd)
Each virtual SCSI HBA of a domain has a unique device number that is assigned when the
domain is bound. If a virtual SCSI HBA is added with an explicit device number (by setting the
id property to a decimal value), the specified device number is used. Otherwise, the system
automatically assigns the lowest device number available. In that case, the device number
assigned depends on how the virtual SCSI HBAs were added to the domain. When a domain is
bound, the device number eventually assigned to a virtual SCSI HBA is visible in the output of
the ldm list-bindings and ldm list -o hba commands.
The ldm list-bindings, ldm list -o hba, and ldm add-vhba id=id commands all show and
specify the id property value as a decimal value. The Oracle Solaris OS shows the virtual SCSI
HBA id value as hexadecimal.
The following example shows that the vhba@0 device is the device name for the vh1 virtual SCSI
HBA on the gdom domain.
Caution If a device number is not assigned explicitly to a virtual SCSI HBA, its device number
can change when the domain is unbound and is later re-bound. In that case, the device name
assigned by the OS running in the domain might also change and break the existing
configuration of the system. This might happen, for example, when a virtual SCSI HBA is
removed from the configuration of the domain.
For more information about the commands that are shown in this section, see the ldm(1M) man
page.
Note If at least the Oracle Solaris 11.3 OS is installed in the primary domain, the service
domain can be the control domain.
The ldm list-hba command lists the physical SCSI HBA initiator ports for the specified active
domain. After identifying a logical domain's SCSI HBA initiator ports, you can specify a
particular initiator port on the ldm add-vsan command line to create a virtual SAN.
The following example shows the initiator ports for the SCSI HBAs that are attached to the
svcdom service domain. The -l option shows detailed information.
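The command might be invoked as follows; the detailed output is system-specific and is not
shown here:
primary# ldm list-hba -l svcdom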
If the LUNs you expect to see for an initiator port do not appear in the ldm list-hba output,
verify that multipathing is disabled in the referenced service domain for the referenced initiator
port. See Managing SAN Devices and Multipathing in Oracle Solaris 11.3.
If you add a virtual SAN to a bound service domain, sometimes the connection to the guest
domain is not established. You might see the following symptoms:
Disks from the virtual SCSI HBA do not appear in format output
LDC errors appear when booting from a virtual LUN
The vSAN name is unique to the control domain and not to the specified domain name. The
domain name identifies the domain in which the SCSI HBA initiator port is configured. You
can create multiple virtual SANs that reference the same initiator port path.
You can configure an initiator port path into one or more virtual SANs by using the ldm
add-vsan command. This configuration enables multiple service domains in the control
domain to use the same initiator port path.
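A minimal sketch of such a command, using an illustrative initiator port path together with the
port0 and svcdom names that appear later in this chapter:
primary# ldm add-vsan /SYS/MB/SASHBA0/HBA/PORT2 port0 svcdom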
Note When the Oracle Solaris 11.3 OS is running on the service domain, the ldm add-vsan
command verifies that the initiator port path is a valid device path. If the specified service
domain is not active when you run the ldm add-vsan command, the specified initiator port
path cannot be verified by the service domain. If the initiator port path does not correspond to
an installed physical SCSI HBA initiator port that is part of the service domain, a warning
message is written to the service domain's system log after the service domain becomes active.
In this example, you create the port0_vhba virtual SCSI HBA on the gdom guest domain that
communicates with the port0 virtual SAN.
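The command might look as follows, using the names from this example:
primary# ldm add-vhba port0_vhba port0 gdom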
In this example, the service domain that has the virtual SAN is svcdom and the guest domain
that has the virtual SCSI HBA is gdom. Note that the virtual HBA identifier is not allocated in
this example because the gdom domain is not yet bound.
VSAN
NAME TYPE DEVICE IPORT
port0 VSAN [/pci@300/pci@1/pci@0/pci@4/SUNW,emlxs@0,1/fp@0,0]
------------------------------------------------------------------------------
NAME
gdom
VHBA
NAME VSAN DEVICE TOUT SERVER
port0_vhba port0 0 svcdom
The default timeout value of zero causes vhba to wait indefinitely to create the LDC connection
with the virtual SAN.
In this example, you set a timeout of 90 seconds for the port0_vhba virtual SCSI HBA on the
gdom guest domain.
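The command might look as follows:
primary# ldm set-vhba timeout=90 port0_vhba gdom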
Ensure that neither the OS nor any applications are actively using the virtual SCSI HBA before
you attempt to remove it. If the virtual SCSI HBA is in use, the ldm remove-vhba command
fails.
In this example, you remove the port0_vhba virtual SCSI HBA from the gdom guest domain.
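A sketch of the removal command, using the names from this example:
primary# ldm remove-vhba port0_vhba gdom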
First, remove the virtual SCSI HBA that is associated with the virtual SAN. Then, use the ldm
remove-vsan command to remove the virtual SAN.
For example, the following command synchronizes the SCSI devices for the port0_vhba virtual
SCSI HBA on the gdom domain:
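primary# ldm rescan-vhba port0_vhba gdom
(This sketch assumes the rescan-vhba subcommand that is described in the ldm(1M) man
page.)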
A virtual LUN that represents a SCSI disk, for example, appears in the guest domain as a regular
disk under /dev/[r]dsk. The virtual LUN is visible in the output of the format command
because the underlying associated physical LUN is of type disk. The device path of the virtual
LUN can be operated on by other commands, such as prtpicl and prtconf.
If the virtual LUN represents a CD or a DVD, its device path is defined in /dev/[r]dsk. If the
virtual LUN represents a tape device, its device path is defined in /dev/rmt. For commands that
can operate on the virtual LUN, see Managing Devices in Oracle Solaris 11.3.
Although a virtual SAN is conceptually associated with a physical SAN, it does not need to be.
You can create a virtual SAN that comprises one or more of a server's local disks. For example,
some systems have disks that are reachable from the motherboard's SAS HBA, which is shown
as SASHBA in the following ldm list-hba output:
If you define a virtual SAN that encapsulates a server's local disks, be sure to use the following
zpool command so that you do not mistakenly create a virtual LUN for the disk from which the
primary domain boots. For example, the following zpool command confirms that the root
rpool is mounted on disk c4t5000CCA0564F6B89d0, which is reachable from the
/SYS/MB/SASHBA0/HBA/PORT2 initiator port:
# zpool iostat -v
capacity operations bandwidth
pool alloc free read write read write
----------------------- ----- ----- ----- ----- ----- -----
rpool 25.0G 531G 0 10 257 81.7K
c4t5000CCA0564F6B89d0 25.0G 531G 0 10 257 81.7K
----------------------- ----- ----- ----- ----- ----- -----
When configuring a virtual SAN, note that only SCSI target devices with a LUN 0 have their
physical LUNs visible in the guest domain. This constraint is imposed by an Oracle Solaris OS
implementation that requires a target's LUN 0 to respond to the SCSI REPORT LUNS command.
As in native multipathing, a specific back-end SCSI device can be accessed by one or more
paths. For the virtual SCSI HBA subsystem, each path is associated with one virtual LUN. The
scsi_vhci module implements the native multipathing behavior, which sends I/O requests to
the set of virtual LUNs based on arguments passed to the associated mpathadm administrative
command. For more information, see the scsi_vhci(7D) and mpathadm(1M) man pages.
To configure multipathing, you must configure two or more distinct paths from the guest
domain to the same back-end device. Note that multipathing still operates with one configured
path. However, the expected configuration has two or more paths that send their I/O requests
through distinct physical SCSI HBA initiator ports that reside on distinct service domains.
1. Execute a pair of ldm add-vhba and ldm add-vsan commands for each separate path to the
back-end storage.
2. Enable native multipathing in the guest domain for the initiator ports that are managed by
the vhba virtual HBA module.
Note The virtual SCSI HBA subsystem does not support the enabling of native multipathing in
the service domain. If you expect to see LUNs and they are not shown in the ldm list-hba
output for a specific service domain, verify that MPxIO is disabled for the LUNs' associated
physical SCSI HBA initiator port. If necessary, use the stmsboot command to disable MPxIO
for the initiator port.
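For example, the following sketch disables MPxIO for Fibre Channel initiator ports in a service
domain; the -D argument depends on your HBA driver (fp is shown here):
svcdom# stmsboot -D fp -d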
The following figure is an example of a multipathing configuration. It shows one physical LUN
of a SAN that is accessed by two paths that are managed by MPxIO. For a procedure that
describes how to create the configuration shown in this figure, see How to Configure Virtual
SCSI HBA Multipathing on page 214.
1 Create an I/O domain with the physical SCSI HBA assigned to it.
See Chapter 5, Configuring I/O Domains.
2 Export the virtual SAN for the first initiator port path from the first service domain.
ldm add-vsan vSAN-path1 vSAN-name domain-name
vSAN-path1 is the first initiator port path to the SAN.
For example, the following command exports the vsan-mpxio1 virtual SAN from the svcdom1
domain:
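primary# ldm add-vsan /SYS/MB/SASHBA0/HBA/PORT0 vsan-mpxio1 svcdom1
(The initiator port path shown here is illustrative.)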
3 Export the virtual SAN for the second initiator port path from the second service domain.
ldm add-vsan vSAN-path2 vSAN-name domain-name
vSAN-path2 is the second initiator port path to the SAN.
For example, the following command exports the vsan-mpxio2 virtual SAN from the svcdom2
domain:
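primary# ldm add-vsan /SYS/MB/SASHBA1/HBA/PORT0 vsan-mpxio2 svcdom2
(Again, the initiator port path shown here is illustrative.)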
5 Specify the timeout property value for the virtual SCSI HBAs on the guest domain.
ldm set-vhba timeout=seconds vHBA-name domain-name
For example, the following commands set the timeout property value to 30 for the
vsan-mpxio1 and vsan-mpxio2 virtual SCSI HBAs on the gdom domain:
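primary# ldm set-vhba timeout=30 vsan-mpxio1 gdom
primary# ldm set-vhba timeout=30 vsan-mpxio2 gdom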
Before you issue the boot command at the OpenBoot PROM prompt, run the probe-scsi-all
command to find the guest domain's virtual SCSI HBAs and associated virtual LUNs.
The following annotated example highlights the relevant parts of the output:
{0} ok probe-scsi-all
/virtual-devices@100/channel-devices@200/scsi@0 Line 1
vHBA
This example probe-scsi-all output shows one virtual SCSI HBA instance (scsi@0) that has
two LUNs that are of type disk.
To boot from a specific virtual LUN, manually compose the device path to pass to the boot
command. The device path has this syntax:
vhba-device-path/disk@target-port,lun:slice
To boot from the LUN on Line 3, you must compose the device path as follows:
Take the value of target-port from Line 2
Take the value of vhba-device-path from Line 1
/virtual-devices@100/channel-devices@200/scsi@0/disk@w200200110d214900,0
You can pass this device path to the OBP boot command as follows:
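{0} ok boot /virtual-devices@100/channel-devices@200/scsi@0/disk@w200200110d214900,0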
The following example shows the configuration of a virtual SCSI HBA that has a SCSI DVD
device attached to the primary domain.
From the gdom domain console, probe the SCSI devices and boot from the DVD.
{0} ok probe-scsi-all
/virtual-devices@100/channel-devices@200/scsi@0
vHBA
TPORT-PHYS: p3
LUN: 0 Removable Read Only device TEAC DV-W28SS-V 1.0B
Sometimes, file systems or applications might require an I/O operation to fail and report an
error if the service domain is unavailable for too long. You can set a connection timeout period
for each virtual SCSI HBA to establish a connection between the virtual SCSI HBA on a guest
domain and the virtual SAN on the service domain. When that timeout period is reached, any
pending I/O and any new I/O operations fail as long as the service domain is unavailable and
the connection between the virtual SCSI HBA and the virtual SAN is not re-established.
Other circumstances in which you might want to specify the timeout value include the
following:
If you want MPxIO to fail over to another configured path, you must set the timeout for each
virtual SCSI HBA involved.
If you perform a live migration, set the timeout property value to 0 for each virtual SCSI
HBA in the guest domain to be migrated. After the migration completes, reset the timeout
property to the original setting for each virtual SCSI HBA.
To find out how to set the timeout value, see Setting the Virtual SCSI HBA Timeout Option
on page 210.
The scsi_vhci driver, which implements native Oracle Solaris multipathing, handles
reservation persistency during path failover for both SCSI-2 reservations and SCSI-3
reservations. The vhba module plugs in to the Oracle Solaris I/O framework and thus supports
SCSI reservations by leveraging the scsi_vhci support.
Each physical memory fragment is associated with one cookie. If an I/O request requires more
cookies than the configured maximum number of cookies, the I/O request fails.
The error message shows the actual number of cookies that are required. To eliminate the error,
change the vhba_desc_ncookies value in the /etc/system file, which specifies the number of
per-I/O buffer cookies to use, to be at least as large as the actual value. Also, increase the value of
the vhba_desc_max_ncookies property, which specifies the allowable maximum number of
cookies.
For information about correctly creating or updating /etc/system property values, see
Updating Property Values in the /etc/system File on page 362.
You then re-create the virtual SCSI HBA's connection by performing a sequence of ldm
remove-vhba and ldm add-vhba commands or by rebooting the guest domain.
For example, to set the vhba_desc_ncookies property value to 12, add the following line to the
/etc/system file:
set vhba:vhba_desc_ncookies = 12
Chapter 12
Using Virtual Networks

This chapter describes how to use a virtual network with Oracle VM Server for SPARC software,
and covers the following topics:
Introduction to a Virtual Network on page 222
Oracle Solaris 11 Networking Overview on page 222
Oracle Solaris 10 Networking Overview on page 225
Maximizing Virtual Network Performance on page 226
Virtual Switch on page 227
Virtual Network Device on page 229
Viewing Network Device Configurations and Statistics on page 231
Controlling the Amount of Physical Network Bandwidth That Is Consumed by a Virtual
Network Device on page 235
Virtual Device Identifier and Network Interface Name on page 238
Assigning MAC Addresses Automatically or Manually on page 241
Using Network Adapters With Domains That Run Oracle Solaris 10 on page 243
Configuring a Virtual Switch and the Service Domain for NAT and Routing on page 244
Configuring IPMP in an Oracle VM Server for SPARC Environment on page 248
Using VLAN Tagging on page 255
Using Private VLANs on page 260
Tuning Packet Throughput Performance on page 265
Using NIU Hybrid I/O on page 266
Using Link Aggregation With a Virtual Switch on page 271
Configuring Jumbo Frames on page 273
Oracle Solaris 11 Networking-Specific Feature Differences on page 277
Using Virtual NICs on Virtual Networks on page 279
Oracle Solaris OS networking changed a great deal between the Oracle Solaris 10 OS and the
Oracle Solaris 11 OS. For information about issues to consider, see Oracle Solaris 10
Networking Overview on page 225, Oracle Solaris 11 Networking Overview on page 222, and
Oracle Solaris 11 Networking-Specific Feature Differences on page 277.
Introduction to a Virtual Network
Oracle Solaris networking differs greatly between the Oracle Solaris 10 OS and the Oracle
Solaris 11 OS. The following two sections provide overview information about networking for
each OS.
Note Oracle Solaris 10 networking behaves the same in a domain as it does on a physical
system. The same is true for Oracle Solaris 11 networking. For more information about Oracle
Solaris OS networking, see Oracle Solaris 10 Documentation and Oracle Solaris 11.3
Documentation.
The feature differences between Oracle Solaris 10 and Oracle Solaris 11 networking are
described in Oracle Solaris 11 Networking Overview on page 222.
The following Oracle Solaris 11 networking features are important to understand when you use
the Oracle VM Server for SPARC software:
All network configuration is performed by the ipadm and dladm commands.
The vanity name by default feature generates generic link names, such as net0, for all
physical network adapters. This feature also generates generic names for virtual switches
(vswn) and virtual network devices (vnetn), which appear like physical network adapters to
the OS. To identify the generic link name that is associated with a physical network device,
use the dladm show-phys command.
By default in Oracle Solaris 11, physical network device names use generic vanity names.
Generic names, such as net0, are used instead of device driver names, such as nxge0, which
were used in Oracle Solaris 10.
The following command creates a virtual switch for the primary domain by specifying the
generic name, net0, instead of a driver name, such as nxge0:
primary# ldm add-vsw net-dev=net0 primary-vsw0 primary
The Oracle Solaris 11 OS uses virtual network interface cards (VNICs) to create internal
virtual networks.
A VNIC is a virtual instantiation of a physical network device that can be created from the
physical network device and assigned to a zone.
Use the Oracle Solaris 11 DefaultFixed network configuration profile (NCP) when
configuring the Oracle VM Server for SPARC software.
For Oracle Solaris 11 domains, use the DefaultFixed NCP. You can enable this profile
during or after installation. During an Oracle Solaris 11 installation, select the Manual
networking configuration.
Do not replace the primary network interface with the virtual switch (vsw) interface. The
service domain can use the existing primary network interface to communicate with the
guest domains that have virtual network devices connected to the same virtual switch.
Do not use the physical network adapter's MAC address for the virtual switch because using
the physical adapter's MAC address for the virtual switch conflicts with the primary network
interface.
Note In this release, use the DefaultFixed NCP to configure datalinks and network interfaces
on Oracle Solaris 11 systems by using the dladm or ipadm command.
Ensure that the DefaultFixed NCP is enabled by using the netadm list command. See
Chapter 7, Using Datalink and Interface Configuration Commands on Profiles, in Oracle
Solaris Administration: Network Interfaces and Network Virtualization.
The following diagram shows that a guest domain that runs the Oracle Solaris 10 OS is fully
compatible with an Oracle Solaris 11 service domain. The only differences are features added or
enhanced in the Oracle Solaris 11 OS.
FIGURE 12-1 Oracle VM Server for SPARC Network Overview for the Oracle Solaris 11 OS
The diagram shows that network device names, such as nxge0 and vnet0, can be represented by
generic link names, such as netn in Oracle Solaris 11 domains. Also note the following:
The virtual switch in the service domain is connected to the guest domains, which enables
guest domains to communicate with each other.
The virtual switch is also connected to the physical network device nxge0, which enables
guest domains to communicate with the physical network.
The virtual switch also enables guest domains to communicate with the service domain
network interface net0 and with VNICs on the same physical network device as nxge0. This
includes communication between the guest domains and the Oracle Solaris 11 service
domain. Do not configure the virtual switch itself (the vswn device) as a network device, as
this functionality has been deprecated in Oracle Solaris 11 and is no longer supported.
The virtual network device vnet0 in an Oracle Solaris 10 guest domain can be configured as
a network interface by using the ifconfig command.
The virtual network device vnet0 in an Oracle Solaris 11 guest domain might appear with a
generic link name, such as net0. It can be configured as a network interface by using the
ipadm command.
A virtual switch behaves like a regular physical network switch and switches network packets
between the different systems to which it is connected. A system can be a guest domain, a
service domain, or a physical network.
FIGURE 12-2 Oracle VM Server for SPARC Network Overview for the Oracle Solaris 10 OS
The previous diagram shows interface names such as nxge0, vsw0, and vnet0, which apply to the
Oracle Solaris 10 OS only. Also note the following:
The virtual switch in the service domain is connected to the guest domains, which enables
guest domains to communicate with each other.
The virtual switch is also connected to the physical network interface nxge0, which enables
guest domains to communicate with the physical network.
The virtual switch network interface vsw0 is created in the service domain, which enables
the two guest domains to communicate with the service domain.
The virtual switch network interface vsw0 in the service domain can be configured by using
the Oracle Solaris 10 ifconfig command.
The virtual network device vnet0 in an Oracle Solaris 10 guest domain can be configured as
a network interface by using the ifconfig command.
The virtual network device vnet0 in an Oracle Solaris 11 guest domain might appear with a
generic link name, such as net0. It can be configured as a network interface by using the
ipadm command.
The virtual switch behaves like a regular physical network switch and switches network packets
between the different systems, such as guest domains, the service domain, and the physical
network, to which it is connected. The vsw driver provides the network device functionality that
enables the virtual switch to be configured as a network interface.
Note Running the fully qualified Oracle Solaris OS version provides you with access to new
features. See Fully Qualified Oracle Solaris OS Versions in Oracle VM Server for
SPARC 3.3 Installation Guide.
Service domain. At least the Oracle Solaris 11.1 SRU 9 OS or the Oracle Solaris 10 OS
with the 150031-03 patch.
Guest domain. At least the Oracle Solaris 11.1 SRU 9 OS or the Oracle Solaris 10 OS
with the 150031-03 patch.
CPU and memory requirements. Ensure that you assign sufficient CPU and memory
resources to the service domain and the guest domains.
Service domain. Because the service domain acts as a data proxy for the guest domains,
assign at least 2 CPU cores and at least 4 Gbytes of memory to the service domain.
Guest domain. Configure each guest domain to be able to drive at least 10-Gbps
performance. Assign at least 2 CPU cores and at least 4 Gbytes of memory to each guest
domain.
Virtual Switch
A virtual switch (vsw) is a component running in a service domain and managed by the virtual
switch driver. A virtual switch can be connected to some guest domains to enable network
communications between those domains. In addition, if the virtual switch is also associated
with a physical network interface, network communication is permitted between guest
domains and the physical network over the physical network interface. When running in an
Oracle Solaris 10 service domain, a virtual switch also has a network interface, vswn, which
permits the service domain to communicate with the other domains that are connected to that
virtual switch. The virtual switch can be used like any regular network interface and configured
with the Oracle Solaris 10 ifconfig command.
Assigning a virtual network device to a domain creates an implicit dependency on the domain
providing the virtual switch. You can view these dependencies or view domains that depend on
this virtual switch by using the ldm list-dependencies command. See Listing Domain I/O
Dependencies on page 382.
In an Oracle Solaris 11 service domain, the virtual switch cannot be used as a regular network
interface. If the virtual switch is connected to a physical network interface, communication with
the service domain is possible by using this physical interface. If configured without a physical
interface, you can enable communication with the service domain by using an etherstub as the
network device (net-dev) that is connected with a VNIC.
To determine which network device to use as the back-end device for the virtual switch, search
for the physical network device in the dladm show-phys output or use the ldm list-netdev
command to list the network devices for logical domains.
Note When a virtual switch is added to an Oracle Solaris 10 service domain, its network
interface is not created. So, by default, the service domain is unable to communicate with the
guest domains connected to its virtual switch. To enable network communications between
guest domains and the service domain, the network interface of the associated virtual switch
must be created and configured in the service domain. See Enabling Networking Between the
Oracle Solaris 10 Service Domain and Other Domains on page 52 for instructions.
This situation occurs only for the Oracle Solaris 10 OS and not for the Oracle Solaris 11 OS.
You can add a virtual switch to a domain, set options for a virtual switch, and remove a virtual
switch by using the ldm add-vsw, ldm set-vsw, and ldm rm-vsw commands, respectively. See
the ldm(1M) man page.
When you create a virtual switch on a VLAN tagged instance of a NIC or an aggregation, you
must specify the NIC (nxge0), the aggregation (aggr3), or the vanity name (net0) as the value of
the net-dev property when you use the ldm add-vsw or ldm set-vsw command.
Note Starting with the Oracle Solaris 11.2 SRU 1 OS, you can dynamically update the net-dev
property value by using the ldm set-vsw command. In previous Oracle Solaris OS releases,
using the ldm set-vsw command to update the net-dev property value in the primary domain
causes the primary domain to enter a delayed reconfiguration.
You cannot add a virtual switch on top of an InfiniBand IP-over-InfiniBand (IPoIB) network
device. Although the ldm add-vsw and ldm add-vnet commands appear to succeed, no data
will flow because these devices transport IP packets by means of the InfiniBand transport layer.
The virtual switch only supports Ethernet as a transport layer.
The following command creates a virtual switch on a physical network adapter called net0:
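primary# ldm add-vsw net-dev=net0 primary-vsw0 primary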
The following example uses the ldm list-netdev -b command to show only the valid virtual
switch back-end devices for the svcdom service domain.
A virtual network device can be used as a network interface with the name vnetn, which can be
used like any regular network interface and configured with the Oracle Solaris 10 ifconfig
command or the Oracle Solaris 11 ipadm command.
Note For Oracle Solaris 11, the devices are assigned generic names, so vnetn would use a
generic name, such as net0.
You can add a virtual network device to a domain, set options for an existing virtual network
device, and remove a virtual network device by using the ldm add-vnet, ldm set-vnet, and ldm
rm-vnet commands, respectively. See the ldm(1M) man page.
See the information about Oracle VM Server for SPARC networking for Oracle Solaris 10 and
Oracle Solaris 11 in Figure 12-2 and Figure 12-1, respectively.
The inter-vnet LDC channels are configured so that virtual network devices can communicate
directly to achieve high guest-to-guest communications performance. However, as the number
of virtual network devices in a virtual switch device increases, the number of required LDC
channels for inter-vnet communications increases quadratically.
You can choose to enable or disable inter-vnet LDC channel allocation for all virtual network
devices attached to a given virtual switch device. By disabling this allocation, you can reduce the
consumption of LDC channels, which are limited in number.
By not assigning inter-vnet channels, more LDC channels are available for use to add more
virtual I/O devices to a guest domain.
You can use the ldm add-vsw and the ldm set-vsw commands to specify a value of on or off for
the inter-vnet-link property.
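For example, the following sketch disables inter-vnet LDC channel allocation on an existing
virtual switch; the primary-vsw0 name is illustrative:
primary# ldm set-vsw inter-vnet-link=off primary-vsw0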
The following figure shows a typical virtual switch that has three virtual network devices. The
inter-vnet-link property is set to on, which means that inter-vnet LDC channels are
allocated. Guest-to-guest communication between vnet1 and vnet2 is performed directly
without going through the virtual switch.
The following figure shows the same virtual switch configuration with the inter-vnet-link
property set to off. The inter-vnet LDC channels are not allocated. Fewer LDC channels are
used than when the inter-vnet-link property is set to on. In this configuration, guest-to-guest
communications between vnet1 and vnet2 must go through vsw1.
Note Disabling the assignment of inter-vnet LDC channels does not prevent guest-to-guest
communications. Instead, all guest-to-guest communications traffic goes through the virtual
switch rather than directly from one guest domain to another guest domain.
FIGURE 12-4 Virtual Switch Configuration That Does Not Use Inter-Vnet Channels
To use these commands, you must run at least the Oracle Solaris 11.2 SRU 1 OS in the guest
domain.
MAC_ADDRS : a0:36:9f:0a:c5:d2
net5 VNET ETHER up 0 -- ldg1-vsw1/vnet1_ldg1
[/virtual-devices@100/channel-devices@200/network@1]
MTU : 1500 [1500-1500]
IPADDR : 0.0.0.0 /255.0.0.0
: fe80::214:4fff:fef8:5062/ffc0::
MAC_ADDRS : 00:14:4f:f8:50:62
net6 VNET ETHER up 0 -- ldg1-vsw1/vnet2_ldg1
[/virtual-devices@100/channel-devices@200/network@2]
MTU : 1500 [1500-1500]
IPADDR : 0.0.0.0 /255.0.0.0
: fe80::214:4fff:fef8:af92/ffc0::
MAC_ADDRS : 00:14:4f:f8:af:92
aggr2 AGGR ETHER unknown 0 net1,net3 --
MODE : TRUNK
POLICY : L2,L3
LACP_MODE : ACTIVE
MEMBER : net1 [PORTSTATE = attached]
MEMBER : net3 [PORTSTATE = attached]
MAC_ADDRS : a0:36:9f:0a:c5:d2
ldoms-vsw0.vport3 VNIC ETHER unknown 0 -- ldg1-vsw1/vnet2_ldg1
MTU : 1500 [576-1500]
MAC_ADDRS : 00:14:4f:f8:af:92
ldoms-vsw0.vport2 VNIC ETHER unknown 0 -- ldg1-vsw1/vnet1_ldg1
MTU : 1500 [576-1500]
MAC_ADDRS : 00:14:4f:f8:50:62
ldoms-vsw0.vport1 VNIC ETHER unknown 0 -- ldg1-vsw1/vnet2_ldg3
MTU : 1500 [576-1500]
MAC_ADDRS : 00:14:4f:f9:d3:88
The following example shows the default network statistics for all domains in the system.
Use the ldm add-vnet and ldm set-vnet commands to specify the bandwidth limit by
providing a value for the maxbw property. Use the ldm list-bindings or the ldm list-domain
-o network command to view the maxbw property value for an existing virtual network device.
The minimum bandwidth limit is 10 Mbps.
Note This feature is not supported by a Hybrid I/O-enabled virtual network device. The maxbw
property is not enforced for Hybrid mode virtual networks because the Hybrid I/O assigns a
specific unit of hardware resource that cannot be changed to limit bandwidth. To limit a virtual
network device's bandwidth, you must disable the Hybrid mode.
The bandwidth resource control applies only to the traffic that goes through the virtual switch.
Thus, inter-vnet traffic is not subjected to this limit. If you do not have a physical back-end
device configured, you can ignore bandwidth resource control.
The minimum supported bandwidth limit depends on the Oracle Solaris network stack in the
service domain. The bandwidth limit can be configured with any desired high value. There is no
upper limit. The bandwidth limit ensures only that the bandwidth does not exceed the
configured value. Thus, you can configure a bandwidth limit with a value greater than the link
speed of the physical network device that is assigned to the virtual switch.
Use the ldm set-vnet command to specify the bandwidth limit for an existing virtual network
device. You can also clear the bandwidth limit by specifying a blank value for the maxbw
property, as shown in Example 12-6.
The following examples show how to use the ldm command to specify the bandwidth limit. The
bandwidth is specified as an integer with a unit. The unit is M for megabits-per-second or G for
gigabits-per-second. The unit is megabits-per-second if you do not specify a unit.
EXAMPLE 12-4 Setting the Bandwidth Limit When Creating a Virtual Network Device
The following command creates a virtual network device (vnet0) that has a bandwidth limit of
100 Mbps.
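A sketch, assuming a primary-vsw0 virtual switch and an ldg1 guest domain:
primary# ldm add-vnet maxbw=100M vnet0 primary-vsw0 ldg1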
The following command would issue an error message when attempting to set a bandwidth
limit below the minimum value, which is 10 Mbps.
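For example, with the same assumed names:
primary# ldm set-vnet maxbw=2M vnet0 ldg1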
EXAMPLE 12-5 Setting the Bandwidth Limit on an Existing Virtual Network Device
The following command sets the bandwidth limit to 200 Mbps on the existing vnet0 device.
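A sketch, with the same assumed domain name:
primary# ldm set-vnet maxbw=200M vnet0 ldg1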
Depending on the real-time network traffic pattern, the amount of bandwidth might not reach
the specified limit of 200 Mbps. For example, the bandwidth might be 95 Mbps, which does not
exceed the 200 Mbps limit.
The following command sets the bandwidth limit to 2 Gbps on the existing vnet0 device.
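Again as a sketch, with the same assumed domain name:
primary# ldm set-vnet maxbw=2G vnet0 ldg1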
Because there is no upper limit on bandwidth in the MAC layer, you can still set the limit to be 2
Gbps even if the underlying physical network speed is less than 2 Gbps. In such a case, there is
no bandwidth limit effect.
EXAMPLE 12-6 Clearing the Bandwidth Limit on an Existing Virtual Network Device
The following command clears the bandwidth limit on the specified virtual network device
(vnet0). By clearing this value, the virtual network device uses the maximum bandwidth
available, which is provided by the underlying physical device.
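A sketch, with the same assumed domain name:
primary# ldm set-vnet maxbw= vnet0 ldg1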
EXAMPLE 12-7 Viewing the Bandwidth Limit of an Existing Virtual Network Device
The ldm list-bindings command shows the value of the maxbw property for the specified
virtual network device, if defined.
The following command shows that the vnet0 virtual network device has a bandwidth limit of
15 Mbps. If no bandwidth limit is set, the MAXBW field is blank.
NETWORK
NAME SERVICE ID DEVICE
vnet0 primary-vsw0@primary 0 network@0
You can also use the dladm show-linkprop command to view the maxbw property value as
follows:
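ldg1# dladm show-linkprop -p maxbw net0
(This sketch assumes that the virtual network device appears in the guest domain as net0.)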
Each virtual switch and virtual network device of a domain has a unique device number that is
assigned when the domain is bound. If a virtual switch or virtual network device was added with
an explicit device number (by setting the id property), the specified device number is used.
Otherwise, the system automatically assigns the lowest device number available. In that case,
the device number assigned depends on how virtual switch or virtual network devices were
added to the system. The device number eventually assigned to a virtual switch or virtual
network device is visible in the output of the ldm list-bindings command when a domain is
bound.
The following example shows that the primary domain has one virtual switch, primary-vsw0.
This virtual switch has a device number of 0 (switch@0).
The following example shows that the ldg1 domain has two virtual network devices: vnet and
vnet1. The vnet device has a device number of 0 (network@0) and the vnet1 device has a device
number of 1 (network@1).
Similarly, when a domain with a virtual network device is running the Oracle Solaris OS, the
virtual network device has a network interface, vnetN. However, the network interface number
of the virtual network device, N, is not necessarily the same as the device number of the virtual
network device, n.
Note On Oracle Solaris 11 systems, generic link names in the form of netn are assigned to both
vswn and vnetn. Use the dladm show-phys command to identify which netn names map to the
vswn and vnetn devices.
Caution The Oracle Solaris OS preserves the mapping between the name of a network interface
and a virtual switch or a virtual network device based on the device number. If a device number
is not explicitly assigned to a virtual switch or virtual network device, its device number can
change when the domain is unbound and is later bound again. In that case, the network
interface name assigned by the OS running in the domain can also change and make the
existing system configuration unusable. This situation might happen, for example, when a
virtual switch or a virtual network interface is removed from the configuration of the domain.
You cannot use the ldm list-* commands to directly determine the Oracle Solaris OS network
interface name that corresponds to a virtual switch or virtual network device. However, you can
obtain this information by using a combination of the output from ldm list -l command and
from the entries under /devices on the Oracle Solaris OS.
The following example shows the ldm list-netdev and ldm list -o network commands.
The ldm list -o network command shows the virtual network devices in the NAME field. The
ldm list-netdev output shows the corresponding OS interface name in the NAME column.
To verify that the ldm list-netdev output is correct, run the dladm show-phys and dladm
show-linkprop -p mac-address commands from the ldg1 domain:
1 Use the ldm command to find the virtual network device number for net-c.
primary# ldm list -l ldg1
...
NETWORK
NAME SERVICE DEVICE MAC
net-a primary-vsw0@primary network@0 00:14:4f:f8:91:4f
net-c primary-vsw0@primary network@2 00:14:4f:f8:dd:68
...
The virtual network device number for net-c is 2 (network@2).
To determine the network interface name of a virtual switch, find the virtual switch device
number, n, as switch@n.
2 Find the corresponding network interface on ldg1 by logging into ldg1 and finding the entry
for this device number under /devices.
ldg1# uname -n
ldg1
ldg1# find /devices/virtual-devices@100 -type c -name network@2\*
/devices/virtual-devices@100/channel-devices@200/network@2:vnet1
The network interface name is the part of the entry after the colon; that is, vnet1.
To determine the network interface name of a virtual switch, replace the argument to the -name
option with virtual-network-switch@n\*. Then, find the network interface with the name
vswN.
3 Verify that vnet1 has the MAC address 00:14:4f:f8:dd:68 as shown in the ldm list -l output
for net-c in Step 1.
The advantage to having the Logical Domains Manager assign the MAC addresses is that it uses
the block of MAC addresses dedicated for use with logical domains. Also, the Logical Domains
Manager detects and prevents MAC address collisions with other Logical Domains Manager
instances on the same subnet. This behavior frees you from having to manually manage your
pool of MAC addresses.
MAC address assignment happens as soon as a logical domain is created or a network device is
configured into a domain. In addition, the assignment is persistent until the device, or the
logical domain itself, is removed.
The following block of MAC addresses is dedicated for use with logical domains:
00:14:4F:F8:00:00 - 00:14:4F:FF:FF:FF
The lower 256K addresses are used by the Logical Domains Manager for automatic MAC
address allocation, and you cannot manually request an address in this range:
00:14:4F:F8:00:00 - 00:14:4F:FB:FF:FF
You can use the upper half of this range for manual MAC address allocation:
00:14:4F:FC:00:00 - 00:14:4F:FF:FF:FF
Note In Oracle Solaris 11, the allocation of MAC addresses for VNICs uses addresses outside
these ranges.
To obtain this MAC address, the Logical Domains Manager iteratively attempts to select an
address and then checks for potential collisions. The MAC address is randomly selected from
the 256K range of addresses set aside for this purpose. The MAC address is selected randomly to
lessen the chance of a duplicate MAC address being selected as a candidate.
The address selected is then checked against other Logical Domains Managers on other systems
to prevent duplicate MAC addresses from actually being assigned. The algorithm employed is
described in Duplicate MAC Address Detection on page 242. If the address is already
assigned, the Logical Domains Manager iterates, choosing another address and again checking
for collisions. This process continues until a MAC address is found that is not already allocated
or a time limit of 30 seconds has elapsed. If the time limit is reached, then the creation of the
device fails, and an error message similar to the following is shown.
Automatic MAC allocation failed. Please set the vnet MAC address manually.
To check for duplicates, the Logical Domains Manager sends a multicast message that contains
the MAC address that it wants to assign to the device. The Logical Domains Manager
attempting to assign the MAC address waits for one second for a response. If a different device
on another Oracle VM Server for SPARC-enabled system has already been assigned that MAC
address, the Logical Domains Manager on that system sends a response containing the MAC
address in question. If the requesting Logical Domains Manager receives a response, it notes the
chosen MAC address has already been allocated, chooses another, and iterates.
By default, these multicast messages are sent only to other managers on the same subnet. The
default time-to-live (TTL) is 1. The TTL can be configured by using the Service Management
Facility (SMF) property ldmd/hops.
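For example, a minimal sketch that raises the TTL to 4, assuming the standard ldmd service
instance (svc:/ldoms/ldmd:default); the value 4 is only illustrative:
primary# svccfg -s ldmd setprop ldmd/hops = 4
primary# svcadm refresh ldmd
primary# svcadm restart ldmd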
If the Logical Domains Manager on a system is shut down for any reason, duplicate MAC
addresses could occur while the Logical Domains Manager is down.
Automatic MAC allocation occurs at the time the logical domain or network device is created
and persists until the device or the logical domain is removed.
Note A detection check for duplicate MAC addresses is performed when the logical domain or
network device is created and when the logical domain is started.
For more information about using link aggregation, see Using Link Aggregation With a
Virtual Switch on page 271.
Configuring the virtual switch over an etherstub back-end device in this way enables guest
domains to communicate with zones in the service domain. Use the dladm
create-etherstub command to create an etherstub device.
The following diagram shows how virtual switches, etherstub devices, and VNICs can be used
to set up Network Address Translation (NAT) in a service domain.
You might consider using persistent routes. For more information, see Troubleshooting Issues
When Adding a Persistent Route in Troubleshooting Network Administration Issues in Oracle
Solaris 11.3 and Creating Persistent (Static) Routes in Configuring and Managing Network
Components in Oracle Solaris 11.3.
2 Create a virtual switch that uses stub0 as the physical back-end device.
primary# ldm add-vsw net-dev=stub0 primary-stub-vsw0 primary
2 Create the virtual switch as a network device in addition to the physical network device being
used by the domain.
See How to Configure the Virtual Switch as the Primary Interface on page 52 for more
information about creating the virtual switch.
5 Configure IP routing in the service domain, and set up required routing tables in all the
domains.
For more information about IP routing, see Packet Forwarding and Routing on IPv4
Networks in Oracle Solaris Administration: IP Services.
If a physical link failure occurs in the service domain, the virtual switch device that is bound to
that physical device detects the link failure. Then, the virtual switch device propagates the
failure to the corresponding virtual network device that is bound to this virtual switch. The
virtual network device sends notification of this link event to the IP layer in the guest LDom_A,
which results in failover to the other virtual network device in the IPMP group.
FIGURE 12-7 Two Virtual Networks Connected to Separate Virtual Switch Instances (Oracle Solaris 11)
Figure 12-8 shows that you can achieve further reliability in the logical domain by connecting
each virtual network device (vnet0 and vnet1) to virtual switch instances in different service
domains. In this case, in addition to physical network failure, LDom_A can detect virtual network
failure and trigger a failover following a service domain crash or shutdown.
FIGURE 12-8 Virtual Network Devices Each Connected to Different Service Domains (Oracle Solaris 11)
For more information, see Establishing an Oracle Solaris Network in the Oracle Solaris 11.3
Information Library.
If a physical link failure occurs in the service domain, the virtual switch device that is bound to
that physical device detects the link failure. Then, the virtual switch device propagates the
failure to the corresponding virtual network device that is bound to this virtual switch. The
virtual network device sends notification of this link event to the IP layer in the guest LDom_A,
which results in failover to the other virtual network device in the IPMP group.
FIGURE 12-9 Two Virtual Networks Connected to Separate Virtual Switch Instances (Oracle Solaris 10)
Figure 12-10 shows that you can achieve further reliability in the logical domain by connecting
each virtual network device (vnet0 and vnet1) to virtual switch instances in different service
domains. In this case, in addition to physical network failure, LDom_A can detect virtual network
failure and trigger a failover following a service domain crash or shutdown.
FIGURE 12-10 Virtual Network Devices Each Connected to Different Service Domains (Oracle Solaris 10)
FIGURE 12-11 Two Physical NICs Configured as Part of an IPMP Group (Oracle Solaris 11)
FIGURE 12-12 Two Virtual Switch Interfaces Configured as Part of an IPMP Group (Oracle Solaris 10)
Sometimes detecting physical network link state changes might be necessary. For instance, if a
physical device has been assigned to a virtual switch, even if the link from a virtual network
device to its virtual switch device is up, the physical network link from the service domain to the
external network might be down. In such a case, you might need to obtain and report the
physical link status to the virtual network device and its stack.
You can use the linkprop=phys-state option to configure physical link state tracking for
virtual network devices as well as for virtual switch devices. When this option is enabled, the
virtual device (virtual network or virtual switch) reports its link state based on the physical link
state while it is created as an interface in the domain. You can use standard Oracle Solaris
network administration commands such as dladm and ifconfig to check the link status. In
addition, the link status is also logged in the /var/adm/messages file.
For Oracle Solaris 10, see the dladm(1M) and ifconfig(1M) man pages. For Oracle Solaris 11,
see the dladm(1M), ipadm(1M), and ipmpstat(1M) man pages.
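For example, on an Oracle Solaris 11 guest domain, a quick check of the reported link state
might look as follows; the link name and the output are illustrative:
ldg1# dladm show-link net0
LINK     CLASS   MTU    STATE   OVER
net0     phys    1500   up      --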
Note You can run both link-state-unaware and link-state-aware vnet and vsw drivers
concurrently on an Oracle VM Server for SPARC system. However, if you intend to configure
link-based IPMP, you must install the link-state-aware driver. If you intend to enable physical
link state updates, upgrade both the vnet and vsw drivers to the Oracle Solaris 10 1/13 OS, and
run at least version 1.3 of the Logical Domains Manager.
You can also enable physical link status updates for a virtual switch device by following similar
steps and specifying the linkprop=phys-state option to the ldm add-vsw and ldm set-vsw
commands.
Note You need to use the linkprop=phys-state option only if the virtual switch device itself is
created as an interface. If linkprop=phys-state is specified and the physical link is down, the
virtual network device reports its link status as down, even if the connection to the virtual
switch is up. This situation occurs because the Oracle Solaris OS does not currently provide
interfaces to report two distinct link states, such as virtual-link-state and physical-link-state.
1 Become an administrator.
For Oracle Solaris 11.3, see Chapter 1, About Using Rights to Control Users and Processes, in
Securing Users and Processes in Oracle Solaris 11.3.
Note If linkprop=phys-state is specified and the physical link is down (even if the
connection to the virtual switch is up), the virtual network device reports its link status as
down. This situation occurs because the Oracle Solaris OS does not currently provide
interfaces to report two distinct link states, such as virtual-link-state and physical-link-state.
primary# ldm add-vnet linkprop=phys-state if-name vswitch-name domain-name
The following example enables physical link status updates for ldom1_vnet0 connected to
primary-vsw0 on the logical domain ldom1:
primary# ldm add-vnet linkprop=phys-state ldom1_vnet0 primary-vsw0 ldom1
Note Test addresses are not configured on these virtual network devices. Also, you do not
need to perform additional configuration when you use the ldm add-vnet command to
create these virtual network devices.
The following commands add the virtual network devices to the domain. Note that because
linkprop=phys-state is not specified, only the link to the virtual switch is monitored for
state changes.
primary# ldm add-vnet ldom1_vnet0 primary-vsw0 ldom1
primary# ldm add-vnet ldom1_vnet1 primary-vsw1 ldom1
The following commands configure the virtual network devices on the guest domain and
assign them to an IPMP group. Note that test addresses are not configured on these virtual
network devices because link-based failure detection is being used.
Oracle Solaris 10 OS: Use the ifconfig command.
# ifconfig vnet0 plumb
# ifconfig vnet1 plumb
# ifconfig vnet0 group ipmp0
# ifconfig vnet1 group ipmp0
The last two commands assign vnet0 and vnet1 to the ipmp0 IPMP group. Then, configure the
ipmp0 interface with the IP address, as appropriate.
Note The virtual switch must have a physical network device assigned for the domain to
successfully bind. If the domain is already bound and the virtual switch does not have a
physical network device assigned, the ldm add-vnet commands will fail.
The following commands create the virtual network devices and assign them to an IPMP
group:
Oracle Solaris 10 OS: Use the ifconfig command.
# ifconfig vnet0 plumb
# ifconfig vnet1 plumb
# ifconfig vnet0 192.168.1.1/24 up
# ifconfig vnet1 192.168.1.2/24 up
# ifconfig vnet0 group ipmp0
# ifconfig vnet1 group ipmp0
Oracle Solaris 11 OS: Use the ipadm command.
Note that net0 and net1 are the Oracle Solaris 11 generic link names for vnet0 and vnet1, respectively.
# ipadm create-ip net0
# ipadm create-ip net1
# ipadm create-ipmp ipmp0
# ipadm add-ipmp -i net0 -i net1 ipmp0
# ipadm create-addr -T static -a 192.168.1.1/24 ipmp0/v4addr1
# ipadm create-addr -T static -a 192.168.1.2/24 ipmp0/v4addr2
The virtual switch (vsw) and virtual network (vnet) devices support switching of Ethernet
packets based on the virtual local area network (VLAN) identifier (ID) and handle the necessary
tagging or untagging of Ethernet frames.
You can create multiple VLAN interfaces over a virtual network device in a guest domain. Use
the Oracle Solaris 10 ifconfig command or the Oracle Solaris 11 dladm and ipadm commands
to create a VLAN interface over a virtual network device. The creation method is the same as
the method used to configure a VLAN interface over any other physical network device. The
additional requirement in the Oracle VM Server for SPARC environment is that you must use
the ldm command to assign VLANs to a vsw or vnet virtual network device. See the ldm(1M)
man page.
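For example, a minimal Oracle Solaris 11 sketch that creates a VLAN interface over a virtual
network device; the link name, VLAN ID, and address are illustrative, and the VLAN ID must
also be assigned to the virtual network device by using the ldm command:
ldg1# dladm create-vlan -l net0 -v 20 vlan20
ldg1# ipadm create-ip vlan20
ldg1# ipadm create-addr -T static -a 192.168.20.10/24 vlan20/v4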
Similarly, you can configure VLAN interfaces over a virtual switch device in an Oracle Solaris
10 service domain. VLAN IDs 2 through 4094 are valid. VLAN ID 1 is reserved as the
default-vlan-id. You do not have to configure VLAN IDs on a virtual switch in an Oracle
Solaris 11 service domain.
When you create a virtual network device on a guest domain, you must assign it to the required
VLANs by specifying a port VLAN ID and zero or more VLAN IDs for this virtual network
using the pvid= and vid= arguments to the ldm add-vnet command. This information
configures the virtual switch to support multiple VLANs in the Oracle VM Server for SPARC
network and switch packets using both MAC address and VLAN IDs in the network.
Any VLANs to be used by an Oracle Solaris 10 service domain must be configured on the vsw
device. Use the ldm add-vsw or ldm set-vsw command to specify the pvid and vid property
values.
You can change the VLANs to which a device belongs by using the ldm set-vnet or ldm
set-vsw command.
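For example, the following sketch creates a virtual network device with port VLAN 10 and
tagged VLANs 20 and 21, and then adds VLAN 22; all names and IDs are illustrative:
primary# ldm add-vnet pvid=10 vid=20,21 vnet0 primary-vsw0 ldg1
primary# ldm set-vnet vid=20,21,22 vnet0 ldg1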
Port VLAN ID
The Port VLAN ID (PVID) specifies the VLAN of which the virtual network device must be a
member in untagged mode. In this case, the vsw device provides the necessary tagging or
untagging of frames for the vnet device over the VLAN specified by its PVID. Any outbound
frames from the virtual network that are untagged are tagged with its PVID by the virtual
switch. Inbound frames tagged with this PVID are untagged by the virtual switch before they
are sent to the vnet device. Thus, assigning a PVID to a virtual network implicitly means that
the corresponding virtual network port on the virtual switch is marked untagged for the VLAN
specified by the PVID. You can have only one PVID for a virtual network device.
The corresponding virtual network interface, when configured without a VLAN ID and using
only its device instance, results in the interface being implicitly assigned to the VLAN specified
by the virtual network's PVID.
For example, if you create virtual network instance 0 with the pvid= argument set to 10 and
then configure the corresponding interface using only its device instance, the vnet0 interface is
implicitly assigned to VLAN 10. Note that the vnet0 interface name pertains to Oracle
Solaris 10. For Oracle Solaris 11, use the generic name instead, such as net0.
VLAN ID
The VLAN ID (VID) specifies the VLAN of which a virtual network device or virtual switch must
be a member in tagged mode. The virtual network device sends and receives tagged frames over
the VLANs specified by its VIDs. The virtual switch passes any frames that are tagged with the
specified VID between the virtual network device and the external network.
For more information about using the Oracle Solaris JumpStart feature to install a guest
domain, see How to Use the Oracle Solaris JumpStart Feature on an Oracle Solaris 10 Guest
Domain on page 64.
2 After the installation is complete and the Oracle Solaris OS boots, configure the virtual network
in tagged mode.
primary# ldm set-vnet pvid= vid=21,22,23 vnet01 ldom1
You can now add the virtual network device to additional VLANs in tagged mode.
When two virtual networks use the same VLAN ID on a physical link, all broadcast traffic is
passed between the two virtual networks. However, when you create virtual networks that use
PVLAN properties, the packet-forwarding behavior might not apply to all situations.
The following table shows the broadcast packet-forwarding rules for isolated and community
PVLANs.
PVLAN Type    Isolated    Community A    Community B
Isolated      No          No             No
Community A   No          Yes            No
Community B   No          No             Yes
For example, when both the vnet0 and vnet1 virtual networks are isolated on the net0
network, net0 does not pass broadcast traffic between the two virtual networks. However, when
the net0 network receives traffic from an isolated VLAN, the traffic is not passed to the isolated
ports that are related to the VLAN. This situation occurs because the isolated virtual network
accepts only traffic from the primary VLAN.
Note If a target service domain does not support the PVLAN feature, the migration of a guest
domain that is configured for PVLAN might fail.
PVLAN Requirements
You can configure PVLANs by using the ldm add-vnet and ldm set-vnet commands. Use
these commands to set the pvlan property. Note that you must also specify the pvid property to
successfully configure the PVLAN.
This feature requires at least the Oracle Solaris 11.2 SRU 4 OS.
Caution The Logical Domains Manager can validate only the configuration of the virtual
networks on a particular virtual switch. If a PVLAN configuration is set up for Oracle
Solaris VNICs on the same back-end device, ensure that the same requirements are met
across all VNICs and virtual networks.
PVLAN type. You specify this information as the pvlan-type part of the pvlan value.
pvlan-type is one of the following values:
isolated. The ports that are associated with an isolated PVLAN are isolated from all of
the peer virtual networks and Oracle Solaris virtual NICs on the back-end network
device. The packets reach only the external network based on the values you specified for
the PVLAN.
community. The ports that are associated with a community PVLAN can communicate
with other ports that are in the same community PVLAN but are isolated from all other
ports. The packets reach the external network based on the values you specified for the
PVLAN.
Configuring PVLANs
This section includes tasks that describe how to create PVLANs and list information about
PVLANs.
Creating a PVLAN
You can configure a PVLAN by setting the pvlan property value by using the ldm add-vnet or
ldm set-vnet command. See the ldm(1M) man page.
The following command shows how to create a virtual network with a PVLAN that has a
primary vlan-id of 4, a secondary vlan-id of 200, and a pvlan-type of isolated.
primary# ldm add-vnet pvid=4 pvlan=200,isolated vnet1 primary-vsw0 ldg1
Use ldm set-vnet to create a PVLAN:
ldm set-vnet pvid=port-VLAN-ID pvlan=secondary-vid,pvlan-type if-name domain-name
The following command shows how to create a virtual network with a PVLAN that has a
primary vlan-id of 3, a secondary vlan-id of 300, and a pvlan-type of community.
primary# ldm add-vnet pvid=3 pvlan=300,community vnet1 primary-vsw0 ldg1
The following command removes the PVLAN configuration for the vnet0 virtual network.
The result of this command is that the specified virtual network is a regular VLAN that uses
the vlan-id that you specified when you configured the PVLAN.
primary# ldm set-vnet pvlan= vnet0 primary-vsw0 ldg1
The following examples show information about PVLAN configuration on the ldg1
domain by using the ldm list-domain -o network command.
The following ldm list-domain command shows information about the PVLAN
configuration on the ldg1 domain.
primary# ldm list-domain -o network ldg1
NAME
ldg1
MAC
00:14:4f:fa:bf:0f
NETWORK
NAME SERVICE ID DEVICE MAC
vnet0 primary-vsw0@primary 0 network@0 00:14:4f:f8:03:ed
MODE PVID VID MTU MAXBW LINKPROP
1 3 1500 1700
PVLAN : 200,community
The following ldm list-domain command shows PVLAN configuration information in
a parseable form for the ldg1 domain.
primary# ldm list-domain -o network -p ldg1
VERSION 1.13
DOMAIN|name=ldg1|
MAC|mac-addr=00:14:4f:fa:bf:0f
VNET|name=vnet0|dev=network@0|service=primary-vsw0@primary
|mac-addr=00:14:4f:f8:03:ed|mode=|pvid=1|vid=3|mtu=1500|linkprop=|id=0
|alt-mac-addrs=|maxbw=1700|protect=|priority=|cos=|pvlan=200,community
Use ldm list-bindings to list PVLAN information:
ldm list-bindings [-e] [-p] [domain-name...]
The following examples show information about PVLAN configuration on the ldg1
domain by using the ldm list-bindings command.
The following ldm list-bindings command shows information about the PVLAN
configuration on the ldg1 domain.
primary# ldm list-bindings
...
NETWORK
NAME SERVICE ID DEVICE MAC
vnet0 primary-vsw0@primary 0 network@0 00:14:4f:f8:03:ed
MODE PVID VID MTU MAXBW LINKPROP
1 3 1500 1700
PVLAN :200,community
PEER MAC MODE PVID VID MTU MAXBW LINKPROP
primary-vsw0@primary 00:14:4f:f8:fe:5e 1
The following ldm list-bindings command shows PVLAN configuration information
in a parseable form for the ldg1 domain.
primary# ldm list-bindings -p
...
VNET|name=vnet0|dev=network@0|service=primary-vsw0@primary
|mac-addr=00:14:4f:f8:03:ed|mode=|pvid=1|vid=3|mtu=1500|linkprop=
|id=0|alt-mac-addrs=|maxbw=1700|protect=|priority=|cos=|pvlan=200,community
|peer=primary-vsw0@primary|mac-addr=00:14:4f:f8:fe:5e|mode=|pvid=1|vid=
|mtu=1500|maxbw=
Use ldm list-constraints to list PVLAN information:
ldm list-constraints [-x] [domain-name...]
The following shows the output generated by running the ldm list-constraints
command:
primary# ldm list-constraints -x ldg1
...
<Section xsi:type="ovf:VirtualHardwareSection_Type">
<Item>
<rasd:OtherResourceType>network</rasd:OtherResourceType>
<rasd:Address>auto-allocated</rasd:Address>
<gprop:GenericProperty key="vnet_name">vnet0</gprop:GenericProperty>
<gprop:GenericProperty key="service_name">primary-vsw0</gprop:GenericProperty>
<gprop:GenericProperty key="pvid">1</gprop:GenericProperty>
<gprop:GenericProperty key="vid">3</gprop:GenericProperty>
<gprop:GenericProperty key="pvlan">200,community</gprop:GenericProperty>
<gprop:GenericProperty key="maxbw">1700000000</gprop:GenericProperty>
<gprop:GenericProperty key="device">network@0</gprop:GenericProperty>
<gprop:GenericProperty key="id">0</gprop:GenericProperty>
</Item>
For information about valid and default property values, see the ldm(1M) man page.
The ldm list -o network command shows the data link property values on the ldg3 domain
that you set with the previous ldm set-vnet command. The protection values are
mac-nospoof, restricted, ip-nospoof for the IP addresses 192.168.100.1 and 192.168.100.2,
and dhcp-nospoof for
[email protected],00:14:4f:f9:d3:88,system2,00:14:4f:fb:61:6e. The priority is
set to high and the class of service (cos) is set to 7.
MAC
00:14:4f:fa:3b:49
VSW
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID
PVID VID MTU MODE INTER-VNET-LINK
ldg3_vsw1 00:14:4f:f9:65:b5 net3 0 switch@0 1 1
1500 on
NETWORK
NAME SERVICE ID DEVICE MAC MODE PVID VID
MTU MAXBW LINKPROP
vnet3 primary-vsw0@primary 0 network@0 00:14:4f:fa:47:f4 1
1500
vnet1_ldg3 primary-vsw1@primary 1 network@1 00:14:4f:fb:90:b7 1
1500
vnet2_ldg3 ldg1_vsw1@ldg1 2 network@2 00:14:4f:fb:61:6e 1
1500
vnet3_ldg3 ldg1_vsw1@ldg1 3 network@3 00:14:4f:fb:25:70 1
00:14:4f:f8:7a:8d
00:14:4f:fb:6e:32
00:14:4f:fb:1a:2f
1500 1000
PROTECTION: mac-nospoof
restricted
ip-nospoof/192.168.100.1,192.168.100.2
dhcp-nospoof/[email protected],
00:14:4f:f9:d3:88,system2,00:14:4f:fb:61:6e
PRIORITY : high
COS : 7
The hybrid I/O architecture is well-suited for the Network Interface Unit (NIU) on Oracle Sun
UltraSPARC T2, SPARC T3, and SPARC T4 platforms. An NIU is a network I/O interface that is
integrated on the chip. This architecture enables the dynamic assignment of Direct Memory
Access (DMA) resources to virtual networking devices and, thereby, provides consistent
performance to applications in the domain.
Note The NIU Hybrid I/O feature is deprecated in favor of SR-IOV. Oracle VM Server for
SPARC 3.3 is the last software release to include this feature.
NIU hybrid I/O is available for Oracle Sun UltraSPARC T2, SPARC T3, and SPARC T4
platforms. This feature is enabled by an optional hybrid mode that provides for a virtual
network (vnet) device where the DMA hardware resources are loaned to a vnet device in a
guest domain for improved performance. In the hybrid mode, a vnet device in a guest domain
can send and receive unicast traffic from an external network directly into the guest domain
using the DMA hardware resources. The broadcast or multicast traffic and unicast traffic to the
other guest domains in the same system continue to be sent using the virtual I/O
communication mechanism.
Figure 12-13 and Figure 12-14 show hybrid I/O configurations for Oracle Solaris 11 and Oracle
Solaris 10, respectively.
The hybrid mode applies only for the vnet devices that are associated with a virtual switch (vsw)
configured to use an NIU network device. Because the shareable DMA hardware resources are
limited, at most three vnet devices per vsw can have DMA hardware resources assigned at a
given time. If more than three vnet devices have the hybrid mode enabled, the assignment is
done on a first-come, first-served basis. Because there are two NIU network devices in a system,
there can be a total of six vnet devices on two different virtual switches with DMA hardware
resources assigned.
Note Set the hybrid mode only for three vnet devices per vsw so that they are guaranteed to
have DMA hardware resources assigned.
The hybrid mode option cannot be changed dynamically while the guest domain is active.
The DMA hardware resources are assigned only when an active vnet device is created in the
guest domain.
The NIU 10-gigabit Ethernet driver (nxge) is used for the NIU card. The same driver is also
used for other 10-gigabit network cards. However, the NIU hybrid I/O feature is available
for NIU network devices only.
2 Oracle Solaris 11 OS only: Identify the link name that corresponds to the NIU network device,
such as nxge0.
primary# dladm show-phys -L | grep nxge0
net2 nxge0 /SYS/MB
After you create a link aggregation, you can assign it to the virtual switch. Making this
assignment is similar to assigning a physical network device to a virtual switch. Use the ldm
add-vswitch or ldm set-vswitch command to set the net-dev property.
When the link aggregation is assigned to the virtual switch, traffic to and from the physical
network flows through the aggregation. Any necessary load balancing or failover is handled
transparently by the underlying aggregation framework. Link aggregation is completely
transparent to the virtual network (vnet) devices that are on the guest domains and that are
bound to a virtual switch that uses an aggregation.
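For example, a sketch that creates an aggregation over two Oracle Solaris 11 links and assigns
it to an existing virtual switch; the link, aggregation, and switch names are illustrative:
primary# dladm create-aggr -l net0 -l net1 aggr1
primary# ldm set-vsw net-dev=aggr1 primary-vsw0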
Note You cannot group the virtual network devices (vnet and vsw) into a link aggregation.
You can create and use the virtual switch that is configured to use a link aggregation in the
service domain. See How to Configure the Virtual Switch as the Primary Interface on page 52.
Figure 12-15 and Figure 12-16 show a virtual switch configured to use an aggregation, aggr1,
over physical interfaces net0 and net1, and nxge0 and nxge1, respectively.
FIGURE 12-15 Configuring a Virtual Switch to Use a Link Aggregation (Oracle Solaris 11)
FIGURE 12-16 Configuring a Virtual Switch to Use a Link Aggregation (Oracle Solaris 10)
You enable jumbo frames by specifying the maximum transmission unit (MTU) for the virtual
switch device. In such cases, the virtual switch device and all virtual network devices that are
bound to the virtual switch device use the specified MTU value.
If the required MTU value for a virtual network device is less than the MTU supported by the
virtual switch, you can specify an MTU value directly on that virtual network device.
Note On the Oracle Solaris 10 5/09 OS only, the MTU of a physical device must be configured
to match the MTU of the virtual switch. For information about configuring particular drivers,
see the man page that corresponds to that driver in Section 7D of the Oracle Solaris reference
manual. For example, to obtain information about the Oracle Solaris 10 nxge driver, see the
nxge(7D) man page.
In rare circumstances, you might need to use the ldm add-vnet or ldm set-vnet command to
specify an MTU value for a virtual network device that differs from the MTU value of the virtual
switch. For example, you might change the virtual network device's MTU value if you configure
VLANs over a virtual network device and the largest VLAN MTU is less than the MTU value on
the virtual switch. A vnet driver that supports jumbo frames might not be required for domains
where only the default MTU value is used. However, if the domains have virtual network
devices bound to a virtual switch that uses jumbo frames, ensure that the vnet driver supports
jumbo frames.
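For example, a sketch that caps the MTU of one virtual network device below that of its virtual
switch; the names and the value are illustrative, and note the reconfiguration requirements
described later in this section:
primary# ldm set-vnet mtu=1496 vnet01 ldg1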
If you use the ldm set-vnet command to specify an mtu value on a virtual network device,
future updates to the MTU value of the virtual switch device are not propagated to that virtual
network device. To re-enable the virtual network device to obtain the MTU value from the
virtual switch device, run the following command:
Note that enabling jumbo frames for a virtual network device automatically enables jumbo
frames for any hybrid I/O resource that is assigned to that virtual network device.
On the control domain, the Logical Domains Manager updates the MTU values that are
initiated by the ldm set-vsw and ldm set-vnet commands as delayed reconfiguration
operations. To make MTU updates to domains other than the control domain, you must stop a
domain prior to running the ldm set-vsw or ldm set-vnet command to modify the MTU
value.
2 Become an administrator.
For Oracle Solaris 11.3, see Chapter 1, About Using Rights to Control Users and Processes, in
Securing Users and Processes in Oracle Solaris 11.3.
3 Determine the value of MTU that you want to use for the virtual network.
You can specify an MTU value from 1500 to 16000 bytes. The specified MTU must match the
MTU of the physical network device that is assigned to the virtual switch.
4 Specify the MTU value of a virtual switch device or virtual network device.
Do one of the following:
Enable jumbo frames on a new virtual switch device in the service domain by specifying its
MTU as a value of the mtu property.
primary# ldm add-vsw net-dev=device mtu=value vswitch-name ldom
In addition to configuring the virtual switch, this command updates the MTU value of each
virtual network device that will be bound to this virtual switch.
Enable jumbo frames on an existing virtual switch device in the service domain by
specifying its MTU as a value of the mtu property.
primary# ldm set-vsw net-dev=device mtu=value vswitch-name
In addition to configuring the virtual switch, this command updates the MTU value of each
virtual network device that will be bound to this virtual switch.
Example 12-10 Configuring Jumbo Frames on Virtual Switch and Virtual Network Devices
The following example shows how to add a new virtual switch device that uses an MTU
value of 9000. This MTU value is propagated from the virtual switch device to all of the
client virtual network devices.
First, the ldm add-vsw command creates the virtual switch device, ldg1-vsw0, with an MTU
value of 9000. Note that instance 0 of the network device net0 is specified as a value of the
net-dev property.
primary# ldm add-vsw net-dev=net0 mtu=9000 ldg1-vsw0 ldg1
Next, the ldm add-vnet command adds a client virtual network device to this virtual switch,
ldg1-vsw0. Note that the MTU of the virtual network device is implicitly assigned from the
virtual switch to which it is bound. As a result, the ldm add-vnet command does not require
that you specify a value for the mtu property.
primary# ldm add-vnet vnet01 ldg1-vsw0 ldg1
Depending on the version of the Oracle Solaris OS that is running, do the following:
Oracle Solaris 11 OS: Use the ipadm command to view the mtu property value of the
primary interface.
# ipadm show-ifprop -p mtu net0
IFNAME PROPERTY PROTO PERM CURRENT PERSISTENT DEFAULT POSSIBLE
net0 mtu ipv4 rw 9000 -- 9000 68-9000
The ipadm command creates the virtual network interface in the guest domain, ldg1.
The ipadm show-ifprop command output shows that the value of the mtu property is
9000.
ldg1# ipadm create-ip net0
ldg1# ipadm create-addr -T static -a 192.168.1.101/24 net0/ipv4
ldg1# ipadm show-ifprop -p mtu net0
IFNAME PROPERTY PROTO PERM CURRENT PERSISTENT DEFAULT POSSIBLE
net0 mtu ipv4 rw 9000 -- 9000 68-9000
Oracle Solaris 10 OS: The ifconfig command creates the virtual switch interface in the
service domain, ldg1. The ifconfig vsw0 command output shows that the value of the mtu
property is 9000.
The ifconfig command creates the virtual network interface in the guest domain, ldg1.
The ifconfig vnet0 command output shows that the value of the mtu property is 9000.
ldg1# ifconfig vnet0 plumb
ldg1# ifconfig vnet0 192.168.1.101/24 up
ldg1# ifconfig vnet0
vnet0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 9000 index 4
inet 192.168.1.101 netmask ffffff00 broadcast 192.168.1.255
ether 0:14:4f:f9:c4:13
The following example shows how to change the MTU of the interface to 4000.
Note that the MTU of an interface can only be changed to a value that is less than the MTU
of the device that is assigned by the Logical Domains Manager. This method is useful when
VLANs are configured and each VLAN interface requires a different MTU.
Oracle Solaris 11 OS: Use the ipadm command.
primary# ipadm set-ifprop -p mtu=4000 net0
primary# ipadm show-ifprop -p mtu net0
IFNAME PROPERTY PROTO PERM CURRENT PERSISTENT DEFAULT POSSIBLE
net0 mtu ipv4 rw 4000 -- 9000 68-9000
Oracle Solaris 10 OS: Use the ifconfig command.
primary# ifconfig vnet0 mtu 4000
primary# ifconfig vnet0
vnet0: flags=1201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS,FIXEDMTU>
mtu 4000 index 4
inet 192.168.1.101 netmask ffffff00 broadcast 192.168.1.255
ether 0:14:4f:f9:c4:13
Drivers that support jumbo frames can interoperate with drivers that do not support jumbo
frames on the same system. This interoperability is possible as long as jumbo frame support is
not enabled when you create the virtual switch.
Note Do not set the mtu property if any guest or service domains that are associated with the
virtual switch do not use Oracle VM Server for SPARC drivers that support jumbo frames.
Jumbo frames can be enabled by changing the mtu property of a virtual switch from the default
value of 1500. In this instance, older driver versions ignore the mtu setting and continue to use
the default value. Note that the ldm list output will show the MTU value you specified and not
the default value. Any frames larger than the default MTU are not sent to those devices and are
dropped by the new drivers. This situation might result in inconsistent network behavior with
those guests that still use the older drivers. This limitation applies to both client guest domains
and the service domain.
Therefore, while jumbo frames are enabled, ensure that all virtual devices in the Oracle VM
Server for SPARC network are upgraded to use the new drivers that support jumbo frames. You
must be running at least Logical Domains 1.2 to configure jumbo frames.
Using an Oracle Solaris 11 VNIC to create a VLAN on an Ethernet stub
Do not configure VLANs on the virtual switch interface for Oracle Solaris 11 service
domains because this configuration is not supported. Instead, create the VLAN on the
interface that corresponds to the virtual switch's net-dev property value.
For Oracle Solaris 10, you can set the net-dev property with a null value to create a routed
virtual switch. However, this method is not supported for Oracle Solaris 11. Instead,
configure the VNIC over the Ethernet stub device to be part of the VLAN.
The following example shows how to create VNICs on an Ethernet stub. The dladm
create-etherstub command creates an Ethernet stub, estub100, which the ldm add-vsw
command then uses as the back-end device to create the virtual switch. The dladm
create-vnic command creates a VNIC on top of the etherstub to create the VLAN for that
virtual switch.
primary# dladm create-etherstub estub100
primary# ldm add-vsw net-dev=estub100 vid=100 inter-vnet-link=off \
primary-vsw100 primary
primary# dladm create-vnic -l estub100 -m auto -v 100 vnic100
The following ldm add-vnet commands create two VNICs that enable communication
between the ldg1 and ldg2 domains over VLAN 100.
primary# ldm add-vnet vid=100 ldg1-vnet100 primary-vsw100 ldg1
primary# ldm add-vnet vid=100 ldg2-vnet100 primary-vsw100 ldg2
In the following example, the dladm commands create VLANs on the ldg1 and ldg2 guest
domains. The ipadm commands create IP addresses for the VNICs that you created on the
ldg1 and ldg2 domains.
ldg1# dladm create-vlan -l net1 -v 100 vlan100
ldg1# ipadm create-ip vlan100
ldg1# ipadm create-addr -T static -a 192.168.100.10/24 vlan100/v4
ldg2# dladm create-vlan -l net1 -v 100 vlan100
ldg2# ipadm create-ip vlan100
ldg2# ipadm create-addr -T static -a 192.168.100.20/24 vlan100/v4
Using generic names for the virtual switch and virtual network devices
The Oracle Solaris 11 OS assigns generic names for vswn and vnetn devices. Ensure that you
do not create a virtual switch with the back-end device that is another vsw or vnet device.
Use the dladm show-phys command to see the actual physical devices that are associated
with generic network device names.
Using VNICs on the virtual switch and virtual network devices
You cannot use VNICs on vswn devices. An attempt to create a VNIC on vswn fails. See
Oracle Solaris 11: Zones Configured With an Automatic Network Interface Might Fail to
Start in Oracle VM Server for SPARC 3.3 Release Notes.
Using the network observability commands on Oracle Solaris 11 guest domains
You can use the ldm list-netdev and ldm list-netstat commands to obtain
information about Oracle Solaris 11 guest domains.
Oracle Solaris 11 improves on the Oracle Solaris 10 shared IP zone model in which zones
inherit network properties from the global zone and cannot set their own network address or
other properties. Now, by using zones with virtual network devices, you can configure multiple
isolated virtual NICs, associate zones with each virtual network, and establish rules for
isolation, connectivity, and quality of service (QoS).
For more information, see the networking books in the Oracle Solaris 11.3 information library
(http://docs.oracle.com/cd/E53394_01/).
A virtual network device in a logical domain can support multiple Oracle Solaris 11 virtual
NICs. The virtual network device must be configured to support multiple MAC addresses, one
for each virtual NIC it will support. Oracle Solaris zones in the logical domain connect to the
virtual NICs.
Figure 12-17 shows a logical domain, domain1, that provides a single virtual network device
called vnet1 to the Oracle Solaris OS. This virtual network device can host multiple Oracle
Solaris 11 virtual network devices, each of which has its own MAC address and can be assigned
individually to a zone.
Within the domain1 domain are three Oracle Solaris 11 zones: zone1, zone2, and zone3. Each
zone is connected to the network by a virtual NIC based on the vnet1 virtual network device.
The following sections describe how to configure virtual NICs on virtual network devices and
how to create zones in the domain with the virtual NICs:
Configuring Virtual NICs on Virtual Network Devices on page 280
Creating Oracle Solaris 11 Zones in a Domain on page 281
For information about using virtual NICs on Ethernet SR-IOV virtual functions, see the
following sections:
Creating Ethernet Virtual Functions on page 90
Modifying Ethernet SR-IOV Virtual Functions on page 97
Creating VNICs on SR-IOV Virtual Functions on page 103
To configure a virtual network device to host multiple MAC addresses, use the ldm add-vnet or
ldm set-vnet command to specify one or more comma-separated values for the
alt-mac-addrs property. Valid values are an octet MAC address and auto. The auto value
indicates that the system generates the MAC address.
For example, you can specify three system-generated alternate MAC addresses for a virtual
network device in either of the following ways:
By using the ldm add-vnet command. The following ldm add-vnet command creates the
vnet1 virtual network device on the domain1 domain and makes three system-generated
MAC addresses available to the device.
primary# ldm add-vnet alt-mac-addrs=auto,auto,auto vnet1 primary-vsw0 domain1
By using a combination of the ldm add-vnet and ldm set-vnet commands. The following
ldm add-vnet and ldm set-vnet commands show how to create a virtual network device
and subsequently assign more MAC addresses to the existing virtual network device.
The first command uses the ldm add-vnet command to create the vnet1 virtual network
device on the domain1 domain. The second command uses the ldm set-vnet command to
make three system-generated MAC addresses available to the vnet1 virtual network device.
primary# ldm add-vnet vnet1 primary-vsw0 domain1
primary# ldm set-vnet alt-mac-addrs=auto,auto,auto vnet1 domain1
Use the zonecfg command to specify a MAC address to use for a zone. You can either specify a
value of auto to choose one of the available MAC addresses automatically or provide a specific
alternate MAC address that you created with the ldm set-vnet command.
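For example, a minimal zonecfg sketch that adds an automatic network resource backed by the
virtual network device's generic link name; the zone name and link name are illustrative:
domain1# zonecfg -z zone1
zonecfg:zone1> add anet
zonecfg:zone1:anet> set lower-link=net0
zonecfg:zone1:anet> set mac-address=auto
zonecfg:zone1:anet> end
zonecfg:zone1> exit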
Migrating Domains
This chapter describes how to migrate domains from one host machine to another host
machine.
Note To use the migration features described in this chapter, you must be running the most
recent versions of the Logical Domains Manager, system firmware, and Oracle Solaris OS. For
information about migration using previous versions of Oracle VM Server for SPARC, see
Oracle VM Server for SPARC 3.3 Release Notes and related versions of the administration guide.
Introduction to Domain Migration
While a migration operation is in progress, the domain to be migrated is transferred from the
source machine to the migrated domain on the target machine.
The live migration feature provides performance improvements that enable an active domain to
be migrated while it continues to run. In addition to live migration, you can migrate bound or
inactive domains, which is called cold migration.
You might use domain migration to perform tasks such as the following:
Balancing the load between machines
Performing hardware maintenance while a guest domain continues to run
To achieve the best migration performance, ensure that both the source machine and the target
machine are running the latest version of the Logical Domains Manager.
Phase 1: After the source machine connects with the Logical Domains Manager that runs in the
target machine, information about the source machine and the domain to be migrated are
transferred to the target machine. This information is used to perform a series of checks to
determine whether a migration is possible. The checks to perform are based on the state of the
domain to be migrated. For example, if the domain to be migrated is active, a different set of
checks are performed than if that domain is bound or inactive.
Phase 2: When all checks in Phase 1 have passed, the source and target machines prepare for the
migration. On the target machine, a domain is created to receive the domain to be migrated. If
the domain to be migrated is inactive or bound, the migration operation proceeds to Phase 5.
Phase 3: If the domain to be migrated is active, its runtime state information is transferred to
the target machine. The domain to be migrated continues to run, and the Logical Domains
Manager simultaneously tracks the modifications being made by the OS to this domain. This
information is retrieved from the hypervisor on the source machine and installed in the
hypervisor on the target machine.
Phase 4: The domain to be migrated is suspended. At this time, all of the remaining modified
state information is recopied to the target machine. In this way, there is little or no perceivable
interruption to the domain. The amount of interruption depends on the workload.
Phase 5: A handoff occurs from the Logical Domains Manager on the source machine to the
Logical Domains Manager on the target machine. The handoff occurs when the migrated
domain resumes execution (if the domain to be migrated was active) and the domain on the
source machine is destroyed. From this point forward, the migrated domain is the sole version
of the domain running.
Software Compatibility
For a migration to occur, both the source and target machines must be running compatible
software, as follows:
The Logical Domains Manager version running on both machines must be either the
current version or the most recent previously released version.
Both the source and target machines must have a compatible version of firmware installed
to support live migration. Both machines must be running at least the minimum version of
the firmware supported with this release of the Oracle VM Server for SPARC software.
For more information, see Version Restrictions for Migration on page 289.
On platforms that have cryptographic units, the speed of the migration operation increases
when the primary domain on the source and target machines has cryptographic units
assigned. This increase in speed occurs because the SSL operations can be off-loaded to the
cryptographic units.
The speed of a migration operation is automatically improved on platforms that have
cryptographic instructions in the CPU. This improvement occurs because the SSL
operations can be carried out by the cryptographic instructions rather than in software.
FIPS 140-2. You can configure your system to perform domain migrations by using the Oracle
Solaris FIPS 140-2 certified OpenSSL libraries. See FIPS 140-2 Mode for Domain
Migration on page 287.
2 Securely copy the remote ldmd certificate to the local ldmd trusted certificate directory.
The remote ldmd certificate is the /var/share/ldomsmanager/server.crt file on the remote host.
The local ldmd trusted certificate directory is /var/share/ldomsmanager/trust. Call the
remote certificate file remote-hostname.pem.
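For example, a sketch that copies the certificate by using the scp command; the remote host
name is illustrative:
localhost# scp root@remote-host:/var/share/ldomsmanager/server.crt \
/var/share/ldomsmanager/trust/remote-host.pem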
3 Create a symbolic link from the certificate in the ldmd trusted certificate directory to
/etc/certs/CA.
Set the REMOTE variable to remote-host.
localhost# ln -s /var/share/ldomsmanager/trust/${REMOTE}.pem /etc/certs/CA/
FIPS 140-2 Mode for Domain Migration
To successfully start the Logical Domains Manager in FIPS 140-2 mode, you must enable the
FIPS mediator. For step-by-step instructions, see How to Run Logical Domains Manager in
FIPS 140-2 Mode on page 287.
For more information about how to use the FIPS 140-capable OpenSSL implementation,
see How to Switch to the FIPS 140-Capable OpenSSL Implementation in Managing
Encryption and Certificates in Oracle Solaris 11.3 and Example of Enabling Two Applications
in FIPS 140 Mode on an Oracle Solaris System in Oracle SuperCluster M7-8 Owner's Guide:
Overview.
Caution The OpenSSL implementation to which you switch must exist in the system. If you
switch to an implementation that is not in the system, the system might become unusable.
e. Reboot.
# reboot
2 Reboot.
# reboot
Domain Migration Restrictions
Live migration is not qualified and supported on all combinations of the source and target
platforms and system firmware versions. For those combinations that cannot perform a live
migration, you can perform a cold migration instead.
However, some specific platform and firmware combinations do not support live migration.
Attempting to perform the live migration of a domain from a system that runs at least system
firmware version 8.4 or XCP2210 to a system that runs an older system firmware version fails.
The failure occurs because of a hypervisor API mismatch between the newer and older system
firmware versions. In this instance, the following message is issued:
Note that you can perform the live migration of a domain from a system that runs system
firmware version 8.3 to a system that runs at least system firmware version 8.4 unless the target
machine is a SPARC M5-32 system. For more information, see Domain Migrations From
SPARC T4 Systems That Run System Firmware 8.3 to SPARC T5, SPARC M5, or SPARC M6
Systems Are Erroneously Permitted in Oracle VM Server for SPARC 3.3 Release Notes.
The following CPU requirements and restrictions apply when you run an OS prior to the Oracle
Solaris 10 1/13 OS:
Full cores must be allocated to the migrated domain. If the number of threads in the domain
to be migrated is less than a full core, the extra threads are unavailable to any domain until
after the migrated domain is rebooted.
After a migration, CPU dynamic reconfiguration (DR) is disabled for the migrated domain
until it has been rebooted. At that time, you can use CPU DR on the migrated domain.
The target machine must have enough free full cores to provide the number of threads that
are required for the migrated domain. After the migration, if a full core is only partially used
by the migrated domain, any extra threads are unavailable to any domain until after the
migrated domain is rebooted.
These restrictions also apply when you attempt to migrate a domain that is running in
OpenBoot or in the kernel debugger. See Migrating a Domain From the OpenBoot PROM or a
Domain That Is Running in the Kernel Debugger on page 298.
Before you perform the migration of a domain that has the perf-counters property value set to
global, ensure that no other domain on the target machine has the perf-counters property set
to global.
For more information about the perf-counters property, see Using Perf-Counter Properties
on page 339 and the ldm(1M) man page.
Migrating a Domain
You can use the ldm migrate-domain command to initiate the migration of a domain from one
host machine to another host machine.
Note If you migrate a domain, any named resources that you assigned by using the cid and
mblock properties are dropped. Instead, the domain uses anonymous resources on the target
system.
For information about migrating an active domain while it continues to run, see Migrating an
Active Domain on page 292. For information about migrating a bound or inactive domain, see
Migrating Bound or Inactive Domains on page 298.
For information about the migration options and operands, see the ldm(1M) man page.
Note After a domain migration completes, save a new configuration to the SP of both the
source and target systems. As a result, the state of the migrated domain is correct if either the
source or target system undergoes a power cycle.
Note Because of the dynamic nature of logical domains, a dry run could succeed and an actual
migration fail, and vice versa.
The file name you specify as an argument to the -p option must have the following
characteristics:
The first line of the file must contain the password.
The password must be plain text.
The password must not exceed 256 characters in length.
A newline character at the end of the password and all lines that follow the first line are ignored.
The file in which you store the target machine's password must be properly secured. If you plan
to store passwords in this manner, ensure that the file permissions are set so that only the root
owner or a privileged user can read or write the file (400 or 600).
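For example, a sketch that creates a suitably protected password file and then uses it for a
migration; the file, domain, and host names are illustrative:
primary# touch /var/tmp/tgt-passwd
primary# chmod 600 /var/tmp/tgt-passwd
(Edit the file so that the password appears on the first line.)
primary# ldm migrate-domain -p /var/tmp/tgt-passwd ldg1 target-host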
Tip You can reduce the overall migration time by adding more virtual CPUs to the primary
domain on both the source and target machines. Having at least two whole cores in each
primary domain is recommended but not required.
A domain loses time during the migration process. To mitigate this time-loss issue,
synchronize the domain to be migrated with an external time source, such as a Network Time
Protocol (NTP) server. When you configure a domain as an NTP client, the domain's date and
time are corrected shortly after the migration completes.
To configure a domain as an Oracle Solaris 10 NTP client, see Managing Network Time
Protocol (Tasks) in System Administration Guide: Network Services. To configure a domain as
an Oracle Solaris 11 NTP client, see Managing Network Time Protocol (Tasks) in
Introduction to Oracle Solaris 11 Network Services.
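For example, one possible sketch for an Oracle Solaris 11 guest domain, assuming that the
ntp.client template ships with the OS and that you edit the server entries to point to your
NTP server:
ldg1# cp /etc/inet/ntp.client /etc/inet/ntp.conf
ldg1# svcadm enable svc:/network/ntp:default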
Migrating an Active Domain
Note During the suspend phase at the end of a migration, a guest domain might experience a
slight delay. This delay should not result in any noticeable interruption to network
communications, especially if the protocol includes a retry mechanism such as TCP or if a retry
mechanism exists at the application level such as NFS over UDP. However, if the guest domain
runs a network-sensitive application such as Routing Information Protocol (RIP), the domain
might experience a short delay or interruption when attempting an operation. This delay would
occur during the short period when the guest network interface is being torn down and
re-created during the suspension phase.
The following isainfo -v commands show the instructions that are available on a system
when cpu-arch=generic and when cpu-arch=migration-class1.
cpu-arch=generic
# isainfo -v
64-bit sparcv9 applications
asi_blk_init vis2 vis popc
Using the generic value might result in reduced performance of the guest domain
compared to using the native value. The possible performance degradation occurs because
the guest domain uses only generic CPU features that are available on all supported CPU
types instead of using the native hardware features of a particular CPU. By not using these
features, the generic value enables the flexibility of migrating the domain between systems
that use CPUs that support different features.
Use the psrinfo -pv command when the cpu-arch property is set to native to determine
the processor type, as follows:
# psrinfo -pv
The physical processor has 2 virtual processors (0 1)
SPARC-T5 (chipid 0, clock 3600 MHz)
Note that when the cpu-arch property is set to a value other than native, the psrinfo -pv
output does not show the platform type. Instead, the command shows that the sun4v-cpu
CPU module is loaded.
# psrinfo -pv
The physical processor has 2 cores and 13 virtual processors (0-12)
The core has 8 virtual processors (0-7)
The core has 5 virtual processors (8-12)
sun4v-cpu (chipid 0, clock 3600 MHz)
Compatibility requirements differ for each SPARC platform. However, at a minimum the real
address and physical address alignment relative to the largest supported page size must be
preserved for each memory block in the migrated domain.
Use the pagesize command to determine the largest page size that is supported on the target
machine.
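For example, the following sketch shows illustrative output on a sun4v machine; the actual
page sizes vary by platform:
# pagesize -a
8192
65536
4194304
268435456
2147483648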
For a guest domain that runs at least the Oracle Solaris 11.3 OS, the migrated domain's memory
blocks might be automatically split up during the migration so that the migrated domain can fit
into smaller available free memory blocks. Memory blocks can only be split up on boundaries
aligned with the largest page size.
Other memory layout requirements for operating systems, firmware, or platforms might
prevent memory blocks from being split during a given migration. This situation could cause
the migration to fail even when the total amount of free memory available is sufficient for the
domain.
For more information, see Migration Requirements for PCIe Endpoint Devices on page 296
and Migration Requirements for PCIe SR-IOV Virtual Functions on page 296.
Caution A migration will succeed even if the paths to a virtual disk back end on the source
and target machines do not refer to the same storage. However, the behavior of the domain
on the target machine will be unpredictable, and the domain is likely to be unusable. To
remedy the situation, stop the domain, correct the configuration issue, and then restart the
domain. If you do not perform these steps, the domain might be left in an inconsistent state.
Each virtual network device in the domain to be migrated must have a corresponding virtual
network switch on the target machine. Each virtual network switch must have the same
name as the virtual network switch to which the device is attached on the source machine.
For example, if vnet0 in the domain to be migrated is attached to a virtual switch service
named switch-y, a domain on the target machine must provide a virtual switch service
named switch-y.
Note The physical network on the target machine must be correctly configured so that the
migrated domain can access the network resources it requires. Otherwise, some network
services might become unavailable on the domain after the migration completes.
For example, you might want to ensure that the domain can access the correct network
subnet. Also, you might want to ensure that gateways, routers, or firewalls are properly
configured so that the domain can reach the required remote systems from the target
machine.
MAC addresses used by the domain to be migrated that are in the automatically allocated
range must be available for use on the target machine.
A virtual console concentrator (vcc) service must exist on the target machine and have at
least one free port. Explicit console constraints are ignored during the migration. The
console for the migrated domain is created by using the migrated domain name as the
console group and by using any available port on any available vcc device in the control
domain. If no ports are available in the control domain, the console is created by using an
available port on an available vcc device in a service domain. The migration fails if
there is a conflict with the default group name.
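For example, the following command creates a vcc service on the target machine's control
domain; the port range and service name are illustrative:
primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary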
Each virtual SAN that is used by the domain to be migrated must be defined on the target
machine.
For information about the SR-IOV feature, see Chapter 7, Creating an I/O Domain by Using
PCIe SR-IOV Virtual Functions.
Note that the NIU Hybrid I/O feature is deprecated in favor of SR-IOV. Oracle VM Server for
SPARC 3.3 is the last software release to include this feature.
At the start of the migration, the Logical Domains Manager determines whether the domain to
be migrated supports cryptographic unit DR. If supported, the Logical Domains Manager
attempts to remove any cryptographic units from the domain. After the migration completes,
the cryptographic units are re-added to the migrated domain.
Note If the constraints for cryptographic units cannot be met on the target machine, the
migration operation will not be blocked. In such a case, the migrated domain might have fewer
cryptographic units than it had prior to the migration operation.
When a domain to be migrated is running in OpenBoot, you will see the following message:
When a domain to be migrated is running in the kernel debugger (kmdb), you will see the
following message:
Migrating Bound or Inactive Domains
The migration of a bound domain requires that the target machine is able to satisfy the CPU,
memory, and I/O constraints of the domain to be migrated. If these constraints cannot be met,
the migration will fail.
Caution When you migrate a bound domain, the virtual disk back-end options and mpgroup
values are not checked because no runtime state information is exchanged with the target
machine. This check does occur when you migrate an active domain.
The migration of an inactive domain does not have such requirements. However, the target
machine must satisfy the migrated domain's constraints when a bind is later attempted or the
domain binding will fail.
Note After a domain migration completes, save a new configuration to the SP of both the
source and target systems. As a result, the state of the migrated domain is correct if either the
source or target system undergoes a power cycle.
For information about the direct I/O (DIO) feature, see Creating an I/O Domain by Assigning
PCIe Endpoint Devices on page 143.
For information about the SR-IOV feature, see Chapter 7, Creating an I/O Domain by Using
PCIe SR-IOV Virtual Functions.
The sixth column in the FLAGS field shows one of the following values:
s The domain that is the source of the migration.
t The migrated domain that is the target of the migration.
e An error has occurred that requires user intervention.
The following command shows that the ldg-src domain is the source of the migration:
The following command shows that the ldg-tgt domain is the target of the migration:
The long form of the status output shows additional information about the migration. On the
source machine, the status output shows the completion percentage of the operation as well as
the names of the target machine and the migrated domain. Similarly, on the target machine, the
status output shows the completion percentage of the operation as well as the names of the
source machine and the domain being migrated.
The following command shows the progress of a migration operation for the ldg-src domain:
A migration operation can also be canceled externally by using the ldm cancel-operation
command. This command terminates the migration in progress and the domain being
migrated resumes as the active domain. The ldm cancel-operation command must be
initiated from the source machine. On a given machine, any migration-related command
affects the migration operation that was started from that machine. A target machine cannot
control a migration operation.
You must determine whether the migration completed successfully by taking the following
steps:
1. Determine whether the migrated domain has successfully resumed operations. The
migrated domain will be in one of two states:
If the migration completed successfully, the migrated domain is in the normal state.
If the migration failed, the target machine cleans up and destroys the migrated domain.
2. If the migrated domain successfully resumed operations, you can safely destroy the domain
on the source machine that is in the error state. However, if the migrated domain is not
present, the domain on the source machine is still the master version of the domain and
must be recovered. To recover this domain, run the ldm cancel-operation command on
the source machine. This command clears the error state and restores the domain to its
original condition.
Migration Examples
EXAMPLE 131 Using SSL Certificates to Perform a Guest Domain Migration
This example shows how to migrate the ldg1 domain to a machine called system2. Before the
migration operation starts, you must have configured the SSL certificates on both the source
and target machines. See How to Configure SSL Certificates for Migration on page 286.
To perform this migration without being prompted for the target machine password, use the
following command:
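# ldm migrate-domain -p pfile ldg1 system2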
The -p option takes a file name as an argument. The specified file contains the superuser
password for the target machine. In this example, pfile contains the password for the target
machine, system2.
EXAMPLE 135 Obtaining the Migration Status for the Domain on the Target Machine
This example shows how to obtain the status on a migrated domain while a migration is in
progress. In this example, the source machine is t5-sys-1.
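primary# ldm list -o status domain-name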
STATUS
OPERATION PROGRESS SOURCE
migration 55% t5-sys-1
EXAMPLE 136 Obtaining the Parseable Migration Status for the Domain on the Source Machine
This example shows how to obtain the parseable status on the domain being migrated while a
migration is in progress. In this example, the target machine is system2.
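primary# ldm list -o status -p domain-name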
Chapter 14 Managing Resources
Resource Reconfiguration
A system that runs the Oracle VM Server for SPARC software can configure resources,
such as virtual CPUs, virtual I/O devices, cryptographic units, and memory. Some resources
can be configured dynamically on a running domain, while others must be configured on a
stopped domain. If a resource cannot be dynamically configured on the control domain, you
must first initiate a delayed reconfiguration. The delayed reconfiguration postpones the
configuration activities until after the control domain has been rebooted.
Dynamic Reconfiguration
Dynamic reconfiguration (DR) enables resources to be added or removed while the operating
system (OS) is running. The capability to perform DR of a particular resource type is dependent
on having support in the OS running in the logical domain.
To use the DR capability, the Logical Domains DR daemon, drd, must be running in the
domain that you want to change. See the drd(1M) man page.
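For example, you can use the svcs command from within the domain to verify that the daemon
is online:
# svcs drd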
Delayed Reconfiguration
In contrast to DR operations that take place immediately, delayed reconfiguration operations
take effect in the following circumstances:
After the next reboot of the OS
After a stop and start of a logical domain if no OS is running
In general, delayed reconfiguration operations are restricted to the control domain. However,
you can run a limited number of commands while a delayed reconfiguration on the root
domain is in progress to support operations that cannot be completed dynamically. These
subcommands are add-io, set-io, remove-io, create-vf, and destroy-vf. You can also run
the ldm start-reconf command on the root domain. For all other domains, you must stop the
domain to modify the configuration unless the resource can be dynamically reconfigured.
While a delayed reconfiguration is in progress, other reconfiguration requests for that domain
are deferred until it is rebooted or stopped and started.
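For example, a minimal delayed reconfiguration sequence on the control domain might look as
follows; the memory size shown is illustrative:
primary# ldm start-reconf primary
primary# ldm set-memory 16G primary
primary# shutdown -y -g0 -i6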
The ldm cancel-reconf command cancels delayed reconfiguration operations on the domain.
For more information about how to use the delayed reconfiguration feature, see the ldm(1M)
man page.
Note You cannot use the ldm cancel-reconf command if any other ldm remove-* commands
have already performed a delayed reconfiguration operation on virtual I/O devices. The ldm
cancel-reconf command fails in this circumstance.
You can use delayed reconfiguration to decrease resources on the control domain. To remove a
large number of CPUs from the control domain, see Removing a Large Number of CPUs From
a Domain Might Fail in Oracle VM Server for SPARC 3.3 Release Notes. To remove large
amounts of memory from the control domain, see Decrease the Control Domain's Memory
on page 323.
Note When the primary domain is in a delayed reconfiguration state, resources that are
managed by Oracle VM Server for SPARC are power-managed only after the primary domain
reboots. Resources that are managed directly by the OS, such as CPUs that are managed by the
Solaris Power Aware Dispatcher, are not affected by this state.
Resource Allocation
The resource allocation mechanism uses resource allocation constraints to assign resources to a
domain at bind time.
A resource allocation constraint is a hard requirement that the system must meet when you
assign a resource to a domain. If the constraint cannot be met, both the resource allocation and
the binding of the domain fail.
Caution Do not create a circular dependency between two domains in which each domain
provides services to the other. Such a configuration creates a single point of failure condition
where an outage in one domain causes the other domain to become unavailable. Circular
dependency configurations also prevent you from unbinding the domains after they have been
bound initially.
The Logical Domains Manager does not prevent the creation of circular domain dependencies.
If the domains cannot be unbound due to a circular dependency, remove the devices that cause
the dependency and then attempt to unbind the domains.
CPU Allocation
When you run threads from the same core in separate domains, you might experience
unpredictable and poor performance. The Oracle VM Server for SPARC software uses the CPU
affinity feature to optimize CPU allocation during the logical domain binding process, which
occurs before you can start the domain. This feature attempts to keep threads from the same
core allocated to the same logical domain because this type of allocation improves cache sharing
between the threads within the same core.
CPU affinity attempts to avoid the sharing of cores among domains unless there is no other
recourse. When a domain has been allocated a partial core and requests more strands, the
strands from the partial core are bound first, and then another free core is located to fulfill the
request, if necessary.
The CPU allocation mechanism uses the following constraints for CPU resources:
Whole-core constraint. This constraint specifies that CPU cores are allocated to a domain
rather than virtual CPUs. As long as the domain does not have the max-cores constraint
enabled, the whole-core constraint can be added or removed by using the ldm set-core or
ldm set-vcpu command, respectively. The domain can be inactive, bound, or active.
However, enough cores must be available to satisfy the request to apply the constraint. As a
worst-case example, if a domain that shares cores with another domain requests the
whole-core constraint, cores from the free list would need to be available to satisfy the
request. As a best-case example, all the virtual CPUs in the core are already on core
boundaries, so the constraint is applied without changes to CPU resources.
Maximum number of cores (max-cores) constraint. This constraint specifies the
maximum number of cores that can be assigned to a bound or active domain.
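For example, the constraint output for a domain that has the whole-core constraint but no core
cap might include the following: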
CONSTRAINT
cpu=whole-core
max-cores=unlimited
You can only enable, modify, or disable the max-cores constraint on an inactive domain, not on
a domain that is bound or active. Before you update the max-cores constraint on the control
domain, you must first initiate a delayed reconfiguration.
Note The cryptographic units that are associated with those cores are unaffected by core
additions. So, the system does not automatically add the associated cryptographic units to the
domain. However, a cryptographic unit is automatically removed only when the last virtual
CPU of the core is being removed. This action prevents a cryptographic unit from being
orphaned.
The following example removes the max-cores constraint from the unbound and inactive ldg1
domain, but leaves the whole-core constraint as-is.
Alternately, to remove both the max-cores constraint and the whole-core constraint from the
ldg1 domain, assign virtual CPUs instead of cores, as follows:
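primary# ldm set-vcpu number-of-vCPUs ldg1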
However, if a bound or active domain is not in delayed reconfiguration mode, its number of
cores cannot exceed the maximum number of cores. This maximum is set with the maximum
core constraint, which is automatically enabled when the whole-core constraint is enabled. Any
CPU DR operation that does not satisfy the maximum core constraint fails.
The expected interactions between the whole-core constraint and DRM are as follows:
While a DRM policy exists for a domain, you cannot switch the domain from being
whole-core constrained to whole-core unconstrained or from being whole-core
unconstrained to whole-core constrained. For example:
When a domain is whole-core constrained, you cannot use the ldm set-vcpu command
to specify a number of virtual CPUs and to remove the whole-core constraint.
When a domain is not whole-core constrained, you cannot use the ldm set-core
command to specify a number of whole cores and to add the whole-core constraint.
When a domain is whole-core constrained and you specify the attack, decay, vcpu-min, or
vcpu-max value, the value must be a whole-core multiple.
Configuring the System With Hard Partitions
For information about Oracle's hard partitioning requirements for software licenses, see
Partitioning: Server/Hardware Partitioning (http://www.oracle.com/us/corporate/
pricing/partitioning-070609.pdf).
CPU cores and CPU threads. The Oracle VM Server for SPARC software runs on SPARC
T-Series and SPARC M-Series platforms, and Fujitsu M10 platforms. The processors that
are used in these systems have multiple CPU cores, each of which contains multiple CPU
threads.
Hard partitioning and CPU whole cores. Beginning with the Oracle VM Server for SPARC
2.0 release, hard partitioning is enforced by using CPU whole-core configurations. A CPU
whole-core configuration has domains that are allocated CPU whole cores instead of
individual CPU threads. By default, a domain is configured to use CPU threads.
When binding a domain in a whole-core configuration, the system provisions the specified
number of CPU cores and all its CPU threads to the domain. Using a CPU whole-core
configuration limits the number of CPU cores that can be dynamically assigned to a bound
or active domain.
Oracle hard partition licensing. To conform to the Oracle hard partition licensing
requirement, you must use at least the Oracle VM Server for SPARC 2.0 release. You must
also use CPU whole cores as follows:
A domain that runs applications that use Oracle hard partition licensing must be
configured with CPU whole cores.
A domain that does not run applications that use Oracle hard partition licensing is not
required to be configured with CPU whole cores. For example, if you do not run any
Oracle applications in the control domain, that domain is not required to be configured
with CPU whole cores.
Verify that the whole-core constraint appears in the output and that the max-cores property
specifies the maximum number of CPU cores that are configured for the domain. See the
ldm(1M) man page.
The following command shows that the ldg1 domain is configured with CPU whole cores
and a maximum of five cores:
primary# ldm list -o resmgmt ldg1
NAME
ldg1
CONSTRAINT
whole-core
max-cores=5
When a domain is bound, CPU cores are assigned to the domain. To list the CPU cores that
are assigned to a domain:
primary# ldm list -o core domain-name
The following command shows the cores that are assigned to the ldg1 domain:
primary# ldm list -o core ldg1
NAME
ldg1
CORE
CID PCPUSET
1 (8, 9, 10, 11, 12, 13, 14, 15)
2 (16, 17, 18, 19, 20, 21, 22, 23)
Use the following command to configure a domain to use CPU whole cores:
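primary# ldm set-core number-of-CPU-cores domain-name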
This command also specifies the maximum number of CPU cores for the domain, which is the
CPU cap. See the ldm(1M) man page.
The CPU cap and the allocation of CPU cores are handled by separate commands. By using these
commands, you can independently allocate CPU cores, set a cap, or both. The allocation unit
can be set to cores even when no CPU cap is in place. However, running the system in this mode
does not satisfy the requirements for hard partitioning on your Oracle VM Server for SPARC
system.
Allocate the specified number of CPU cores to a domain by using the add-core, set-core,
or rm-core subcommand.
Set the CPU cap by using the add-domain or set-domain subcommand to specify the
max-cores property value.
You must set the cap if you want to configure hard partitioning on your Oracle VM Server
for SPARC system.
Note You need to stop and unbind the domain only if you set the optional max-cores
constraint.
Example 143 Creating a New Domain With Two CPU Whole Cores
This example creates a domain, ldg1, with two CPU whole cores. The first command creates the
ldg1 domain. The second command configures the ldg1 domain with two CPU whole cores.
At this point, you can perform further configuration on the domain, subject to the restrictions
described in Step 3 in How to Create a New Domain With CPU Whole Cores on page 313.
The third and fourth commands show how to bind and start the ldg1 domain, at which time
you can use the ldg1 domain.
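The commands might look as follows:
primary# ldm add-domain ldg1
primary# ldm set-core 2 ldg1
primary# ldm bind ldg1
primary# ldm start ldg1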
Example 144 Configuring an Existing Domain With Four CPU Whole Cores
This example updates the configuration of an existing domain, ldg1, by configuring it with four
CPU whole cores.
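primary# ldm set-core 4 ldg1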
2 Set the number of CPU whole cores for the primary domain.
primary# ldm set-core number-of-CPU-cores primary
Example 145 Configuring the Control Domain With Two CPU Whole Cores
This example configures CPU whole cores on the primary domain. The first command initiates
delayed reconfiguration mode on the primary domain. The second command configures the
primary domain with two CPU whole cores. The third command sets the max-cores property
to 2, and the fourth command reboots the primary domain.
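The sequence might look as follows:
primary# ldm start-reconf primary
primary# ldm set-core 2 primary
primary# ldm set-domain max-cores=2 primary
primary# shutdown -y -g0 -i6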
The optional Steps 1 and 4 are required only if you want to modify the max-cores property.
Note The max-cores property cannot be altered unless the domain is stopped and unbound.
So, to increase the maximum number of cores from the value specified at the time the
whole-core constraint was set, you must first stop and unbind the domain.
Use the following commands to dynamically add to or remove CPU whole cores from a bound
or active domain and to dynamically set the number of CPU whole cores for a bound or active
domain:
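primary# ldm add-core number-of-CPU-cores domain-name
primary# ldm remove-core number-of-CPU-cores domain-name
primary# ldm set-core number-of-CPU-cores domain-name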
Note If the domain is not active, these commands also adjust the maximum number of CPU
cores for the domain. If the domain is bound or active, these commands do not affect the
maximum number of CPU cores for the domain.
EXAMPLE 146 Dynamically Adding Two CPU Whole Cores to a Domain (Continued)
CORE
CID PCPUSET
1 (8, 9, 10, 11, 12, 13, 14, 15)
2 (16, 17, 18, 19, 20, 21, 22, 23)
3 (24, 25, 26, 27, 28, 29, 30, 31)
4 (32, 33, 34, 35, 36, 37, 38, 39)
Power Management
You can set a separate power management (PM) policy for each hard-partitioned domain.
Resources that you explicitly assign are called named resources. Resources that are automatically
assigned are called anonymous resources.
Caution Do not assign named resources unless you are an expert administrator.
You can explicitly assign physical resources to the control domain and to guest domains.
Because the control domain remains active, you can optionally initiate a delayed
reconfiguration before you make physical resource assignments. Otherwise, a delayed
reconfiguration is triggered automatically when you make the assignments. See
Managing Physical Resources on the Control Domain on page 321. For information about
physical resource restrictions, see Restrictions for Managing Physical Resources on Domains
on page 321.
You can explicitly assign the following physical resources to the control domain and to guest
domains:
Physical CPUs. Assign the physical core IDs to the domain by setting the cid property.
The cid property should be used only by an administrator who is knowledgeable about the
topology of the system to be configured. This advanced configuration feature enforces
specific allocation rules and might affect the overall performance of the system.
You can set this property by running any of the following commands:
ldm add-core cid=core-ID[,core-ID[,...]] domain-name
ldm set-core cid=core-ID[,core-ID[,...]] domain-name
If you specify a core ID as the value of the cid property, core-ID is explicitly assigned to or
removed from the domain.
Note You cannot use the ldm add-core command to add named core resources to a
domain that already uses anonymous core resources.
The mblock property should be used only by an administrator who is knowledgeable about
the topology of the system to be configured. This advanced configuration feature enforces
specific allocation rules and might affect the overall performance of the system.
You can set this property by running any of the following commands:
ldm add-mem mblock=PA-start:size[,PA-start:size[,...]] domain-name
ldm set-mem mblock=PA-start:size[,PA-start:size[,...]] domain-name
ldm rm-mem mblock=PA-start:size[,PA-start:size[,...]] domain-name
To assign a memory block to or remove it from a domain, set the mblock property. A valid
value includes a physical memory starting address (PA-start) and a memory block size (size),
separated by a colon (:).
Note You cannot use dynamic reconfiguration (DR) to move memory or core resources
between running domains when you set the mblock or cid property. To move resources
between domains, ensure that the domains are in a bound or inactive state. For information
about managing physical resources on the control domain, see Managing Physical Resources
on the Control Domain on page 321.
Note If you migrate a domain, any named resources that you assigned by using the cid and
mblock properties are dropped. Instead, the domain uses anonymous resources on the target
system.
You can use the ldm list-constraints command to view the resource constraints for
domains. The physical-bindings constraint specifies which resource types have been
physically assigned to a domain. When a domain is created, the physical-bindings constraint
is unset until a physical resource is assigned to that domain.
Note When the control domain is in delayed reconfiguration mode, you can perform
unlimited memory assignments by using the ldm add-mem and ldm rm-mem commands on the
control domain. However, you can perform only one core assignment to the control domain by
using the ldm set-core command.
A domain that has partial cores assigned to it can use the whole-core semantics if the
remaining CPUs of those cores are free and available.
Memory:
Constraints: 1965 M
raddr paddr size
0x1000000 0x291000000 1968M
Adding Memory
If a domain is active, you can use the ldm add-memory command to dynamically add memory to
the domain. The ldm set-memory command can also dynamically add memory if the specified
memory size is greater than the current memory size of the domain.
Removing Memory
If a domain is active, you can use the ldm remove-memory command to dynamically remove
memory from the domain. The ldm set-memory command can also dynamically remove
memory if the specified memory size is smaller than the current memory size of the domain.
Memory removal can be a long-running operation. You can track the progress of an ldm
remove-memory command by running the ldm list -l command for the specified domain.
You can cancel a removal request that is in progress by interrupting the ldm remove-memory
command (by pressing Control-C) or by issuing the ldm cancel-operation memdr command.
If you cancel a memory removal request, only the outstanding portion of the removal request is
affected; namely, the amount of memory still to be removed from the domain.
Note Memory is cleared after it is removed from a domain and before it is added to another
domain.
Using memory DR might not be appropriate for removing large amounts of memory from an
active domain because memory DR operations might be long running. In particular, during the
initial configuration of the system, you should use delayed reconfiguration to decrease the
memory in the control domain.
4. Reboot the control domain to make the reconfiguration changes take effect.
Memory Alignment
Memory reconfiguration requests have different alignment requirements that depend on the
state of the domain to which the request is applied.
For the ldm add-memory, ldm set-memory, and ldm remove-memory commands, the
--auto-adj option rounds up the size of the resulting memory to be 256-Mbyte-aligned.
Therefore, the resulting memory might be more than you specified.
Also, any unaligned memory in the free memory pool cannot be added to an active domain by
using memory DR.
After all the aligned memory has been allocated, you can use the ldm add-memory command to
add the remaining unaligned memory to a bound or inactive domain. You can also use this
command to add the remaining unaligned memory to the control domain by means of a
delayed reconfiguration operation.
The following example shows how to add the two remaining 128-Mbyte memory blocks to the
primary and ldom1 domains. The ldom1 domain is in the bound state.
The following command initiates a delayed reconfiguration operation on the control domain.
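primary# ldm start-reconf primary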
The following command adds one of the 128-Mbyte memory blocks to the control domain.
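primary# ldm add-memory 128M primary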
The following command adds the other 128-Mbyte memory block to the ldom1 domain.
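primary# ldm add-memory 128M ldom1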
Memory DR Examples
The following examples show how to perform memory DR operations. For information about
the related CLI commands, see the ldm(1M) man page.
The ldm list output shows the memory for each domain in the Memory field.
The following ldm add-mem command exits with an error because you must specify memory in
multiples of 256 Mbytes. The next ldm add-mem command uses the --auto-adj option so that
even though you specify 200M as the amount of memory to add, the amount is rounded up to
256 Mbytes.
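primary# ldm add-mem 200M domain-name
primary# ldm add-mem --auto-adj 200M domain-name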
The ldm rm-mem command exits with an error because you must specify memory in multiples of
256 Mbytes. When you add the --auto-adj option to the same command, the memory removal
succeeds because the amount of memory is rounded down to the next 256-Mbyte boundary.
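primary# ldm rm-mem size domain-name
primary# ldm rm-mem --auto-adj size domain-name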
The ldm list output shows the memory for each domain in the Memory field. The first ldm
add-mem command adds 100 Mbytes of memory to the ldom2 domain. The next ldm add-mem
command specifies the --auto-adj option, which causes an additional 112 Mbytes of memory
to be dynamically added to ldom2.
The ldm rm-mem command dynamically removes 100 Mbytes from the ldom2 domain. If you
specify the --auto-adj option to the same command to remove 300 Mbytes of memory, the
amount of memory is rounded down to the next 256-Mbyte boundary.
Using Resource Groups
The membership of resource groups is statically defined by the hardware configuration. You
can use the ldm remove-core and ldm remove-memory commands to operate on resources
from a particular resource group.
The remove-core subcommand specifies the number of CPU cores to remove from a
domain. When you specify a resource group by using the -g option, the cores that are
selected for removal all come from that resource group.
The remove-memory subcommand removes the specified amount of memory from a logical
domain. When you specify a resource group by using the -g option, the memory that is
selected for removal all comes from that resource group.
For information about these commands, see the ldm(1M) man page.
For more information about PM features and ILOM features, see the following:
Chapter 20, Using Power Management
Monitoring Power Consumption in the Oracle Integrated Lights Out Manager (ILOM) 3.0
CLI Procedures Guide
Oracle Integrated Lights Out Manager (ILOM) 3.0 Feature Updates and Release Notes
Using Dynamic Resource Management
Caution The following restrictions affect CPU dynamic resource management (DRM):
On UltraSPARC T2 and UltraSPARC T2 Plus platforms, DRM cannot be enabled when the
PM elastic policy is set.
On UltraSPARC T2 and UltraSPARC T2 Plus platforms, any change from the performance
policy to the elastic policy is delayed while DRM is enabled.
Ensure that you disable CPU DRM prior to performing a domain migration operation, or
you will see an error message.
When the PM elastic policy is set, you can use DRM only when the firmware supports
normalized utilization (8.2.0).
A resource management policy specifies the conditions under which virtual CPUs can be
automatically added to and removed from a logical domain. A policy is managed by using the
ldm add-policy, ldm set-policy, and ldm remove-policy commands:
For information about these commands and about creating resource management policies, see
the ldm(1M) man page.
A policy is in effect during the times specified by the tod-begin and tod-end properties. The
time specified by tod-begin must be earlier than the time specified by tod-end in a 24-hour
period. By default, values for the tod-begin and tod-end properties are 00:00:00 and 23:59:59,
respectively. When the default values are used, the policy is always in effect.
The policy uses the value of the priority property to specify a priority for a dynamic resource
management (DRM) policy. Priority values are used to determine the relationship between
DRM policies on a single domain and between DRM-enabled domains on a single system.
Lower numerical values represent higher (better) priorities. Valid values are between 1 and
9999. The default value is 99.
The behavior of the priority property depends on the availability of a pool of free CPU
resources, as follows:
Free CPU resources are available in the pool. In this case, the priority property
determines which DRM policy will be in effect when more than one overlapping policy is
defined for a single domain.
No free CPU resources are available in the pool. In this case, the priority property
specifies whether a resource can be dynamically moved from a lower-priority domain to a
higher-priority domain on the same system. The priority of a domain is the priority
specified by the DRM policy that is in effect for that domain.
For example, a higher-priority domain can acquire CPU resources from another domain
that has a DRM policy with a lower priority. This resource-acquisition capability pertains
only to domains that have DRM policies enabled. Domains that have equal priority values
are unaffected by this capability. So, if the default priority is used for all policies, domains
cannot obtain resources from lower-priority domains. To take advantage of this capability,
adjust the priority property values so that they have unequal values.
For example, the ldg1 and ldg2 domains both have DRM policies in effect. The priority
property for the ldg1 domain is 1, which is more favorable than the priority property value of
the ldg2 domain (2). The ldg1 domain can dynamically remove a CPU resource from the ldg2
domain and assign it to itself in the following circumstances:
The ldg1 domain requires another CPU resource.
The pool of free CPU resources has been exhausted.
The policy uses the util-upper and util-lower property values to specify the upper and lower
thresholds for CPU utilization. If the utilization exceeds the value of util-upper, virtual CPUs
are added to the domain until the number is between the vcpu-min and vcpu-max values. If the
utilization drops below the util-lower value, virtual CPUs are removed from the domain until
the number is between the vcpu-min and vcpu-max values. If vcpu-min is reached, no more
virtual CPUs can be dynamically removed. If vcpu-max is reached, no more virtual CPUs
can be dynamically added.
Uses the default values for those properties that are not specified, such as enable and
sample-rate. See the ldm(1M) man page.
The following ldm add-policy command creates the med-usage policy to be used during the
low utilization period on the ldom1 domain.
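The command might look as follows; the property values shown are illustrative:
primary# ldm add-policy tod-begin=00:00 tod-end=07:59 util-lower=10 util-upper=50 \
vcpu-min=2 vcpu-max=8 attack=1 decay=1 priority=2 name=med-usage ldom1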
Listing Domain Resources
Machine-Readable Output
If you are creating scripts that use ldm list command output, always use the -p option to
produce the machine-readable form of the output.
To view syntax usage for all ldm subcommands, use the following command:
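primary# ldm --help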
For more information about the ldm subcommands, see the ldm(1M) man page.
Flag Definitions
The following flags can be shown in the output for a domain (ldm list). If you use the long,
parseable options (-l -p) for the command, the flags are spelled out, for example,
flags=normal,control,vio-service. Otherwise, you see the letter abbreviation, for example,
-n-cv-. The flag values are position dependent. The following values can appear in each of
the six columns from left to right.
A virtual CPU is considered to be executing on behalf of the guest operating system except
when it has been yielded to the hypervisor. If the guest
operating system does not yield virtual CPUs to the hypervisor, the utilization of CPUs in the
guest operating system will always show as 100%.
The utilization statistic reported for a logical domain is the average of the virtual CPU
utilizations for the virtual CPUs in the domain. The normalized utilization statistic (NORM) is the
percentage of time the virtual CPU spends executing on behalf of the guest OS. This value takes
into account such operations as cycle skip. Normalized utilization is available only when
your system runs at least version 8.2.0 of the system firmware.
When PM does not perform cycle skip operations, 100% utilization equals 100% normalized
utilization. When PM adjusts the cycle skip to four eighths, 100% utilization equals 50%
normalized utilization because the CPU effectively has only half the possible number of cycles
available. So, a fully utilized CPU has a 50% normalized utilization. Use the ldm list or ldm
list -l command to show normalized utilization for both virtual CPUs and the guest OS.
disk Output contains virtual disk (vdisk) and virtual disk server (vds)
domain-name Output contains variables (var), host ID (hostid), domain state, flags,
UUID, and software state
memory Output contains memory
network Output contains media access control (mac) address, virtual network switch
(vsw), and virtual network (vnet) device
physio Physical input/output contains peripheral component interconnect (pci) and
network interface unit (niu)
resmgmt Output contains dynamic resource management (DRM) policy information,
indicates which policy is currently running, and lists constraints related to whole-core
configuration
serial Output contains virtual logical domain channel (vldc) service, virtual logical
domain channel client (vldcc), virtual data plane channel client (vdpcc), and virtual
data plane channel service (vdpcs)
stats Output contains statistics that are related to resource management policies
status Output contains status about a domain migration in progress
The following examples show various subsets of output that you can specify.
To list CPU information for the control domain:
primary# ldm list -o cpu primary
To list domain information for a guest domain:
primary# ldm list -o domain ldm2
To list memory and network information for a guest domain:
primary# ldm list -o network,memory ldm1
To list DRM policy information for a guest domain:
primary# ldm list -o resmgmt,stats ldm1
To show a variable and its value for a domain:
primary# ldm list-variable variable-name domain-name
For example, the following command shows the value for the boot-device variable on the
ldg1 domain:
primary# ldm list-variable boot-device ldg1
boot-device=/virtual-devices@100/channel-devices@200/disk@0:a
To list the resources that are bound to a domain:
primary# ldm list-bindings domain-name
To list logical domain configurations that have been stored on the SP:
The ldm list-config command lists the logical domain configurations that are stored on
the service processor. When used with the -r option, this command lists those
configurations for which autosave files exist on the control domain.
For more information about configurations, see Managing Domain Configurations on
page 343. For more examples, see the ldm(1M) man page.
primary# ldm list-config
factory-default
3guests
foo [next poweron]
primary
reconfig-primary
The labels to the right of the configuration name mean the following:
[current] Last booted configuration, only as long as it matches the currently running
configuration; that is, until you initiate a reconfiguration. After the reconfiguration, the
annotation changes to [next poweron].
[next poweron] Configuration to be used at the next power cycle.
[degraded] Configuration is a degraded version of the previously booted
configuration.
To list all server resources, bound and unbound:
primary# ldm list-devices -a
To list the amount of memory available to be allocated:
primary# ldm list-devices mem
MEMORY
PA SIZE
0x14e000000 2848M
To determine which portions of memory are unavailable for logical domains:
primary# ldm list-devices -a mem
MEMORY
PA SIZE BOUND
0x0 57M _sys_
0x3900000 32M _sys_
0x5900000 94M _sys_
0xb700000 393M _sys_
0x24000000 192M _sys_
0x30000000 255G primary
0x3ff0000000 64M _sys_
0x3ff4000000 64M _sys_
0x3ff8000000 128M _sys_
0x80000000000 2G ldg1
0x80080000000 2G ldg2
0x80100000000 2G ldg3
0x80180000000 2G ldg4
0x80200000000 103G
0x81bc0000000 145G primary
To list the services that are available:
primary# ldm list-services
Listing Constraints
To the Logical Domains Manager, constraints are one or more resources that you want to have
assigned to a particular domain. Depending on the available resources, you either receive all
the resources that you ask to be added to a domain or none of them. The list-constraints
subcommand lists the resources that you requested to be assigned to the domain.
To list constraints for one domain:
# ldm list-constraints domain-name
To list constraints in XML format for a particular domain:
# ldm list-constraints -x domain-name
To list constraints for all domains in a parseable format:
# ldm list-constraints -p
The following command shows information about all the resource groups:
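primary# ldm list-rsrc-group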
As with the other ldm list-* commands, you can specify options to show parseable output,
detailed output, and information about particular resource groups and domains. For more
information, see the ldm(1M) man page.
The following example uses the -l option to show detailed information about the /SYS/CMU5
resource group.
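primary# ldm list-rsrc-group -l /SYS/CMU5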
IO
DEVICE PSEUDONYM BOUND
pci@900 pci_24 primary
pci@940 pci_25 primary
pci@980 pci_26 primary
pci@9c0 pci_27 primary
Using Perf-Counter Properties
Use the ldm add-domain and ldm set-domain commands to specify a value for the
perf-counters property. The new perf-counters property value will be recognized by the
guest domain on the next reboot. If no perf-counters value is specified, the value is htstrand.
See the ldm(1M) man page.
You can specify the following values for the perf-counters property:
global Grants the domain access to the global performance counters that its allocated
resources can access. Only one domain at a time can have access to the global
performance counters. You can specify this value alone or with either the strand
or htstrand value.
strand Grants the domain access to the strand performance counters that exist on the
CPUs that are allocated to the domain. You cannot specify this value and the
htstrand value together.
htstrand Behaves the same as the strand value and enables instrumentation of
hyperprivilege mode events on the CPUs that are allocated to the domain. You
cannot specify this value and the strand value together.
If the hypervisor does not have the performance access capability, attempting to set the
perf-counters property fails.
The ldm list -o domain and ldm list -e commands show the value of the perf-counters
property. If the performance access capability is not supported, the perf-counters value is not
shown in the output.
EXAMPLE 1411 Creating a Domain and Specifying Its Performance Register Access
Create the new ldg0 domain with access to the global register set:
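primary# ldm add-domain perf-counters=global ldg0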
EXAMPLE 1413 Specifying that a Domain Does Not Have Access to Any Register Sets
Specify that the ldg0 domain does not have access to any of the register sets:
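primary# ldm set-domain perf-counters= ldg0
The following ldm list -o domain output shows the perf-counters property value for a
hypothetical ldg0 domain that has access to the global and htstrand register sets: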
UUID
062200af-2de2-e05f-b271-f6200fd3eee3
HOSTID
0x84fb315d
CONTROL
failure-policy=ignore
extended-mapin-space=on
cpu-arch=native
rc-add-policy=
shutdown-group=15
perf-counters=global,htstrand
DEPENDENCY
master=
VARIABLES
auto-boot?=false
boot-device=/virtual-devices@100/channel-devices@200/disk@0:a
/virtual-devices@100/channel-devices@200/disk@0
network-boot-arguments=dhcp,hostname=solaris,
file=http://10.129.241.238:5555/cgibin/wanboot-cgi
pm_boot_policy=disabled=0;ttfc=2000;ttmr=0;
The following ldm list -p -o domain command shows the same information as in the
previous example but in the parseable form:
primary# ldm list -p -o domain ldg0
VERSION 1.12
DOMAIN|name=ldg0|state=active|flags=normal|util=|norm_util=
UUID|uuid=4e8749b9-281b-e2b1-d0e2-ef4dc2ce5ce6
HOSTID|hostid=0x84f97452
CONTROL|failure-policy=reset|extended-mapin-space=on|cpu-arch=native|rc-add-policy=|
shutdown-group=15|perf-counters=global,htstrand
DEPENDENCY|master=
VARIABLES
|auto-boot?=false
|boot-device=/virtual-devices@100/channel-devices@200/disk@0
|pm_boot_policy=disabled=0;ttfc=2500000;ttmr=0;
Caution Always save your stable configuration to the SP and save it as XML. Saving the
configuration in these ways enables you to recover your system configuration after a power
failure and to save it for later use. See Saving Domain Configurations on page 347.
A local copy of the SP configuration and the Logical Domains constraint database is saved on
the control domain whenever you save a configuration to the SP. This local copy is called a
bootset. The bootset is used to load the corresponding Logical Domains constraints database
when the system undergoes a power cycle.
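For example, the following command saves the current configuration to the SP under a
hypothetical name:
primary# ldm add-spconfig stable-config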
Available Configuration Recovery Methods
On the SPARC T5 server, SPARC T7 series server, SPARC M5 server, SPARC M6 server, SPARC
M7 series server, and Fujitsu M10 server, the bootsets on the control domain are the master
copies of the configurations. On startup, the Logical Domains Manager automatically
synchronizes all configurations with the SP, which ensures that the configurations on the SP are
always identical to the bootsets that are stored on the control domain.
Note Because the bootsets contain critical system data, ensure that the control domain's file
system uses technology such as disk mirroring or RAID to reduce the impact of disk failures.
A physical domain is the scope of resources that are managed by a single Oracle VM Server for
SPARC instance. A physical domain might be a complete physical system as is the case of
supported SPARC T-Series platforms. Or, it might be either the entire system or a subset of the
system as is the case of supported SPARC M-Series platforms.
This autosave operation enables you to recover a configuration when the configurations that are
saved on the SP are lost. This operation also enables you to recover a configuration when the
current configuration was not explicitly saved to the SP after a system power cycle. In these
circumstances, the Logical Domains Manager can restore that configuration on restart if it is
newer than the configuration marked for the next boot.
Note Power management, FMA, and ASR events do not cause an update to the autosave files.
You can automatically or manually restore autosave files to new or existing configurations. By
default, when an autosave configuration is newer than the corresponding running
configuration, a message is written to the Logical Domains log. In this case, you must use the
ldm add-spconfig -r command to manually update an existing configuration or create a new
configuration based on the autosave data. Note that you must perform a power cycle after you
use this command to complete the manual recovery.
Note When a delayed reconfiguration is pending, the configuration changes are immediately
autosaved. As a result, if you run the ldm list-config -r command, the autosave
configuration is shown as being newer than the current configuration.
For information about how to use the ldm *-spconfig commands to manage configurations
and to manually recover autosave files, see the ldm(1M) man page.
For information about how to select a configuration to boot, see Using Oracle VM Server for
SPARC With the Service Processor on page 366. You can also use the ldm set-spconfig
command, which is described on the ldm(1M) man page.
Autorecovery Policy
The autorecovery policy specifies how to handle the recovery of a configuration when one
configuration that is automatically saved on the control domain is newer than the
corresponding running configuration. The autorecovery policy is specified by setting the
autorecovery_policy property of the ldmd SMF service. This property can have the following
values:
autorecovery_policy=1 Logs warning messages when an autosave configuration is
newer than the corresponding running configuration. These messages are logged in the
ldmd SMF log file. You must manually perform any configuration recovery. This is the
default policy.
autorecovery_policy=2 Displays a notification message if an autosave configuration is
newer than the corresponding running configuration. This notification message is printed
in the output of any ldm command the first time an ldm command is issued after each restart
of the Logical Domains Manager. You must manually perform any configuration recovery.
autorecovery_policy=3 Automatically updates the configuration if an autosave
configuration is newer than the corresponding running configuration. This action
overwrites the SP configuration that will be used during the next power cycle. To make this
configuration available, you must perform another power cycle. This configuration is
updated with the newer configuration that is saved on the control domain. This action does
not affect the currently running configuration. It affects only the configuration that will be
used during the next power cycle. A message is also logged that states that a newer
configuration has been saved on the SP and that it will be booted at the next system power
cycle. These messages are logged in the ldmd SMF log file.
2 Become an administrator.
For Oracle Solaris 11.3, see Chapter 1, About Using Rights to Control Users and Processes, in
Securing Users and Processes in Oracle Solaris 11.3 .
For example, to set the policy to perform autorecovery, set the property value to 3:
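primary# svccfg -s ldmd setprop ldmd/autorecovery_policy=3
primary# svcadm refresh ldmd
primary# svcadm restart ldmd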
With the exception of the named physical resources, the following method does not preserve
actual bindings. However, the method does preserve the constraints used to create those
bindings. After saving and restoring the configuration, the domains have the same virtual
resources but are not necessarily bound to the same physical resources. Named physical
resources are bound as specified by the administrator.
To save the configuration for a single domain, create an XML file containing the domain's
constraints.
# ldm list-constraints -x domain-name >domain-name.xml
The following example shows how to create an XML file, ldg1.xml, which contains the ldg1
domain's constraints:
# ldm list-constraints -x ldg1 >ldg1.xml
To save the configurations for all the domains on a system, create an XML file containing the
constraints for all domains.
# ldm list-constraints -x >file.xml
The following example shows how to create an XML file, config.xml, which contains the
constraints for all the domains on a system:
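# ldm list-constraints -x >config.xml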
1 Create the domain by using the XML file that you created as input.
# ldm add-domain -i domain-name.xml
Caution The ldm init-system command might not correctly restore a configuration in which
physical I/O commands have been used. Such commands are ldm add-io, ldm set-io, ldm
remove-io, ldm create-vf, and ldm destroy-vf. For more information, see ldm init-system
Command Might Not Correctly Restore a Domain Configuration on Which Physical I/O
Changes Have Been Made in Oracle VM Server for SPARC 3.3 Release Notes.
Before You Begin You should have created an XML configuration file by running the ldm list-constraints -x
command. The file should describe one or more domain configurations.
3 Become an administrator.
For Oracle Solaris 11.3, see Chapter 1, About Using Rights to Control Users and Processes, in
Securing Users and Processes in Oracle Solaris 11.3 .
Running the ldm init-system command from a configuration other than the factory default
will likely result in a system that is not configured as specified by the XML file. One or more
changes might fail to be applied to the system, depending on the combination of changes in the
XML file and the initial configuration.
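For example, a command of the following form restores the domains that are described in a
hypothetical config.xml file and reboots the control domain:
# ldm init-system -r -i config.xml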
The primary domain is reconfigured as specified in the file. Any non-primary domains that
have configurations in the XML file are reconfigured but left inactive.
After the system reboots, the following commands bind and restart the ldg1 and ldg2
domains:
# ldm bind ldg1
# ldm start ldg1
# ldm bind ldg2
# ldm start ldg2
Addressing Service Processor Connection Problems
If an attempt to use the ldm command to manage domain configurations fails because of an
error communicating with the SP, perform the following recovery step that applies to your
platform:
SPARC T7 Series Server and SPARC M7 Series Server: Check the ILOM interconnect state
and re-enable the ilomconfig-interconnect service. See How to Verify the ILOM
Interconnect Configuration on page 54 and How to Re-Enable the ILOM Interconnect
Service on page 55.
SPARC T3, SPARC T4, SPARC T5, SPARC M5, and SPARC M6 Servers: Restart the ldmd
service.
primary# svcadm enable ldmd
This chapter contains information about how Oracle VM Server for SPARC handles hardware
errors.
The Fujitsu M10 platform also supports recovery mode. While the blacklisting of faulty
resources is not supported, the Fujitsu M10 platform auto-replacement feature provides similar
functionality.
Using FMA to Blacklist or Unconfigure Faulty Resources
The Logical Domains Manager supports blacklisting only for CPU and memory resources, not
for I/O resources.
If a faulty resource is not in use, the Logical Domains Manager removes it from the available
resource list, which you can see in the ldm list-devices output. At this time, this resource is
internally marked as blacklisted so that it cannot be re-assigned to a domain in the future.
If the faulty resource is in use, the Logical Domains Manager attempts to evacuate the resource.
To avoid a service interruption on the running domains, the Logical Domains Manager first
attempts to use CPU or memory dynamic reconfiguration to evacuate the faulty resource. The
Logical Domains Manager remaps a faulted core if a core is free to use as a target. If this live
evacuation succeeds, the faulty resource is internally marked as blacklisted and is not shown in
the ldm list-devices output so that it will not be assigned to a domain in the future.
If the live evacuation fails, the Logical Domains Manager internally marks the faulty resource as
evacuation pending. The resource is shown as normal in the ldm list-devices output
because the resource is still in use on the running domains until the affected guest domains are
rebooted or stopped.
When the affected guest domain is stopped or rebooted, the Logical Domains Manager
attempts to evacuate the faulty resources and internally mark them as blacklisted so that the
resource cannot be assigned in the future. Such a device is not shown in the ldm output. After
the pending evacuation completes, the Logical Domains Manager attempts to start the guest
domain. However, if the guest domain cannot be started because sufficient resources are not
available, the guest domain is marked as degraded and the following warning message is
logged so that you can intervene and perform a manual recovery.
primary# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 368 2079488M 0.1% 0.0% 16h 57m
gd0 bound -d---- 5000 8
warning: Could not restart domain gd0 after completing pending evacuation.
The domain has been marked degraded and should be examined to see
if manual recovery is possible.
When the system is power-cycled, FMA repeats the evacuation requests for resources that are
still faulty and the Logical Domains Manager handles those requests by evacuating the faulty
resources and internally marking them as blacklisted.
Prior to support for FMA blacklisting, a guest domain that panicked because of a faulty resource
might have entered a never-ending panic-reboot loop. By evacuating and blacklisting faulty
resources when the guest domain is rebooted, you can avoid this panic-reboot loop and prevent
future attempts to use those resources.
Recovering Domains After Detecting Faulty or Missing Resources
At power on, the system firmware reverts to the factory-default configuration if the last selected
power-on configuration cannot be booted in any of the following circumstances:
The I/O topology within each PCIe switch in the configuration does not match the I/O
topology of the last selected power-on configuration
CPU resources or memory resources of the last selected power-on configuration are no
longer present in the system
When recovery mode is enabled, the Logical Domains Manager recovers all active and bound
domains from the last selected power-on configuration. The resulting running configuration is
called the degraded configuration. The degraded configuration is saved to the SP and remains
the active configuration until either a new SP configuration is saved or the physical domain is
power-cycled.
Note The physical domain does not require a power cycle to activate the degraded
configuration after recovery as it is already the running configuration.
If the physical domain is power-cycled, the system firmware first attempts to boot the last
original power-on configuration. That way, if the missing or faulty hardware was replaced in
the meantime, the system can boot the original normal configuration. If the last selected
power-on configuration is not bootable, the firmware next attempts to boot the associated
degraded configuration if it exists. If the degraded configuration is not bootable or does not
exist, the factory-default configuration is booted and recovery mode is invoked.
When possible, the same number of CPUs and amount of memory are allocated to a domain as
specified by the original configuration. If that number of CPUs or amount of memory are not
available, these resources are reduced proportionally to consume the remaining available
resources.
Note When a system is in recovery mode, you can only perform ldm list-* commands. All
other ldm commands are disabled until the recovery operation completes.
The Logical Domains Manager only attempts to recover bound and active domains. The
existing resource configuration of any unbound domain is copied to the new configuration
as-is.
During a recovery operation, fewer resources might be available than in the previously booted
configuration. As a result, the Logical Domains Manager might only be able to recover some of
the previously configured domains. Also, a recovered domain might not include all of the
resources from its original configuration. For example, a recovered bound domain might have
fewer I/O resources than it had in its previous configuration. A domain might not be recovered
if its I/O devices are no longer present or if its parent service domain could not be recovered.
Recovery mode records its steps in the Logical Domains Manager SMF log,
/var/svc/log/ldoms-ldmd:default.log. A message is written to the system console when the
Logical Domains Manager starts a recovery, when it reboots the control domain, and when the
recovery completes.
Caution A recovered domain is not guaranteed to be completely operable. The domain might
not include a resource that is essential to run the OS instance or an application. For example, a
recovered domain might only have a network resource and no disk resource. Or, a recovered
domain might be missing a file system that is required to run an application. Using multipathed
I/O for a domain reduces the impact of missing I/O resources.
Degraded Configuration
Each physical domain can have only one degraded configuration saved to the SP. If a degraded
configuration already exists, it is replaced by the newly created degraded configuration.
You cannot interact directly with degraded configurations. The system firmware transparently
boots the degraded version of the next power-on configuration, if necessary. This transparency
enables the system to boot the original configuration after a power cycle when the missing
resources reappear. When the active configuration is a degraded configuration, it is marked as
[degraded] in the ldm list-spconfig output.
Note A previously missing resource that reappears on a subsequent power cycle has no effect
on the contents of a normal configuration. However, if you subsequently select the
configuration that triggered recovery mode, the SP boots the original, non-degraded
configuration now that all its hardware is available.
When the ldmd/recovery_mode property is not present or is set to auto, recovery mode is
enabled.
When the ldmd/recovery_mode property is set to never, the Logical Domains Manager exits
recovery mode without taking any action and the physical domain runs the factory-default
configuration.
Note If the system firmware requests recovery mode while it is not enabled, issue the following
commands to enable recovery mode after the request is made:
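For example, the following sketch uses the standard SMF commands to set the
ldmd/recovery_mode property to auto and refresh the service:
primary# svccfg -s ldmd setprop ldmd/recovery_mode = astring: auto
primary# svcadm refresh ldmd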
Recovery mode is initiated immediately in this scenario only if no changes were made to the
system, that is, if it is still in the factory-default configuration.
In addition to enabling recovery mode, you can specify a timeout value for a root domain boot
during recovery. By default, the ldmd/recovery_mode_boot_timeout property value is 30
minutes. Valid values start at 5 minutes.
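For example, the following sketch lowers the timeout to 10 minutes (because the property
already exists, the SMF property type can be omitted):
primary# svccfg -s ldmd setprop ldmd/recovery_mode_boot_timeout = 10
primary# svcadm refresh ldmd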
This chapter contains information about using the Oracle VM Server for SPARC software and
tasks that are not described in the preceding chapters.
Updating Property Values in the /etc/system File
If the property is found in one of the /etc/system.d files, update the property value in the
existing file.
If the property is found in the /etc/system file or it is not found, create a file in the
/etc/system.d directory with a name such as the following example:
/etc/system.d/com.company-name:ldoms-config
Note Enabling network access to a console has security implications: any user can connect to a
console. For this reason, network access to consoles is disabled by default.
A Service Management Facility manifest is an XML file that describes a service. For more
information about creating an SMF manifest, refer to the Oracle Solaris 10 System
Administrator Documentation (http://download.oracle.com/docs/cd/E18752_01/
index.html).
Note To access a non-English OS in a guest domain through the console, the terminal for the
console must be in the locale required by the OS.
2 Connect to the associated TCP port (localhost at port 5000 in this example).
# telnet localhost 5000
primary-vnts-group1: h, l, c{id}, n{name}, q:
You are prompted to select one of the domain consoles.
Note To reassign the console to a different group or vcc instance, the domain must be
unbound; that is, it has to be in the inactive state. Refer to the vntsd(1M) man page for more
information about configuring and using SMF to manage vntsd and using console groups.
After you run the ldm stop command, the domain could still be processing the shutdown
request. Use the ldm list-domain command to verify the status of the domain. For example:
The preceding list shows the domain as active, but the s flag indicates that the domain is in the
process of stopping. This should be a transitory state.
The ldm stop command uses the shutdown command to stop a domain. The execution of the
shutdown sequence usually takes much longer than a quick stop, which can be performed by
running the ldm stop -q command. See the ldm(1M) man page.
If the domain does not stop before the timeout expires, the ldm stop command reports the following:
domain-name stop timed out. The domain might still be in the process of shutting down.
Either let it continue, or specify -f to force it to stop.
While this shutdown sequence runs, the s flag is also shown for the domain.
Operating the Oracle Solaris OS With Oracle VM Server for SPARC

To reach the ok prompt from the Oracle Solaris OS, you must halt the domain by using the
Oracle Solaris OS halt command.
To save your current domain configurations to the SP, use the following command:
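primary# ldm add-config config-name
where config-name is the name to use for the saved configuration.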
When you initiate such a break, the Oracle Solaris OS issues the following prompt:
Type the letter that represents what you want the system to do after these types of breaks.
For information about the consequences of rebooting a domain that has the root domain role,
see Rebooting the Root Domain With PCIe Endpoints Configured on page 149.
This option enables you to set the configuration on the next power on to another configuration,
including the factory-default shipping configuration.
You can invoke the command regardless of whether the host is powered on or off. It takes effect
on the next host reset or power on.
To reset the logical domain configuration, you set the option to factory-default.
You also can select other configurations that have been created with the Logical Domains
Manager using the ldm add-config command and stored on the service processor (SP). The
name you specify in the Logical Domains Manager ldm add-config command can be used to
select that configuration with the ILOM bootmode command. For example, assume you stored
the configuration with the name ldm-config1.
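From the ILOM prompt, you would then select that configuration for the next power on; the
following is a sketch that assumes the standard ILOM bootmode syntax:
-> set /HOST/bootmode config=ldm-config1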
Configuring Domain Dependencies
By defining a guest domain as a slave of its service domain, you can specify the behavior of the
guest domain when its service domain goes down. When no such dependency is established, a
guest domain just waits for its service domain to return to service.
Note The Logical Domains Manager does not permit you to create domain relationships that
create a dependency cycle. For more information, see Dependency Cycles on page 369.
SOFTSTATE
Solaris running
HOSTID
0x83d8b31c
CONTROL
failure-policy=ignore
DEPENDENCY
master=
------------------------------------------------------------------------------
NAME STATE FLAGS UTIL
orange bound ------
HOSTID
0x84fb28ef
CONTROL
failure-policy=ignore
DEPENDENCY
master=primary
------------------------------------------------------------------------------
NAME STATE FLAGS UTIL
tangerine bound ------
HOSTID
0x84f948e9
CONTROL
failure-policy=ignore
DEPENDENCY
master=orange,primary
Dependency Cycles
The Logical Domains Manager does not permit you to create domain relationships that create a
dependency cycle. A dependency cycle is a relationship between two or more domains that leads
to a situation where a slave domain depends on itself or a master domain depends on one of its
slave domains.
The Logical Domains Manager determines whether a dependency cycle exists before adding a
dependency. The Logical Domains Manager starts at the slave domain and searches along all
paths that are specified by the master array until the end of the path is reached. Any dependency
cycles found along the way are reported as errors.
The following example shows how a dependency cycle might be created. The first command
creates a slave domain called mohawk that specifies its master domain as primary. So, mohawk
depends on primary in the dependency chain shown in the following diagram.
The second command creates a slave domain called primary that specifies its master domain as
counter. So, mohawk depends on primary, which depends on counter in the dependency chain
shown in the following diagram.
The third command attempts to create a dependency between the counter and mohawk
domains, which would produce the dependency cycle shown in the following diagram.
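The three commands might look like the following sketch (domain-creation options such as
CPU and memory are omitted, and the domain names follow the example above):
primary# ldm add-domain master=primary mohawk
primary# ldm set-domain master=counter primary
primary# ldm set-domain master=mohawk counter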
The ldm set-domain command will fail with the following error message:
Determining Where Errors Occur by Mapping CPU and Memory Addresses
FMA reports CPU errors in terms of physical CPU numbers and memory errors in terms of
physical memory addresses.
To determine the logical domain in which an error occurred and the corresponding virtual
CPU number or real memory address within that domain, you must perform a mapping.
CPU Mapping
You can find the domain and the virtual CPU number within the domain that correspond to a
given physical CPU number.
First, generate a long parseable list for all domains by using the following command:
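primary# ldm list -l -p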
Memory Mapping
You can find the domain and the real memory address within the domain that correspond to a
given physical memory address (PA).
First, generate a long parseable list for all domains.
EXAMPLE 175 Determining the Virtual CPU That Corresponds to a Physical CPU Number
The logical domain configuration is shown in Example 174. This example describes how to
determine the domain and the virtual CPU corresponding to physical CPU number 5, and the
domain and the real address corresponding to physical address 0x7e816000.
Looking through the VCPU entries in the list for the one with the pid field equal to 5, you can find
the following entry under logical domain ldg1.
|vid=1|pid=5|util=29|strand=100
Hence, the physical CPU number 5 is in domain ldg1 and within the domain it has virtual CPU
number 1.
Looking through the MEMORY entries in the list, you can find the following entry under domain
ldg2.
ra=0x8000000|pa=0x78000000|size=1073741824
Where 0x78000000 <= 0x7e816000 <= (0x78000000 + 1073741824 - 1); that is, pa <= PA <= (pa
+ size - 1). Hence, the PA is in domain ldg2 and the corresponding real address is 0x8000000 +
(0x7e816000 - 0x78000000) = 0xe816000.
Note The UUID is lost if you use the ldm migrate-domain -f command to migrate a domain
to a target machine that runs an older version of the Logical Domains Manager. When you
migrate a domain from a source machine that runs an older version of the Logical Domains
Manager, the domain is assigned a new UUID as part of the migration. Otherwise, the UUID is
migrated.
You can obtain the UUID for a domain by running the ldm list -l, ldm list-bindings, or
ldm list -o domain command. The following examples show the UUID for the ldg1 domain:
UUID
6c908858-12ef-e520-9eb3-f1cd3dbc3a59
Virtual Domain Information Command and API

The following list shows some of the information that you can gather about a virtual domain by
using the virtinfo command or the API:
Domain type (implementation, control, guest, I/O, service, root)
Domain name determined by the Virtual Domain Manager
Universally unique identifier (UUID) of the domain
Network node name of the domain's control domain
Chassis serial number on which the domain is running
For information about the virtinfo command, see the virtinfo(1M) man page. For
information about the API, see the libv12n(3LIB) and v12n(3EXT) man pages.
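For example, on the Oracle Solaris 10 OS you might display all of the available information
about the current domain as follows (the -a option is an assumption here; check the
virtinfo(1M) man page for the options available in your OS release):
# virtinfo -a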
Using Logical Domain Channels

This software and system firmware provide a large pool of LDC endpoints that you can use for
the control domain and guest domains. This LDC endpoint pool is available only for the SPARC
T4 servers, SPARC T5 servers, SPARC T7 series servers, SPARC M5 servers, SPARC M6 servers,
SPARC M7 series servers, and Fujitsu M10 servers. The number of LDCs in the pool is based on
the platform type as follows:
SPARC T4 Server 1984 LDC endpoints per guest domain, 98304 LDC endpoints total
SPARC T5 Server 1984 LDC endpoints per guest domain, 98304 LDC endpoints total
SPARC T7 Series Server 1984 LDC endpoints per guest domain, 98304 LDC endpoints
total
SPARC M5 Server 1984 LDC endpoints per guest domain, 98304 LDC endpoints per
physical domain
SPARC M6 Server 1984 LDC endpoints per guest domain, 98304 LDC endpoints per
physical domain
SPARC M7 Series Server 1984 LDC endpoints per guest domain, 98304 LDC endpoints
per physical domain
Fujitsu M10 server 1984 LDC endpoints per guest domain, 196608 LDC endpoints total
The required system firmware to support the LDC endpoint pool is 8.5.2 for SPARC T4 servers,
9.2.1 for SPARC T5 servers, SPARC M5 servers, and SPARC M6 servers, 9.4.3 for SPARC T7
series servers and SPARC M7 series servers, and XCP2240 for Fujitsu M10 servers.
The following LDC endpoint limits still apply if you run an older version of the system firmware
on a supported platform or on an UltraSPARC T2, UltraSPARC T2 Plus, or SPARC T3
platform:
UltraSPARC T2 Server 512 LDC endpoints
UltraSPARC T2 Plus Server 768 LDC endpoints
SPARC T3 Server 768 LDC endpoints
SPARC T4 Server 768 LDC endpoints
SPARC T5 Server 768 LDC endpoints
SPARC T7 Series Server 768 LDC endpoints
SPARC M5 Server 768 LDC endpoints
SPARC M6 Server 768 LDC endpoints
SPARC M7 Series Servers 768 LDC endpoints
Fujitsu M10 servers 768 LDC endpoints
This limitation might be an issue on the control domain because of the potentially large number
of LDC endpoints that are used for both virtual I/O data communications and the Logical
Domains Manager control of the other domains.
If you attempt to add a service or bind a domain so that the number of LDC endpoints exceeds
the limit on any single domain, the operation fails with an error message similar to the
following:
The following guidelines enable you to plan properly for using LDC endpoints and explain why
you might experience an overflow of the LDC capabilities of the control domain:
The control domain uses approximately 15 LDC endpoints for various communication
purposes with the hypervisor, Fault Management Architecture (FMA), and the system
processor (SP), independent of the number of other domains configured. The number of
LDC endpoints used by the control domain depends on the platform and on the version of
the software that is used.
The Logical Domains Manager allocates an LDC endpoint to the control domain for every
domain, including itself, for control traffic.
Each virtual I/O service on the control domain uses one LDC endpoint for every connected
client of that service. Each domain needs at least a virtual network, a virtual disk, and a
virtual console.
The following equation incorporates these guidelines to determine the number of LDC
endpoints that are required by the control domain:
15 + number-of-domains + number-of-virtual-services = total-LDC-endpoints
number-of-domains is the total number of domains including the control domain and
number-of-virtual-services is the total number of virtual I/O devices that are serviced by this
domain.
The following example shows how to use the equation to determine the number of LDC
endpoints when there is a control domain and eight additional domains:
15 + 9 + (8 x 3) = 48 LDC endpoints
The following example has 45 guest domains and each domain includes five virtual disks, two
virtual networks, and a virtual console. The calculation yields the following result:
15 + 46 + (45 x 8) = 421 LDC endpoints
Depending upon the number of supported LDC endpoints of your platform, the Logical
Domains Manager will either accept or reject the configuration.
If you run out of LDC endpoints on the control domain, consider creating service domains or
I/O domains to provide virtual I/O services to the guest domains. This action enables the LDC
endpoints to be created on the I/O domains and the service domains instead of on the control
domain.
A guest domain can also run out of LDC endpoints. This situation might be caused by the
inter-vnet-link property being set to on, which assigns additional LDC endpoints to guest
domains to connect directly to each other.
The following equation determines the number of LDC endpoints that are required by a guest
domain when inter-vnet-link=off:
2 + number-of-vnets + number-of-vdisks = total-LDC-endpoints
2 represents the virtual console and control traffic, number-of-vnets is the total number of
virtual network devices assigned to the guest domain, and number-of-vdisks is the total number
of virtual disks assigned to the guest domain.
The following example shows how to use the equation to determine the number of LDC
endpoints per guest domain when inter-vnet-link=off and you have two virtual disks and
two virtual networks:
2 + 2 + 2 = 6 LDC endpoints
The following equation determines the number of LDC endpoints that are required by a guest
domain when inter-vnet-link=on:
2 + [(number-of-vnets-from-vswX x number-of-vnets-in-vswX) ...] + number-of-vdisks =
total-LDC-endpoints
2 represents the virtual console and control traffic, number-of-vnets-from-vswX is the total
number of virtual network devices assigned to the guest domain from the vswX virtual switch,
number-of-vnets-in-vswX is the total number of virtual network devices on the vswX virtual
switch, and number-of-vdisks is the total number of virtual disks assigned to the guest
domain.
The following example shows how to use the equation to determine the number of LDC
endpoints per guest domain when inter-vnet-link=on and you have two virtual disks and two
virtual switches. The first virtual switch has eight virtual networks and assigns four of them to
the domain. The second virtual switch assigns all eight of its virtual networks to the domain:
2 + (4 x 8) + (8 x 8) + 2 = 100 LDC endpoints
To resolve the issue of running out of LDC endpoints on a guest domain, consider using the ldm
add-vsw or ldm set-vsw command to set the inter-vnet-link property to off. This action
reduces the number of LDC endpoints in the domain or domains that have the virtual network
devices. However, the off property value does not affect the service domain that has the virtual
switch because the service domain still requires an LDC connection to each virtual network
device. When this property is set to off, LDC channels are not used for inter-vnet
communications. Instead, an LDC channel is assigned only for communication between virtual
network devices and virtual switch devices. See the ldm(1M) man page.
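For example, the following sketch disables inter-vnet links on an existing virtual switch
(primary-vsw0 is a placeholder name):
primary# ldm set-vsw inter-vnet-link=off primary-vsw0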
Note Although disabling the assignment of inter-vnet links reduces the number of LDC
endpoints, it might negatively affect guest-to-guest network performance. This degradation
would occur because all guest-to-guest communications traffic goes through the virtual switch
rather than directly from one guest domain to another guest domain.
Booting a Large Number of Domains

If unallocated virtual CPUs are available, assign them to the service domain to help process the
virtual I/O requests. Allocate 4 to 8 virtual CPUs to the service domain when creating more
than 32 domains. In cases where maximum domain configurations have only a single CPU in
the service domain, do not put unnecessary stress on the single CPU when configuring and
using the domain. The virtual switch (vsw) services should be spread across all the network
adapters available in the machine. For example, if booting 128 domains on a Sun SPARC
Enterprise T5240 server, create 4 vsw services, each serving 32 virtual net (vnet) instances.
Assigning more than 32 vnet instances per vsw service could cause hard hangs in the service
domain.
Memory and swap space usage increases in a guest domain when the vsw services used by the
domain provide services to many virtual networks in multiple domains. This increase is due to
the peer-to-peer links between all the vnet instances connected to the vsw. The service domain
benefits from having extra memory. The recommended minimum is 4 Gbytes when running
more than 64 domains. Start domains in groups of 10 or fewer, and wait for them to boot before
starting the next batch. The same advice applies to installing operating systems on domains.
You can reduce the number of links by disabling inter-vnet links. See Inter-Vnet LDC
Channels on page 230.
Logical Domains Variable Persistence
Logical Domains variables for a domain can be specified using any of the following methods:
At the OpenBoot prompt.
Using the Oracle Solaris OS eeprom(1M) command.
Using the Logical Domains Manager CLI (ldm).
In a limited fashion, from the system controller (SC) using the bootmode command. This
method can be used for only certain variables, and only when in the factory-default
configuration.
Variable updates that are made by using any of these methods should always persist across
reboots of the domain. The variable updates also always apply to any subsequent domain
configurations that were saved to the SC.
In Oracle VM Server for SPARC 3.3 software, variable updates do not persist as expected in a
few cases:
All methods of updating a variable persist across reboots of that domain. However, they do
not persist across a power cycle of the system unless a subsequent logical domain
configuration is saved to the SC.
However, in the control domain, updates made using either OpenBoot firmware commands
or the eeprom command do persist across a power cycle of the system even without
subsequently saving a new logical domain configuration to the SC. The eeprom command
supports this behavior on SPARC T5 servers, SPARC T7 series servers, SPARC M5 servers,
SPARC M6 servers, and SPARC M7 series servers, and on SPARC T3 servers and SPARC T4
servers that run at least version 8.2.1 of the system firmware.
If you are concerned about Logical Domains variable changes, do one of the following:
Bring the system to the ok prompt and update the variables.
Update the variables while the Logical Domains Manager is disabled:
# svcadm disable ldmd
update variables
# svcadm enable ldmd
When running Live Upgrade, perform the following steps:
# svcadm disable -t ldmd
# luactivate be3
# init 6
If you modify the time or date on a logical domain, for example, using the ntpdate command,
the change persists across reboots of the domain but not across a power cycle of the host. To
ensure that time changes persist, save the configuration with the time change to the SP and boot
from that configuration.
The following Bug IDs have been filed to resolve these issues: 15375997, 15387338, 15387606,
and 15415199.
Note These limitations do not apply to the SPARC M7 series servers and SPARC T7 series
servers.
When you enable I/O virtualization on a PCIe bus, interrupt hardware resources are assigned to
each I/O domain. Each domain is allotted a finite number of those resources, which might lead
to some interrupt allocation issues. This situation affects only the UltraSPARC T2, UltraSPARC
T2 Plus, SPARC T3, SPARC T4, SPARC T5, SPARC M5, and SPARC M6 platforms.
The following warning on the Oracle Solaris console means the interrupt supply was exhausted
while attaching I/O device drivers:
Adjusting the Interrupt Limit
Specifically, the limit might need adjustment if the system is partitioned into multiple logical
domains and if too many I/O devices are assigned to any guest domain. Oracle VM Server for
SPARC divides the total number of interrupts into smaller sets and assigns them to guest
domains. If too many I/O devices are assigned to a guest domain, its interrupt supply might be
too small to provide each device with the default limit of interrupts. Thus, the guest domain
exhausts its interrupt supply before it completely attaches all the drivers.
Some drivers provide an optional callback routine that permits the Oracle Solaris OS to
automatically adjust their interrupts. The default limit does not apply to these drivers.
Use the ::irmpools and ::irmreqs MDB macros to determine how interrupts are used. The
::irmpools macro shows the overall interrupt supply divided into pools. The ::irmreqs macro
shows which devices are mapped to each pool. For each device, ::irmreqs shows whether the
default limit is enforced by an optional callback routine, how many interrupts each driver
requested, and how many interrupts each driver has.
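For example, you might run the macros against the running kernel as follows (a sketch; the
output is omitted here):
# echo "::irmpools" | mdb -k
# echo "::irmreqs" | mdb -k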
Although these macros do not show information about drivers that failed to attach, you can use
the information to calculate the extent to which you can adjust the default limit. You can force
any device that uses more than one interrupt without providing a callback routine to use fewer
interrupts by adjusting the default limit. For such devices, reduce the default limit to free
interrupts that can be used by other devices.
To adjust the default limit, set the ddi_msix_alloc_limit property to a value from 1 to 8 in the
/etc/system file. Then, reboot the system for the change to take effect.
For information about correctly creating or updating /etc/system property values, see
Updating Property Values in the /etc/system File on page 362.
To maximize performance, start by assigning larger values and decrease the values in small
increments until the system boots successfully without any warnings. Use the ::irmpools and
::irmreqs macros to measure the adjustment's impact on all attached drivers.
For example, suppose the following warnings are issued while booting the Oracle Solaris OS in a
guest domain:
The default limit in this example is 8 interrupts per device, which is not enough interrupts to
accommodate attaching the final emlxs3 device to the system. Assuming that all emlxs
instances behave in the same way, emlxs3 probably requested 8 interrupts.
By subtracting the 12 interrupts used by all of the igb devices from the total pool size of 36
interrupts, 24 interrupts are available for the emlxs devices. Dividing the 24 interrupts by 4
suggests that 6 interrupts per device would enable all emlxs devices to attach with equal
performance. So, the following adjustment is added to the /etc/system file:
set ddi_msix_alloc_limit = 6
For information about correctly creating or updating /etc/system property values, see
Updating Property Values in the /etc/system File on page 362.
When the system successfully boots without warnings, the ::irmpools and ::irmreqs macros
show the following updated information:
Listing Domain I/O Dependencies

Be aware of implicit I/O dependencies between domains: an outage in a service domain or a
root domain results in a service interruption for the dependent domains as well.
You can use the ldm list-dependencies command to view the I/O dependencies between
domains. In addition to listing the dependencies of a domain, you can invert the output to show
the dependents of a particular domain.
The following list shows the types of I/O dependencies that you can view by using the ldm
list-dependencies command:
VDISK Dependency created when a virtual disk is connected to a virtual disk backend that
has been exported by a virtual disk server
VNET Dependency created when a virtual network device is connected to a virtual switch
IOV Dependency created when an SR-IOV virtual function is associated with an SR-IOV
physical function
The following ldm list-dependencies commands show some of the ways in which you can
view domain dependency information:
To show detailed domain dependency information, use the -l option.
primary# ldm list-dependencies -l
DOMAIN DEPENDENCY TYPE DEVICE
primary
svcdom
ldg0 primary VDISK primary-vds0/vdisk0
VNET primary-vsw0/vnet0
svcdom VDISK svcdom-vds0/vdisk1
VNET svcdom-vsw0/vnet1
ldg1 primary VDISK primary-vds0/vdisk0
VNET primary-vsw0/vnet0
IOV /SYS/MB/NET0/IOVNET.PF0.VF0
svcdom VDISK svcdom-vds0/vdisk1
VNET svcdom-vsw0/vnet1
IOV /SYS/MB/NET2/IOVNET.PF0.VF0
To show detailed information about dependents grouped by their dependencies, use both
the -l and -r options.
primary# ldm list-dependencies -r -l
DOMAIN DEPENDENT TYPE DEVICE
primary ldg0 VDISK primary-vds0/vdisk0
VNET primary-vsw0/vnet0
ldg1 VDISK primary-vds0/vdisk0
VNET primary-vsw0/vnet0
IOV /SYS/MB/NET0/IOVNET.PF0.VF0
svcdom ldg0 VDISK svcdom-vds0/vdisk1
VNET svcdom-vsw0/vnet1
ldg1 VDISK svcdom-vds0/vdisk1
VNET svcdom-vsw0/vnet1
IOV /SYS/MB/NET2/IOVNET.PF0.VF0
On SPARC T7 series servers and SPARC M7 series servers, the ILOM interconnect service
enables communication between the ldmd daemon and the service processor (SP). The
ilomconfig-interconnect service is enabled by default. To verify that the ILOM interconnect
service is enabled, see How to Verify the ILOM Interconnect Configuration on page 54.
Caution Do not disable the ilomconfig-interconnect service. Disabling this service might
prevent the correct operation of logical domains and the OS.
1 Use the svcadm command to enable the Logical Domains Manager daemon, ldmd.
# svcadm enable ldmd
For more information about the svcadm command, see the svcadm(1M) man page.
Saving and Restoring Autosave Configuration Data
You can use the tar or cpio command to save and restore the entire contents of the directories.
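For example, the following sketch archives the autosave directories with tar (the archive path
is a placeholder):
# cd /
# tar cvf /backup/ldoms-autosave.tar var/share/ldomsmanager/autosave-*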
Note Each autosave directory includes a timestamp for the last SP configuration update for the
related configuration. If you restore the autosave files, the timestamp might be out of sync. In
this case, the restored autosave configurations are shown in their previous state, either [newer]
or up to date.
For more information about autosave configurations, see Managing Domain Configurations
on page 343.
2 (Optional) Remove the existing autosave directories to ensure a clean restore operation.
Sometimes an autosave directory might include extraneous files, perhaps left over from a
previous configuration, that might corrupt the configuration that was downloaded to the SP. In
such cases, clean the autosave directory prior to the restore operation as shown in this example:
# cd /
# rm -rf var/share/ldomsmanager/autosave-*
Also save and restore the /var/share/ldomsmanager/ldom-db.xml file when you perform any
other operation that is destructive to the control domain's file data, such as a disk swap.
Factory Default Configuration and Disabling Domains

This section describes how to remove all guest domains, remove all domain configurations, and
revert the configuration to the factory default.
Note You might be unable to unbind an I/O domain if it is providing services required by the
control domain. In this situation, skip this step.
2 Remove all configurations (config-name) previously saved to the SP except for the
factory-default configuration.
Use the following command for each such configuration:
primary# ldm rm-config config-name
After you remove all the configurations previously saved to the SP, the factory-default
configuration is the next configuration to be used when the control domain (primary) is rebooted.
3 Perform a power cycle of the system to load the factory default configuration.
-> stop /SYS
-> start /SYS
Caution If you disable the Logical Domains Manager, this action disables some services, such as
error reporting and power management. In the case of error reporting, if you are in the
factory-default configuration, you can reboot the control domain to restore error reporting.
However, you cannot re-enable power management. In addition, some system management or
monitoring tools rely on the Logical Domains Manager.
2 Perform a power cycle of the system to load the factory default configuration.
-> reset /SYS
Part II

Chapter 18
You run these commands in the control domain of an Oracle VM Server for SPARC system. In
addition to running the commands on the command line, you can run them in a fully
automated fashion or from other programs as part of a larger workflow.
Installing the Oracle VM Server for SPARC Template Utilities
Then, install the Oracle VM Server for SPARC template utilities software package, ovmtutils.
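For example, on the Oracle Solaris 11 OS you might install the package as follows (assuming
that ovmtutils is available from your configured package publisher):
primary# pkg install ovmtutils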
Note The creation and development of templates with applications and first-boot scripts is an
iterative process. Take care to synchronize all aspects of the configuration by using a source
code management system to manage the scripts and properties.
The following describes the stages of the template-creation process, the actions that are taken,
and how to use the Oracle VM Server for SPARC template utilities to assist in the process:
1. Authoring a template. While pre-built, generic templates are available, you can create a
custom template from an existing domain. This domain must have all of the operating
system components, application software, and other utilities that you want fully installed.
Typically, the environment is configured as fully as possible, with only a small number of
actions required to finalize the environment. Any domain settings such as memory, virtual
CPU, virtual networking, and disks should reflect the deployment that you want.
At this stage, you create one or more first-boot scripts. Include these scripts in the
environment that performs the final configuration based on the properties that you supply.
Be sure to record and describe these properties in a README file for each template.
Note If any first-boot scripts access domain variables, ensure that the ovmtprop utility is
installed in the guest domain.
2. Creating a template. Before you create a template, ensure that the source domain
environment is unconfigured so that it can be configured later by prescribed actions, which
are often part of the first-boot scripts.
For example, perform the following steps:
Remove any application-specific configurations to be re-created later.
Use default values for configuration files.
Ensure that you reset any Oracle Solaris OS configuration information such as system
name, network configuration, and passwords. This configuration information is later
supplied by property values and configuration scripts.
Export any zpools other than the root file system so that they can be recognized by new
domains.
After you take these steps, you can shut down the domain and run the ovmtcreate utility to
create a template from the domain.
3. Specifying the name of the template. Use the following format:
technology_OS_application_architecture_build.ova
For example, the following template name is for a domain that runs build 2 of the Oracle
Solaris 11.2 OS on a SPARC platform and that runs version 12.1.2 of the WebLogic server:
OVM_S11.2_WLS12.1.2_SPARC_B2.ova
4. Distributing the template. The template is a single file with a .ova extension. The file
contains the compressed disk images and the metadata that are required for deployment.
The template also contains a manifest file of payload file checksums, which might be
combined with an overall archive checksum to validate that the content has not been
changed since distribution.
You can distribute the template by using web-based services or maintain a central repository
rather than duplicating templates.
5. Deploying the template. Because the template captures only those aspects of a system that
are seen by the source domain, you must understand which services must be present to
support the deployment of the template.
The required services include the following:
One or more virtual switches, attached to appropriate interfaces, to which the virtual
networks from the template can be connected
Virtual disk services
Console services
While the ovmtdeploy utility can override many of these settings, the minimum values that
are supplied with a template represent the baseline requirements.
You can use the ovmtdeploy utility to automatically extract, decompress, and copy the
virtual disks to deployment directories, and to build the various virtual devices that the
template describes.
At this point, you could start the domain but you might need to perform some manual
configuration steps by using the domain console before the domain is fully functional.
6. Automatically configuring the domain. The configuration of a domain that is created by a
template consists of several types of actions. For example, you might specify property
name-value pairs to provide first-boot scripts with the information to configure. You might
also back-mount virtual disks to the control domain to perform actions on the domain file
systems such as copying configuration files.
The ovmtconfig utility automates these domain-configuration activities and enables you to
specify the actions to take and the properties to use to configure a domain by specifying one
or more command scripts and property files.
To configure the Oracle Solaris OS, the ovmtconfig utility back-mounts the domain's root
file system and creates an sc_profile.xml file from the supplied configuration scripts and
properties. This profile enables the Oracle Solaris OS to configure itself on first boot.
7. First configuration. Following the successful configuration of the Oracle Solaris OS and
first boot, you must configure any installed applications. During the configuration phase,
the ovmtconfig utility passes configuration information to the deployed domain by using
one of the following methods:
Direct action The ovmtconfig utility back-mounts the guest domain file systems to
the control domain and takes direct action on the files and file systems. Actions might
include the creation of configuration files or the copying of system binaries. These
actions are described in the scripts that you supply to the ovmtconfig utility.
These actions do not typically include processes that are designed to run in the guest
domain because such actions might affect the control domain. Use the ovmtconfig -c
command to specify the commands to run.
Domain variables In addition to a local properties file, you can set domain variables by
running the ovmtconfig utility in the control domain that can then be used by the
ovmtprop utility in the guest domain. This method enables first-boot scripts to access the
properties directly and provides configuration information directly to the guest domain
after the configuration completes.
Caution Do not use unencrypted properties to pass sensitive information to the domain
such as passwords. Properties other than those that are used to configure the Oracle
Solaris OS are passed to the domain as ldm variables in clear text. These property values
are visible to a user on the control domain who is authorized to execute ldm commands
and to a user who is logged in to the deployed domain.
To work around this issue, manually remove the domain variable after you have
deployed the guest domain and the application is fully configured.
A domain variable is prefixed with the OVMTVAR_ID string, where ID is four digits. This
example shows a domain variable declaration for the system netmask:
OVMTVAR_1006=ovmt.prop.key.com.oracle.solaris.network.netmask.0=24
From the control domain, use the ldm list-variable | grep OVMTVAR_ command to
list the domain variables. Then, remove any variables with sensitive information by
using the ldm remove-variable OVMTVAR_ID command.
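For example, the following sketch lists and then removes a variable from a deployed domain
(the ldg1 domain name and the OVMTVAR_1006 variable ID are placeholders from the earlier
example):
primary# ldm list-variable ldg1 | grep OVMTVAR_
primary# ldm remove-variable OVMTVAR_1006 ldg1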
You might also automate a configuration change on a domain that does not have network
access by using a supervisor script that runs ovmtprop in the guest domain and by running
ovmtconfig -v from the control domain.
EXAMPLE 182 Configuring Oracle VM Server for SPARC Template Properties (Continued)
Use the ldm list -l command to verify that the value for the
com.oracle.solaris.system.computer-name property is test.
# ldm ls -l ldg1
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
ldg1 active -n---- 5000 16 16G 0.0% 0.0% 2h 7m
..
CONSTRAINT
VARIABLES
auto-boot?=true
VMAPI TO GUEST
com.oracle.solaris.fmri.count=0
com.oracle.solaris.system.computer-name=test
Validating deployment
Domain created:
The ldm list output shows that you have created a new domain called ldg1.
# ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL NORM UPTIME
primary active -n-cv- UART 8 40G 1.4% 1.1% 6d 2h 18m
ldg1 active -n---- 5000 8 8G 41% 38% 28s
EXAMPLE 184 Managing the Oracle VM Server for SPARC Template Library
The ovmtlibrary command manages a database and file-system-based repository for Oracle
VM Server for SPARC templates by organizing files and by storing, retrieving, and editing
information in the database.
The following command creates a template library in /export/user1/ovmtlibrary_example:
# ovmtlibrary -c init -l /export/user1/ovmtlibrary_example
Oracle Virtual Machine for SPARC Template Library Utility
ovmtlibrary Version: 3.3.0.0.12
Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved.
Init complete
The following command stores the sol-11_2-ovm-2-sparc.ova template in the
/export/user1/ovmtlibrary_example library:
# ovmtlibrary -c store -d "ovmtlibrary example" -o http://system1.example.com/s11.2/templates/sol-11_2-ovm-2-sparc.ova -l /export/user1/ovmtlibrary_example
event id is 2
********************************************************************************
converted http://system1.example.com/s11.2/templates/sol-11_2-ovm-2-sparc.ova (646) ->
http://system1.example.com/s11.2/templates/sol-11_2-ovm-2-sparc.ova (UTF-8)
--2015-08-18 16:37:17-- http://system1.example.com/s11.2/templates/sol-11_2-ovm-2-sparc.ova
Resolving system1.example.com (system1.example.com)... 10.134.127.18
Connecting to system1.example.com (system1.example.com)|10.134.127.18|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1018341888 (971M) [text/plain]
Saving to: /export/user1/ovmtlibrary_example/repository/templates/1/1/sol-11_2-ovm-2-sparc.ova
/export/user1/ovmtlibrary_example/repo 100%
[==============================================================================>] 971.17M 6.05MB/s in 5m 37s
2015-08-18 16:42:55 (2.88 MB/s) - /export/user1/ovmtlibrary_example/repository/templates/1/1/sol-11_2-ovm-2-sparc.ova saved
[1018341888/1018341888]
********************************************************************************
EXAMPLE 184 Managing the Oracle VM Server for SPARC Template Library (Continued)
Download complete
Extracting the ova file...
Extract complete
Decompress file System.img.gz
Store complete
The following command lists the contents of the /export/user1/ovmtlibrary_example
library:
# ovmtlibrary -c list -l /export/user1/ovmtlibrary_example
Oracle Virtual Machine for SPARC Template Library Utility
ovmtlibrary Version: 3.3.0.0.12
Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved.
Templates present in path "/export/user1/ovmtlibrary_example"
ID Name Version Description Date
--------------------------------------------------------------------------------
1 sol-11_2-ovm-2-sparc 1 ovmtlibrary example 2015-08-18
The following command shows a detailed listing of the
/export/user1/ovmtlibrary_example library:
# ovmtlibrary -c list -i 1 -o -l /export/user1/ovmtlibrary_example
Oracle Virtual Machine for SPARC Template Library Utility
ovmtlibrary Version: 3.3.0.0.12
Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved.
Templates present in path "/export/user1/ovmtlibrary_example"
ID Name Type Path Size(bytes)
--------------------------------------------------------------------------------
1 sol-11_2-ovm-2-sparc.ova ova /export/user1/ovmtlibrary_example/repository/templates/1/1/sol-11_2-ovm-2-sparc.ova 1018341888
2 sol-11_2-ovm-sparc.ovf ovf /export/user1/ovmtlibrary_example/repository/templates/1/1/sol-11_2-ovm-sparc.ovf 3532
3 sol-11_2-ovm-sparc.mf mf /export/user1/ovmtlibrary_example/repository/templates/1/1/sol-11_2-ovm-sparc.mf 137
4 System.img img /export/user1/ovmtlibrary_example/repository/templates/1/1/System.img 21474836480
Chapter 19
Note The ldmp2v utility will no longer be updated to address bug fix or enhancement requests.
This utility will no longer be supported, but will continue to be included and documented as
part of the Oracle VM Server for SPARC software.
Note The ldmp2v command does not support any SPARC based system that runs the Oracle
Solaris 10 OS with a ZFS root or the Oracle Solaris 11 OS.
Oracle VM Server for SPARC P2V Tool Overview
The conversion from a physical system to a virtual system is performed in the following phases:
Collection phase. Runs on the physical source system. In the collect phase, a file system
image of the source system is created based on the configuration information that is collected
about the source system.
Preparation phase. Runs on the control domain of the target system. In the prepare phase,
a logical domain is created on the target system based on the configuration information
collected in the collect phase. The file system image is restored to one or more virtual
disks. You can use the P2V tool to create virtual disks on plain files or ZFS volumes. You can
also create virtual disks on physical disks or LUNs, or on volume manager volumes that you
created. The image is modified to enable it to run as a logical domain.
Conversion phase. Runs on the control domain of the target system. In the convert phase,
the created logical domain is converted into a logical domain that runs the Oracle Solaris 10
OS by using the standard Oracle Solaris upgrade process.
For information about the P2V tool, see the ldmp2v(1M) man page.
The following sections describe how the conversion from a physical system to a virtual system is
performed.
Collection Phase
The collection phase runs on the system to be converted. To create a consistent file system
image, ensure that the system is as quiet as possible and that all applications are stopped. The
ldmp2v command creates a backup of all mounted UFS file systems, so ensure that any file
systems to be moved to a logical domain are mounted. You can exclude mounted file systems
that you do not want to move, such as file systems on SAN storage or file systems that will be
moved by other means. Use the -x option to exclude such file systems. File systems that are
excluded by the -x option are not re-created on the guest domain. You can use the -O option to
exclude files and directories.
No changes are required on the source system. The only requirement is the ldmp2v script that
was installed on the control domain. Ensure that the flarcreate utility is present on the source
system.
Preparation Phase
The preparation phase uses the data collected during the collection phase to create a logical
domain that is comparable to the source system.
You can use the ldmp2v prepare command in one of the following ways:
Automatic mode. This mode automatically creates virtual disks and restores file system
data.
Creates the logical domain and the required virtual disks of the same size as on the
source system.
Partitions the disks and restores the file systems.
If the combined size of the /, /usr, and /var file systems is less than 10 Gbytes, the sizes
of these file systems are automatically adjusted to allow for the larger disk space
requirements of the Oracle Solaris 10 OS. Automatic resize can be disabled by using the
-x no-auto-adjust-fs option or by using the -m option to manually resize a file system.
Modifies the OS image of the logical domain to replace all references to physical
hardware with versions that are appropriate for a logical domain. You can then upgrade
the system to the Oracle Solaris 10 OS by using the normal Oracle Solaris upgrade
process. Modifications include updating the /etc/vfstab file to account for new disk
names. Any Oracle Solaris Volume Manager or Veritas Volume Manager (VxVM)
encapsulated boot disks are automatically unencapsulated during this process. When a
disk is unencapsulated, it is converted into plain disk slices. If VxVM is installed on the
source system, the P2V process disables VxVM on the created guest domain.
Non-automatic mode. You must create the virtual disks and restore the file system data
manually. This mode enables you to change the size and number of disks, the partitioning,
and the file system layout. The preparation phase in this mode runs only the logical domain
creation and the OS image modification steps on the file system.
Cleanup mode. Removes a logical domain and all of the underlying back-end devices that
are created by ldmp2v.
Conversion Phase
In the conversion phase, the logical domain uses the Oracle Solaris upgrade process to upgrade
to the Oracle Solaris 10 OS. The upgrade operation removes all existing packages and installs
the Oracle Solaris 10 sun4v packages, which automatically performs a sun4u-to-sun4v
conversion. The convert phase can use an Oracle Solaris DVD ISO image or a network
installation image. On Oracle Solaris 10 systems, you can also use the Oracle Solaris JumpStart
feature to perform a fully automated upgrade operation.
Back-End Devices
You can create virtual disks for a guest domain on a number of back-end types: files (file), ZFS
volumes (zvol), physical disks or LUNs (disk), or volume manager volumes (disk). The
ldmp2v command automatically creates files or ZFS volumes of the appropriate size if you
specify file or zvol as the back-end type in one of the following ways:
By using the -b option
By specifying the value of the BACKEND_TYPE parameter in the /etc/ldmp2v.conf file
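For example, the following sketch prepares a domain by using ZFS volume back ends (the data
directory and domain name are placeholders):
# ldmp2v prepare -b zvol -d /home/dana/volumia volumia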
The disk back-end type enables you to use a physical disk, LUN, or volume manager volume
(Oracle Solaris Volume Manager and Veritas Volume Manager (VxVM)) as a back-end device
for virtual disks. You must create the disk or volume with an appropriate size prior to beginning
the prepare phase. For a physical disk or LUN, specify the back-end device as slice 2 of the
block or character device of the disk, such as /dev/dsk/c0t3d0s2. For a volume manager
volume, specify the block or character device for the volume, such as /dev/md/dsk/d100 for
Oracle Solaris Volume Manager or /dev/vx/dsk/ldomdg/vol1 for VxVM.
Unless you specify the volume and virtual disk names with the -B backend:volume:vdisk
option, the volumes and virtual disks that you create for the guest are given default names.
backend specifies the name of the back end to use. You must specify backend for the disk
back-end type. backend is optional for the file and zvol back-end types, and can be used to
set a non-default name for the file or ZFS volume that ldmp2v creates. The default name is
$BACKEND_PREFIX/guest-name/diskN.
volume is optional for all back-end types and specifies the name of the virtual disk server
volume to create for the guest domain. If not specified, volume is guest-name-volN.
vdisk is optional for all back-end types and specifies the name of the volume in the guest
domain. If not specified, vdisk is diskN.
Note During the conversion process, the virtual disk is temporarily named guest-name-diskN
to ensure that the name in the control domain is unique.
To specify a blank value for backend, volume, or vdisk, include only the colon separator. For
example, specifying -B ::vdisk001 sets the name of the virtual disk to vdisk001 and uses the
default names for the back end and volume. If you do not specify vdisk, you can omit the trailing
colon separator. For example, -B /ldoms/ldom1/vol001:vol001 specifies the name of the
back-end file as /ldoms/ldom1/vol001 and the volume name as vol001. The default virtual disk
name is disk0.
Installing and Configuring the Oracle VM Server for SPARC P2V Tool
In addition to these prerequisites, configure an NFS file system to be shared by both the source
and target systems. This file system should be writable by root. However, if a shared file system
is not available, use a local file system that is large enough to hold a file system dump of the
source system on both the source and target systems.
Support for VxVM volumes is limited to the following volumes on an encapsulated boot
disk: rootvol, swapvol, usr, var, opt, and home. The original slices for these volumes must
still be present on the boot disk. The P2V tool supports Veritas Volume Manager 5.x on the
Oracle Solaris 10 OS. However, you can also use the P2V tool to convert Solaris 8 and Solaris
9 operating systems that use VxVM.
Oracle Solaris 10 systems that have zones can be converted if the zones are detached by
using the zoneadm detach command prior to running the ldmp2v collect operation. After
the P2V conversion completes, use the zoneadm attach command to reattach the zones that
have been created on the guest domain. For information about performing these steps on a
guest domain, see Chapter 23, Moving and Migrating Non-Global Zones (Tasks), in
System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle
Solaris Zones .
Note The P2V tool does not update any zone configuration, such as the zone path or
network interface, nor does the tool move or configure the storage for the zone path. You
must manually update the zone configuration and move the zone path on the guest domain.
See Chapter 23, Moving and Migrating Non-Global Zones (Tasks), in System
Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris
Zones .
1 Become an administrator.
For Oracle Solaris 11.3, see Chapter 1, About Using Rights to Control Users and Processes, in
Securing Users and Processes in Oracle Solaris 11.3 .
2 Create the /etc/ldmp2v.conf file and configure the following default properties:
VDS Name of the virtual disk service, such as VDS="primary-vds0"
VSW Name of the virtual switch, such as VSW="primary-vsw0"
VCC Name of the virtual console concentrator, such as VCC="primary-vcc0"
BACKEND_TYPE Back-end type of zvol, file, or disk
BACKEND_SPARSE Determines whether to create back-end devices as sparse volumes or files
(BACKEND_SPARSE="yes") or non-sparse volumes or files (BACKEND_SPARSE="no")
BACKEND_PREFIX Location to create virtual disk back-end devices
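For example, a minimal /etc/ldmp2v.conf might look as follows (a sketch; the service names and the ZFS pool are assumptions for a typical configuration):
VDS="primary-vds0"
VSW="primary-vsw0"
VCC="primary-vcc0"
BACKEND_TYPE="zvol"
BACKEND_SPARSE="no"
BACKEND_PREFIX="tank/ldoms"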
Using the ldmp2v Command
The following example shows how to run the collection tool when the source and target
systems share an NFS-mounted file system:
volumia# ldmp2v collect -d /home/dana/volumia
Collecting system configuration ...
Archiving file systems ...
Determining which filesystems will be included in the archive...
Creating the archive...
895080 blocks
Archive creation complete.
Not sharing an NFS-mounted file system. When the source and target systems do not
share an NFS-mounted file system, the file system image can be written to local storage and
later copied to the control domain. The Flash Archive utility automatically excludes the
archive file that it creates from the archive contents.
volumia# ldmp2v collect -d /var/tmp/volumia
Collecting system configuration ...
Archiving file systems ...
Determining which filesystems will be included in the archive...
Creating the archive...
895080 blocks
Archive creation complete.
Copy the flash archive and the manifest file from the /var/tmp/volumia directory to the
target system.
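For example, you might copy the archive and manifest by using a command such as the following (a sketch; the control domain host name and target directory are placeholders):
volumia# scp /var/tmp/volumia/* control-domain:/home/dana/p2v/volumia/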
Tip In some cases, ldmp2v might show cpio command errors. Most commonly, these errors
generate messages such as File size of etc/mnttab has increased by 435. You can
ignore messages that pertain to log files or to files that reflect the system state. Be sure to
review all error messages thoroughly.
Skip the file system backup step. If you already back up the system by using a
third-party backup tool such as NetBackup, you can skip the file system backup step by
using the none archiving method. When you use this option, only the system configuration
manifest is created.
volumia# ldmp2v collect -d /home/dana/p2v/volumia -a none
Collecting system configuration ...
The following file system(s) must be archived manually: / /u01 /var
Note that if the directory specified by -d is not shared by the source and target systems, you
must copy the contents of that directory to the control domain before you begin the
preparation phase.
NETWORK
NAME SERVICE DEVICE MAC MODE PVID VID
vnet0 primary-vsw0 00:03:ba:1d:7a:5a 1
DISK
NAME DEVICE TOUT MPGROUP VOLUME SERVER
disk0 volumia-vol0@primary-vds0
disk1 volumia-vol1@primary-vds0
The following example shows how to completely remove a domain and its back-end devices
by using the -C option.
# ldmp2v prepare -C volumia
Cleaning up domain volumia ...
Removing vdisk disk0 ...
Removing vdisk disk1 ...
Removing domain volumia ...
Removing volume volumia-vol0@primary-vds0 ...
Removing ZFS volume tank/ldoms/volumia/disk0 ...
Removing volume volumia-vol1@primary-vds0 ...
Removing ZFS volume tank/ldoms/volumia/disk1 ...
The following example shows how to resize one or more file systems during P2V by
specifying the mount point and the new size with the -m option.
# ldmp2v prepare -d /home/dana/p2v/volumia -m /:8g volumia
Resizing file systems ...
Creating vdisks ...
Creating file systems ...
Populating file systems ...
Modifying guest domain OS image ...
Removing SVM configuration ...
Modifying file systems on SVM devices ...
Unmounting guest file systems ...
Creating domain volumia ...
Attaching vdisks to domain volumia ...
The sysidcfg file is used only for the upgrade operation, so a configuration such as the
following should be sufficient:
name_service=NONE
root_password=uQkoXlMLCsZhI
system_locale=C
timeserver=localhost
timezone=Europe/Amsterdam
terminal=vt100
security_policy=NONE
nfs4_domain=dynamic
auto_reg=disable
network_interface=PRIMARY {netmask=255.255.255.192
default_route=none protocol_ipv6=no}
For more information about using JumpStart, see Oracle Solaris 10 1/13 Installation Guide:
JumpStart Installations .
Note The example sysidcfg file includes the auto_reg keyword, which was introduced in
the Oracle Solaris 10 9/10 release. This keyword is required only if you are running at least
the Oracle Solaris 10 9/10 release.
# ldmp2v convert -j -n vnet0 -d /p2v/volumia volumia
LDom volumia started
Waiting for Solaris to come up ...
Trying 0.0.0.0...
Connected to 0.
Escape character is ^].
Connecting to console "volumia" in group "volumia" ....
Press ~? for control options ..
SunOS Release 5.10 Version Generic_137137-09 64-bit
Copyright (c) 1983-2010, Oracle and/or its affiliates. All rights reserved.
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet0...
Configured interface vnet0
Processing profile
Caution A safety check is performed prior to converting the guest domain. This check
ensures that none of the original system's IP addresses are active so as to prevent duplicate
active IP addresses on the network. You can use the -x skip-ping-test option to skip this
safety check. Skipping this check speeds up the conversion process. Use this option only if
you are certain that no duplicate IP addresses exist, such as when the original host is not
active.
The answers to the sysid questions are used only for the duration of the upgrade process.
This data is not applied to the existing OS image on disk. The fastest and simplest way to run
the conversion is to select Non-networked. The root password that you specify does not
need to match the root password of the source system. The system's original identity is
preserved by the upgrade and takes effect after the post-upgrade reboot. The time required
to perform the upgrade depends on the Oracle Solaris software cluster that is installed on the original
system.
# ldmp2v convert -i /tank/iso/s10s_u5.iso -d /home/dana/p2v/volumia volumia
Testing original system status ...
LDom volumia started
Waiting for Solaris to come up ...
Select Upgrade (F2) when prompted for the installation type.
Disconnect from the console after the Upgrade has finished.
Trying 0.0.0.0...
Connected to 0.
Escape character is ^].
Connecting to console "volumia" in group "volumia" ....
Press ~? for control options ..
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet0...
Extracting windowing system. Please wait...
Beginning system identification...
Searching for configuration file(s)...
Search complete.
Discovering additional network configuration...
Configured interface vnet0
Setting up Java. Please wait...
Select a Language
0. English
1. French
2. German
3. Italian
4. Japanese
5. Korean
6. Simplified Chinese
7. Spanish
8. Swedish
9. Traditional Chinese
Please make a choice (0 - 9), or press h or ? for help:
[...]
- Solaris Interactive Installation --------------------------------------------
This system is upgradable, so there are two ways to install the Solaris
software.
The Upgrade option updates the Solaris software to the new release, saving
as many modifications to the previous version of Solaris software as
possible. Back up the system before using the Upgrade option.
The Initial option overwrites the system disks with the new version of
Solaris software. This option allows you to preserve any existing file
systems. Back up any modifications made to the previous version of Solaris
software before starting the Initial option.
After you select an option and complete the tasks that follow, a summary of
your actions will be displayed.
-------------------------------------------------------------------------------
F2_Upgrade F3_Go Back F4_Initial F5_Exit F6_Help
PARTITION MENU:
0 - change 0 partition
1 - change 1 partition
2 - change 2 partition
3 - change 3 partition
4 - change 4 partition
5 - change 5 partition
6 - change 6 partition
7 - change 7 partition
select - select a predefined table
partition> 0
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
Enter partition id tag[unassigned]: root
Enter partition permission flags[wm]:
Enter new starting cyl[0]: 0
Enter partition size[0b, 0c, 0e, 0.00mb, 0.00gb]: 8g
partition> label
Ready to label disk, continue? y
partition>
This chapter contains information about using power management on Oracle VM Server for
SPARC systems.
Using Power Management on page 417
Using Power Management
The Logical Domains Manager also applies cycle skipping if the processor, core, core-pair,
or SCC has no bound strands.
CPU dynamic voltage and frequency scaling (DVFS). When the elastic policy is in effect,
the Logical Domains Manager automatically adjusts the clock frequency of processors or
SCCs that are bound to domains running the Oracle Solaris 10 OS. The Logical Domains
Manager also reduces the clock frequency on SPARC T5, SPARC M5, and SPARC M6
processors that have no bound strands. On SPARC T7 series servers, the clock frequency is
reduced on SCCs. This feature is available only on SPARC T5 servers, SPARC T7 series
servers, SPARC M5 servers, SPARC M6 servers, and SPARC M7 series servers.
Coherency link scaling. When the elastic policy is in effect, the Logical Domains Manager
causes the hypervisor to automatically adjust the number of coherency links that are in use.
This feature is only available on SPARC T5-2 systems.
Power limit. You can set a power limit on SPARC T3 servers, SPARC T4 servers, SPARC T5
servers, SPARC T7 series servers, SPARC M5 servers, SPARC M6 servers, and SPARC M7
series servers to restrict the power draw of a system. If the power draw is greater than the
power limit, PM uses techniques to reduce power. You can use the ILOM service processor
(SP) to set the power limit.
You can use the ILOM interface to set a power limit, grace period, and violation action. If
the power limit is exceeded for more than the grace period, the violation action is
performed.
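For example, a 1200-watt limit with a 30-second grace period might be set from the ILOM CLI as in the following sketch; the /SP/powermgmt/budget target and its property names are assumptions that vary by platform and ILOM version, so verify them against your platform's ILOM documentation:
-> set /SP/powermgmt/budget pendingpowerlimit=1200
-> set /SP/powermgmt/budget pendingtimelimit=30
-> set /SP/powermgmt/budget activation_state=enabled
-> set /SP/powermgmt/budget commitpending=true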
If the current power draw exceeds the power limit, an attempt is made to reduce the power
state of CPUs. If the power draw drops below the power limit, the power state of those
resources is permitted to increase. If the system has the elastic policy in effect, an increase in
the power state of resources is driven by the utilization level.
Solaris Power Aware Dispatcher (PAD). A guest domain that runs the Oracle Solaris 11.1
OS uses the power-aware dispatcher (PAD) on SPARC T5 servers, SPARC T7 series servers,
SPARC M5 servers, SPARC M6 servers, and SPARC M7 series servers to minimize power
consumption from idle or under-utilized resources. PAD, instead of the Logical Domains
Manager, adjusts the CPU or SCC clock cycle skip level and DVFS level.
For instructions on configuring the power policy by using the ILOM 3.0 firmware CLI, see
Monitoring Power Consumption in the Oracle Integrated Lights Out Manager (ILOM) 3.0 CLI
Procedures Guide.
The following examples show how to enable the PM Observability Module and show ways in
which to gather power-consumption data for the CPUs that are assigned to your domains.
EXAMPLE 20-2 Using a Profile Shell to Obtain CPU Thread Power-Consumption Data by Using Roles and Rights Profiles
The following example shows how to create the ldmpower role with the LDoms Power Mgmt
Observability rights profile, which permits you to run the ldmpower command.
primary# roleadd -P "LDoms Power Mgmt Observability" ldmpower
primary# passwd ldmpower
New Password:
Re-enter new Password:
passwd: password successfully changed for ldmpower
User sam assumes the ldmpower role and can use the ldmpower command. For example:
$ id
uid=700299(sam) gid=1(other)
$ su ldmpower
Password:
$ pfexec ldmpower
Processor Power Consumption in Watts
DOMAIN 15_SEC_AVG 30_SEC_AVG 60_SEC_AVG
primary 75 84 86
gdom1 47 24 19
gdom2 10 24 26
The following example shows how to use rights profiles to run the ldmpower command.
Assign the rights profile to a user.
primary# usermod -P +"LDoms Power Mgmt Observability" sam
The following commands show how to verify that the user is sam and that the All, Basic
Solaris User, and LDoms Power Mgmt Observability rights profiles are in effect.
$ id
uid=702048(sam) gid=1(other)
$ profiles
All
Basic Solaris User
LDoms Power Mgmt Observability
$ pfexec ldmpower
Chapter 21
Using the Oracle VM Server for SPARC Management Information Base Software
The Oracle VM Server for SPARC Management Information Base (MIB) enables third-party
system management applications to perform remote monitoring of domains, and to start and
stop logical domains (domains) by using the Simple Network Management Protocol (SNMP).
You can run only one instance of the Oracle VM Server for SPARC MIB software on the control
domain. The control domain should run at least the Solaris 10 11/06 OS and at least the Oracle
VM Server for SPARC 2.2 software.
Oracle VM Server for SPARC Management Information Base Overview
Software Components
The Oracle VM Server for SPARC MIB package, SUNWldmib.v, contains the following software
components:
SUN-LDOM-MIB.mib is an SNMP MIB in the form of a text file. This file defines the objects in
the Oracle VM Server for SPARC MIB.
ldomMIB.so is a System Management Agent extension module in the form of a shared
library. This module enables the Oracle Solaris SNMP agent to respond to requests for
information that are specified in the Oracle VM Server for SPARC MIB and to generate
traps.
The following figure shows the interaction between the Oracle VM Server for SPARC MIB, the
Oracle Solaris SNMP agent, the Logical Domains Manager, and a third-party system
management application. The interaction shown in this figure is described in System
Management Agent on page 425 and Logical Domains Manager and the Oracle VM Server for
SPARC MIB on page 426.
FIGURE 21-1 Oracle VM Server for SPARC MIB Interaction With Oracle Solaris SNMP Agent, Logical Domains Manager, and a Third-Party System Management Application
The Oracle VM Server for SPARC MIB is exported by the default Oracle Solaris SNMP agent
on the control domain.
The Oracle Solaris SNMP agent supports the get, set, and trap functions of SNMP versions v1,
v2c, and v3. Most Oracle VM Server for SPARC MIB objects are read-only for monitoring
purposes. However, to start or stop a domain, you must write a value to the ldomAdminState
property of the ldomTable table. See Table 21-1.
The Oracle VM Server for SPARC MIB resides under the following location in the OID namespace:
iso(1).org(3).dod(6).internet(1).private(4).enterprises(1).sun(42).products(2)
The following figure shows the major subtrees under the Oracle VM Server for SPARC MIB.
Installing and Configuring the Oracle VM Server for SPARC MIB Software
Install the Oracle VM Server for SPARC MIB software package on the primary domain: use the pkg install command to install the system/ldoms/mib package. See How to Install the Oracle VM Server for SPARC MIB Software Package on page 428.

Load the Oracle VM Server for SPARC MIB module into the Oracle Solaris SNMP agent to query the Oracle VM Server for SPARC MIB: modify the SNMP configuration file to load the ldomMIB.so module. See How to Load the Oracle VM Server for SPARC MIB Module Into the Oracle Solaris SNMP Agent on page 429.

Remove the Oracle VM Server for SPARC MIB software package from the primary domain: use the pkg remove command to remove the system/ldoms/mib package. See How to Remove the Oracle VM Server for SPARC MIB Software Package on page 429.
How to Install the Oracle VM Server for SPARC MIB Software Package
This procedure describes how to install the Oracle VM Server for SPARC MIB software
package. The Oracle VM Server for SPARC MIB software package is included as part of the
Oracle VM Server for SPARC 3.3 software.
The Oracle VM Server for SPARC MIB package includes the following files:
/opt/SUNWldmib/lib/mibs/SUN-LDOM-MIB.mib
/opt/SUNWldmib/lib/ldomMIB.so
Before You Begin Download and install the Oracle VM Server for SPARC 3.3 software. See Chapter 2, Installing
Software, in Oracle VM Server for SPARC 3.3 Installation Guide.
Install the Oracle VM Server for SPARC MIB software package, system/ldoms/mib.
primary# pkg install -v -g IPS-package-directory/ldoms.repo mib
Next Steps After you install this package, you can configure your system to dynamically load the Oracle
VM Server for SPARC MIB module. See How to Load the Oracle VM Server for SPARC MIB
Module Into the Oracle Solaris SNMP Agent on page 429.
How to Load the Oracle VM Server for SPARC MIB Module Into the
Oracle Solaris SNMP Agent
The Oracle VM Server for SPARC MIB module, ldomMIB.so, must be loaded into the Oracle
Solaris SNMP agent to query the Oracle VM Server for SPARC MIB. The Oracle VM Server for
SPARC MIB module is dynamically loaded so that the module is included within the SNMP
agent without requiring you to recompile and relink the agent binary.
This procedure describes how to configure your system to dynamically load the Oracle VM
Server for SPARC MIB module. Instructions for dynamically loading the module without
restarting the Oracle Solaris SNMP agent are provided in Solaris System Management Agent
Developer's Guide. For more information about the Oracle Solaris SNMP agent, see Solaris
System Management Administration Guide.
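For example, with a Net-SNMP-based agent, the module is typically loaded by adding a dlmod directive to the agent's snmpd.conf file and then restarting the agent (a sketch; the configuration file location and the SMF service name depend on your Oracle Solaris version):
dlmod ldomMIB /opt/SUNWldmib/lib/ldomMIB.so
primary# svcadm restart net-snmp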
How to Remove the Oracle VM Server for SPARC MIB Software Package
This procedure describes how to remove the Oracle VM Server for SPARC MIB software
package and unload the Oracle VM Server for SPARC MIB module.
2 Remove the Oracle VM Server for SPARC MIB software package from the primary domain.
primary# pkg uninstall system/ldoms/mib
Managing Security
This section describes how to create new Simple Network Management Protocol (SNMP)
version 3 (v3) users to provide secure access to the Oracle Solaris SNMP agent. For SNMP
version 1 (v1) and version 2 (v2c), the access control mechanism is the community string, which
defines the relationship between an SNMP server and its clients. This string controls the client
access to the server similar to a password controlling a user's access to a system. See Solaris
System Management Agent Administration Guide.
Note Creating snmpv3 users enables you to use the Oracle Solaris SNMP agent in SNMPv3 with
the Oracle VM Server for SPARC MIB. This type of user does not interact with or conflict with
users that you might have configured by using the rights feature of Oracle Solaris for the Logical
Domains Manager.
You can create additional users by cloning this initial user. Cloning enables subsequent users to
inherit the initial user's authentication and security types. You can change these types later.
When you clone the initial user, you set secret key data for the new user. You must know the
passwords for the initial user and for the subsequent users that you configure. You can only
clone one user at a time from the initial user. See To Create Additional SNMPv3 Users with
Security in Solaris System Management Agent Administration Guide for your version of the
Oracle Solaris OS.
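For example, the Net-SNMP snmpusm utility can clone a new user from the initial user (a sketch; the user names and passwords are placeholders):
# snmpusm -v 3 -u initialuser -l authNoPriv -a MD5 -A initialpass \
localhost create newuser initialuser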
Monitoring Domains
This section describes how to monitor logical domains (domains) by querying the Oracle VM
Server for SPARC MIB. This section also provides descriptions of the various types of MIB
output.
You can also use the -t option to specify the timeout value for the snmpget and snmptable
commands.
To retrieve a single MIB object:
# snmpget -v version -c community-string host MIB-object
To retrieve an array of MIB objects:
Use the snmpwalk or snmptable command.
# snmpwalk -v version -c community-string host MIB-object
# snmptable -v version -c community-string host MIB-object
Note You receive empty SNMP tables if you query the Oracle VM Server for SPARC MIB 2.1
software using the snmptable command with the -v2c or -v3 option. The snmptable command
with the -v1 option works as expected.
To work around this issue, use the -CB option to use only GETNEXT, not GETBULK, requests to
retrieve data. See Querying the Oracle VM Server for SPARC MIB on page 431.
EXAMPLE 21-1 Retrieving a Single Oracle VM Server for SPARC MIB Object (snmpget)
The following snmpget command queries the value of the ldomVersionMajor object. The
command specifies snmpv1 (-v1) and a community string (-c public) for the localhost host.
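Based on that description, the command and its output would look similar to the following sketch; the .0 instance suffix applies because ldomVersionMajor is a scalar object, and the value shown is illustrative:
# snmpget -v1 -c public localhost SUN-LDOM-MIB::ldomVersionMajor.0
SUN-LDOM-MIB::ldomVersionMajor.0 = INTEGER: 1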
SUN-LDOM-MIB::ldomWholeCore.2 = INTEGER: 0
SUN-LDOM-MIB::ldomCpuArch.1 = STRING: native
SUN-LDOM-MIB::ldomCpuArch.2 = STRING: native
SUN-LDOM-MIB::ldomShutdownGroup.1 = INTEGER: 0
SUN-LDOM-MIB::ldomShutdownGroup.2 = INTEGER: 15
SUN-LDOM-MIB::ldomPerfCounters.1 = STRING: htstrand
SUN-LDOM-MIB::ldomPerfCounters.2 = STRING: global,htstrand
SUN-LDOM-MIB::ldomNumCMI.1 = INTEGER: 0
SUN-LDOM-MIB::ldomNumCMI.2 = INTEGER: 0
The following snmpwalk commands use snmpv2c and snmpv3 to retrieve the contents of
ldomTable:
# snmpwalk -v2c -c public localhost SUN-LDOM-MIB::ldomTable
# snmpwalk -v 3 -u test -l authNoPriv -a MD5 -A testpassword localhost \
SUN-LDOM-MIB::ldomTable
EXAMPLE 21-3 Retrieving Object Values From ldomTable in Tabular Form (snmptable)
The following examples show how to use the snmptable command to retrieve object values
from ldomTable in tabular form.
The following snmptable -v1 command shows the contents of ldomTable in tabular form.
# snmptable -v1 -c public localhost SUN-LDOM-MIB::ldomTable
The following snmptable command shows the contents of ldomTable in tabular form by
using snmpv2c.
Note that for the v2c or v3 snmptable command, use the -CB option to specify only
GETNEXT, not GETBULK, requests to retrieve data.
# snmptable -v2c -CB -c public localhost SUN-LDOM-MIB::ldomTable
ldomIndex Integer Not accessible Integer that is used as an index of this table
ldomFailurePolicy Display string Read-only Master domain's failure policy, which can
be one of ignore, panic, reset, or stop
ldomExtMapinSpace Display string Read-only Extended mapin space for a domain. The
extended mapin space refers to the
additional LDC shared memory space.
This memory space is required to support
a large number of virtual I/O devices that
use direct-mapped shared memory. This
space is also used by virtual network
devices to improve performance and
scalability. The default value is off.
ldomCpuArch Display string Read-only CPU architecture for a domain. The CPU
architecture specifies whether the domain
can be migrated to another sun4v CPU
architecture. Valid values are:
native, which means that the domain
is permitted to be migrated only to
platforms of the same sun4v CPU
architecture (default value)
generic, which means that the
domain is permitted to be migrated to
all compatible sun4v CPU
architectures
ldomPolicyIndex Integer Not accessible Integer that is used to index the DRM
policy in this table
The following scalar MIB variables are used to represent resource pools and their properties.
ldomVcpuPhysBindUsage Integer Read-only Indicates how much (in MHz) of the total
capacity of the thread is used by this
virtual CPU. For example, assume a thread
can run at a maximum of one GHz. If only
half of that capacity is allocated to this
virtual CPU (50% of the thread), the
property value is 500.
The following example shows that the requested memory size can be split between two memory
blocks instead of being assigned to a single large memory block. Assume that a domain requests
512 Mbytes of real memory. The memory can be assigned as two 256-Mbyte blocks of physical
memory on the host system by using the {physical-address, real-address, size} format.
A guest domain can have up to 64 physical memory segments. Therefore, an auxiliary table is
used to hold each memory segment instead of a display string, which has a 255-character limit.
ldomVmemIndex Integer Not accessible Integer that is used to index the virtual
memory in this table
One or more disks, disk slices, and file systems can be bound to a single disk service. Each disk
has a unique name and volume name. The volume name is used when the disk is bound to the
service. The Logical Domains Manager creates virtual disk clients (vdisk) from the virtual disk
service and its logical volumes.
ldomVdsIndex Integer Not accessible Integer that is used to index the virtual
disk service in this table
ldomVdsServiceName Display string Read-only Service name for the virtual disk service.
The property value is the service-name
specified by the ldm add-vds command.
ldomVdsdevIndex Integer Not accessible Integer that is used to index the virtual
disk service device in this table
ldomVdsdevVolumeName Display string Read-only Volume name for the virtual disk service
device. This property specifies a unique
name for the device that is being added to
the virtual disk service. This name is
exported by the virtual disk service to the
clients for the purpose of adding this
device. The property value is the
volume-name specified by the ldm
add-vdsdev command.
ldomVdsdevDevPath Display string Read-only Path name of the physical disk device. The
property value is the backend specified by
the ldm add-vdsdev command.
ldomVdsdevOptions Display string Read-only One or more of the options for the disk
device, which are ro, slice, or excl
ldomVdsdevMPGroup Display string Read-only Multipath group name for the disk device
ldomVdiskIndex Integer Not accessible Integer that is used to index the virtual
disk in this table
ldomVdiskName Display string Read-only Name of the virtual disk. The property
value is the disk-name specified by the ldm
add-vdisk command.
The following figure shows how indexes are used to define relationships among the virtual disk
tables and the domain table. The indexes are used as follows:
ldomIndex in ldomVdsTable and ldomVdiskTable points to ldomTable.
VdsIndex in ldomVdsdevTable points to ldomVdsTable.
VdsDevIndex in ldomVdiskTable points to ldomVdsdevTable.
FIGURE 21-3 Relationship Among Virtual Disk Tables and the Domain Table
After you create a virtual switch on a service domain, you can bind a physical network device to
the virtual switch. After that, you can create a virtual network device for a domain that uses the
virtual switch service for communication. A domain that uses the virtual switch service can
communicate with other domains that are connected to the same virtual switch, and with
external hosts if a physical device is bound to the virtual switch.
ldomVswIndex Integer Not accessible Integer that is used to index the virtual
switch device in this table
ldomVswMacAddress Display string Read-only MAC address used by the virtual switch
ldomVswPhysDevPath Display string Read-only Physical device path for the virtual
network switch. The property value is null
when no physical device is bound to the
virtual switch.
ldomVswDefaultVlanID Display string Read-only Default VLAN ID for the virtual switch
ldomVswPortVlanID Display string Read-only Port VLAN ID for the virtual switch
ldomVnetIndex Integer Not accessible Integer that is used to index the virtual
network device in this table
ldomVnetDevMacAddress Display string Read-only MAC address for this network device. The
property value is the mac-addr property
specified by the ldm add-vnet command.
ldomVnetPortVlanID Display string Read-only Port VLAN ID for the virtual network
device
ldomVnetVlanID Display string Read-only VLAN ID for the virtual network device
ldomVccIndex Integer Not accessible Integer that is used to index the virtual
console concentrator in this table
ldomVccPortRangeLow Integer Read-only Low number for the range of TCP ports to
be used by the virtual console
concentrator. The property value is the x
part of the port-range specified by the
ldm add-vcc command.
ldomVccPortRangeHigh Integer Read-only High number for the range of TCP ports to
be used by the virtual console
concentrator. The property value is the y
part of the port-range specified by the
ldm add-vcc command.
ldomVconsIndex Integer Not accessible Integer that is used to index a virtual group
in this table
ldomVconsGroupName Display string Read-only Group name to which to attach the virtual
console. The property value is the group
specified by the ldm set-vcons command.
ldomVconsLog Display string Read-only Console logging status. The property value
is the string on or off as specified by the
ldm set-vcons command.
When a group contains more than one
domain, this property shows the console
logging status of the domain that has most
recently been modified by the ldm
set-vcons command.
The following figure shows how indexes are used to define relationships among the virtual
console tables and the domain table. The indexes are used as follows:
ldomIndex in ldomVccTable and ldomVconsVccRelTable points to ldomTable.
VccIndex in ldomVconsVccRelTable points to ldomVccTable.
VconsIndex in ldomVconsVccRelTable points to ldomVconsTable.
FIGURE 21-4 Relationship Among Virtual Console Tables and the Domain Table
ldomIOBusIndex Integer Not accessible Integer that is used to index the I/O bus in
this table
ldomCMIIndex Integer Not accessible Integer that is used to index the CMI
resource in this table
ldomCMICpuSet Display string Read-only List of CPUs that are mapped to the CMI
resource
ldomCMICores Display string Read-only List of cores that are mapped to the CMI
resource
ldomCoreIndex Integer Not accessible Integer that is used to index a core in this
table
ldomCoreCpuSet Display string Read-only List of CPUs that are mapped to the core
cpuset
The values for ldomVersionMajor and ldomVersionMinor are equivalent to the version shown
by the ldm list -p command. For example:
$ ldm ls -p
VERSION 1.6
...
Using SNMP Traps
The snmptrapd daemon does not automatically accept all incoming traps. Instead, the daemon
must be configured with authorized SNMP v1 and v2c community strings, with SNMPv3 users,
or both. Unauthorized traps or notifications are dropped. See the snmptrapd.conf(4) or
snmptrapd.conf(5) man page.
For example, the following directives in the SNMP agent configuration file cause the agent to
send SNMP v1 traps to the local trap daemon by using the public trap community:
trapcommunity public
trapsink localhost
3 To receive SNMP trap messages, start the SNMP trap daemon utility, snmptrapd.
The agent configuration might also include directives such as the following to send both v1
and v2c traps to the local trap daemon:
trapcommunity public
trapsink localhost
trap2sink localhost
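To start the trap daemon in the foreground and log incoming traps to standard output, a Net-SNMP invocation might look like the following sketch:
# snmptrapd -f -Lo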
ldomVdsServiceName Display string Name of the virtual disk service that has
changed
ldomVdiskName Display string Name of the virtual disk device that has
changed
ldomVswServiceName Display string Name of the virtual switch service that has
changed
ldomVnetDevName Display string Name of the virtual network device for the
domain
ldomVconsGroupName Display string Name of the virtual console group that has
changed
Starting and Stopping Domains
4 Verify that the domain-name domain is active by using one of the following commands:
You can also use the snmpget command to retrieve the LdomMibTest_1 domain's state instead of
using the ldm list command.
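For example, the state might be retrieved as in the following sketch; it assumes that the LdomMibTest_1 domain occupies row 2 of ldomTable and that the domain state is exposed through an ldomOperState column, both of which you should verify against the MIB definition:
# snmpget -v1 -c public localhost SUN-LDOM-MIB::ldomOperState.2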
Note that if the domain is inactive when you use snmpset to start the domain, the domain is first
bound and then started.
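For example, a start request might look like the following sketch; the row index, the private community string, and the integer value that requests a start are assumptions to verify against the ldomTable definition and your agent configuration:
# snmpset -v1 -c private localhost SUN-LDOM-MIB::ldomAdminState.2 i 1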
3 Verify that the domain-name domain is bound by using one of the following commands:
Chapter 22
This chapter provides information about discovering the Logical Domains Manager running
on systems on a subnet.
Discovering Systems Running the Logical Domains Manager on page 465
Multicast Communication
This discovery mechanism uses the same multicast network that is used by the ldmd daemon to
detect collisions when automatically assigning MAC addresses. To configure the multicast
socket, you must supply the multicast address and port that the discovery protocol defines.
By default, only multicast packets can be sent on the subnet to which the machine is attached.
You can change the behavior by setting the ldmd/hops SMF property for the ldmd daemon.
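For example, the property might be set by using svccfg, as in the following sketch; it assumes the ldmd service instance name and that the property already exists with the correct type:
primary# svccfg -s ldmd setprop ldmd/hops=4
primary# svcadm refresh ldmd
primary# svcadm restart ldmd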
Message Format
The discovery messages must be clearly marked so as not to be confused with other messages.
The following multicast message format ensures that discovery messages can be distinguished
by the discovery listening process:
#define LDMD_VERSION_LEN 32
typedef struct {
uint64_t mac_addr;
char source_ip[INET_ADDRSTRLEN];
} mac_lookup_t;
typedef struct {
char ldmd_version[LDMD_VERSION_LEN];
char hostname[MAXHOSTNAMELEN];
struct in_addr ip_address;
int port_no;
} ldmd_discovery_t;
3 Listen on the multicast socket for responses from Logical Domains Managers.
The responses must be a multicast_msg_t message with the following information:
Valid value for version_no
Valid value for magic_no
msg_type set to LDMD_DISC_RESP
Payload consisting of a ldmd_discovery_t structure, which contains the following
information:
ldmd_version Version of the Logical Domains Manager running on the system
hostname Host name of the system
ip_address IP address of the system
port_no Port number being used by the Logical Domains Manager for
communications, which should be XMPP port 6482
When listening for a response from Logical Domains Managers, ensure that any auto-allocation
MAC collision-detection packets are discarded.
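A minimal listener might join the discovery multicast group and block for a response, as in the following C sketch; the multicast address and port are placeholders because the actual values are defined by the discovery protocol headers on your system:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define DISC_ADDR "239.0.0.1" /* placeholder: use the protocol's multicast address */
#define DISC_PORT 12345       /* placeholder: use the protocol's multicast port */

int
main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in sin;
        struct ip_mreq mreq;
        char buf[2048];
        ssize_t n;

        /* Bind to the discovery port on all interfaces. */
        memset(&sin, 0, sizeof (sin));
        sin.sin_family = AF_INET;
        sin.sin_addr.s_addr = htonl(INADDR_ANY);
        sin.sin_port = htons(DISC_PORT);
        (void) bind(fd, (struct sockaddr *)&sin, sizeof (sin));

        /* Join the discovery multicast group. */
        mreq.imr_multiaddr.s_addr = inet_addr(DISC_ADDR);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        (void) setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof (mreq));

        /* Block until a response arrives; a real client would then check
         * version_no, magic_no, and msg_type before using the payload, and
         * discard MAC collision-detection packets. */
        n = recv(fd, buf, sizeof (buf), 0);
        if (n > 0)
                (void) printf("received %zd bytes\n", n);
        (void) close(fd);
        return (0);
}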
This chapter explains the Extensible Markup Language (XML) communication mechanism
through which external user programs can interface with Oracle VM Server for SPARC
software. These basic topics are covered:
XML Transport on page 469
XML Protocol on page 470
Event Messages on page 475
Logical Domains Manager Actions on page 479
Logical Domains Manager Resources and Properties on page 481
XML Schemas on page 497
XML Transport
External programs can use the Extensible Messaging and Presence Protocol (XMPP RFC
3920) to communicate with the Logical Domains Manager. XMPP is supported for both local
and remote connections and is on by default. To disable a remote connection, set the
ldmd/xmpp_enabled SMF property to false and restart the Logical Domains Manager.
Note Disabling the XMPP server also prevents domain migration and the dynamic
reconfiguration of memory.
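For example, remote XMPP connections might be disabled as in the following sketch; it assumes the ldmd service instance name:
primary# svccfg -s ldmd setprop ldmd/xmpp_enabled=false
primary# svcadm refresh ldmd
primary# svcadm restart ldmd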
XMPP Server
The Logical Domains Manager implements an XMPP server which can communicate with
numerous available XMPP client applications and libraries. The Logical Domains Manager uses
the following security mechanisms:
Transport Layer Security (TLS) to secure the communication channel between the client
and itself.
Simple Authentication and Security Layer (SASL) for authentication. PLAIN is the only SASL
mechanism supported. You must send a user name and password to the server so that it can
authorize you before allowing monitoring or management operations.
Local Connections
The Logical Domains Manager detects whether user clients are running on the same domain as
itself and, if so, does a minimal XMPP handshake with that client. Specifically, the SASL
authentication step after the setup of a secure channel through TLS is skipped. Authentication
and authorization are done based on the credentials of the process implementing the client
interface.
Clients can choose to implement a full XMPP client or to simply run a streaming XML parser,
such as the libxml2 Simple API for XML (SAX) parser. Either way, the client has to handle an
XMPP handshake to the point of TLS negotiation. Refer to the XMPP specification for the
sequence needed.
XML Protocol
After communication initialization is complete, Oracle VM Server for SPARC-defined XML
messages are sent. The two general types of XML messages are:
Request and response messages, which use the <LDM_interface> tag. This type of XML
message is used for communicating commands and getting results back from the Logical
Domains Manager, analogous to executing commands using the command-line interface
(CLI). This tag is also used for event registration and unregistration.
Event messages, which use the <LDM_event> tag. This type of XML message is used to
asynchronously report events posted by the Logical Domains Manager.
The two formats share many common XML structures, but they are separated in this discussion
for a better understanding of the differences between them.
Request Messages
An incoming XML request to the Logical Domains Manager at its most basic level includes a
description of a single command operating on a single object. More complicated requests can
handle multiple commands and multiple objects per command. The following example shows
the structure of a basic XML command.
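A minimal request has the following shape (a sketch; the list-domain action and the ldg1 domain name are illustrative):
<LDM_interface version="1.3">
  <cmd>
    <action>list-domain</action>
    <data version="3.0">
      <Envelope>
        <References/>
        <Content xsi:type="ovf:VirtualSystem_Type" id="ldg1"/>
      </Envelope>
    </data>
  </cmd>
</LDM_interface>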
<LDM_interface> Tag
All commands sent to the Logical Domains Manager must start with the <LDM_interface> tag.
Any document sent into the Logical Domains Manager must have only one <LDM_interface>
tag contained within it. The <LDM_interface> tag must include a version attribute as shown in
Example 23-1.
The <cmd> tag can also have an <options> tag, which is used for options and flags that are
associated with some commands. The following commands use options:
The ldm remove-domain command can use the -a option.
The ldm bind-domain command can use the -f option.
The ldm add-vdsdev command can use the -f option.
The ldm cancel-operation command can use the migration or reconf option.
The ldm add-spconfig command can use the -r autosave-name option.
The ldm remove-spconfig command can use the -r option.
The ldm list-spconfig command can use the -r [autosave-name] option.
The ldm stop-domain command can use the following tags to set the command arguments:
<force> represents the -f option.
<halt> represents the -h option.
<message> represents the -m option.
<quick> represents the -q option.
<reboot> represents the -r option.
<timeout> represents the -t option.
Note that the tags must not have any content value, except for the <timeout> and <message>
tags, which represent the -t and -m options and must have a non-null value, for example,
<timeout>10</timeout> or <message>Shutting down now</message>.
The following XML example fragment shows how to pass a reboot request with a reboot
message to the ldm stop-domain command:
<action>stop-domain</action>
<arguments>
<reboot/>
<message>my reboot message</message>
</arguments>
For Oracle VM Server for SPARC, the <Content> section is used to identify and describe a
particular domain. The domain name in the id= attribute of the <Content> node identifies the
domain. Within the <Content> section are one or more <Section> sections describing
resources of the domain as needed by the particular command.
If you need to identify only a domain name, then you do not need to use any <Section> tags.
Conversely, if no domain identifier is needed for the command, then you need to provide a
<Section> section describing the resources needed for the command, placed outside a
<Content> section but still within the <Envelope> section.
A <data> section does not need to contain an <Envelope> tag in cases where the object
information can be inferred. This situation mainly applies to requests for monitoring all objects
applicable to an action, and event registration and unregistration requests.
Two additional OVF types enable the use of the OVF specification's schema to properly define
all types of objects:
<gprop:GenericProperty> tag
<Binding> tag
The <gprop:GenericProperty> tag handles any object's property for which the OVF
specification does not have a definition. The property name is defined in the key= attribute of
the node and the value of the property is the contents of the node. The <Binding> tag is used in
the ldm list-bindings command output to define resources that are bound to other resources.
Response Messages
An outgoing XML response closely matches the structure of the incoming request in terms of
the commands and objects included, with the addition of a <Response> section for each object
and command specified, as well as an overall <Response> section for the request. The
<Response> sections provide status and message information. The following example shows the
structure of a response to a basic XML request.
EXAMPLE 23-2 Format of a Response to a Single Command Operating on a Single Object (Continued)
Property Value
</gprop:GenericProperty>
</Item>
</Section>
<!-- Note: More <Section>
</Content>
</Envelope>
<response>
<status>success or failure</status>
<resp_msg>Reason for failure</resp_msg>
</response>
</data>
<response>
<status>success or failure</status>
<resp_msg>Reason for failure</resp_msg>
</response>
</cmd>
<response>
<status>success or failure</status>
<resp_msg>Reason for failure</resp_msg>
</response>
</LDM_interface>
Overall Response
This <response> section, which is the direct child of the <LDM_interface> section, indicates
overall success or failure of the entire request. Unless the incoming XML document is
malformed, the <response> section includes only a <status> tag. If this response status
indicates success, all commands on all objects have succeeded. If this response status is a failure
and there is no <resp_msg> tag, then one of the commands included in the original request
failed. The <resp_msg> tag is used only to describe some problem with the XML document
itself.
Command Response
The <response> section under the <cmd> section alerts the user to success or failure of that
particular command. The <status> tag shows whether that command succeeds or fails. As with
the overall response, the <response> section includes a <resp_msg> tag only if the contents of
the <cmd> section of the request are malformed. Otherwise, a failed status means that one of the
objects against which the command ran caused a failure.
Object Response
Finally, each <data> section in a <cmd> section also has a <response> section. This section
shows whether the command being run on this particular object passes or fails. If the status of
the response is SUCCESS, there is no <resp_msg> tag in the <response> section. If the status is
FAILURE, there are one or more <resp_msg> tags in the <response> field depending on the
errors encountered when running the command against that object. Object errors can result
from problems found when running the command, or a malformed or unknown object.
In addition to the <response> section, the <data> section can contain other information. This
information is in the same format as an incoming <data> field, describing the object that caused
a failure. See The <data> Tag on page 472. This additional information is especially useful in
the following cases:
When a command fails against a particular <data> section but passes for any additional
<data> sections
When an empty <data> section is passed into a command and fails for some domains but
passes for others
Event Messages
In lieu of polling, you can subscribe to receive event notifications of certain state changes that
occur. There are several types of events to which you can subscribe individually or collectively.
See Event Types on page 476 for complete details.
The Logical Domains Manager responds with an <LDM_interface> response message stating
whether the registration or unregistration was successful.
</response>
</data>
<response>
<status>success</status>
</response>
</cmd>
<response>
<status>success</status>
</response>
</LDM_interface>
The action string for each type of event is listed in the events subsection.
<LDM_event> Messages
Event messages have the same format as an incoming <LDM_interface> message with the
exception that the start tag for the message is <LDM_event>. The <action> tag of the message is
the action that was performed to trigger the event. The <data> section of the message describes
the object associated with the event; the details depend on the type of event that occurred.
Event Types
You can subscribe to the following event types:
Domain events
Hardware events
Progress events
Resource events
Domain Events
Domain events describe which actions can be performed directly to a domain. The following
domain events can be specified in the <action> tag in the <LDM_event> message:
add-domain
bind-domain
domain-reset
migrate-domain
panic-domain
remove-domain
start-domain
stop-domain
unbind-domain
These events always contain only a <Content> tag in the OVF <data> section that describes the
domain to which the event happened. To register for the domain events, send an
<LDM_interface> message with the <action> tag set to reg-domain-events. To unregister for
these events, send an <LDM_interface> message with the <action> tag set to
unreg-domain-events.
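For example, a registration request might look like the following sketch, which uses an empty <data> section because no object information is needed for registration:
<LDM_interface version="1.3">
  <cmd>
    <action>reg-domain-events</action>
    <data version="3.0"/>
  </cmd>
</LDM_interface>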
Hardware Events
Hardware events pertain to changing the physical system hardware. In the case of Oracle VM
Server for SPARC software, the only hardware changes are those to the service processor (SP)
when you add, remove, or set an SP configuration. Currently, the only three events of this type
are:
add-spconfig
set-spconfig
remove-spconfig
The hardware events always contain only a <Section> tag in the OVF <data> section that
describes the SP configuration to which the event applies. To register for these events,
send an <LDM_interface> message with the <action> tag set to reg-hardware-events. To
unregister for these events, send an <LDM_interface> message with the <action> tag set to
unreg-hardware-events.
Progress Events
Progress events are issued for long-running commands, such as a domain migration. These
events report the amount of progress that has been made during the life of the command. At
this time, only the migration-process event is reported.
To register for progress events, send an <LDM_interface> message with the <action> tag set
to reg-hardware-events. To unregister for these events, send an <LDM_interface> message
with the <action> tag set to unreg-hardware-events.
The <data> section of a progress event consists of a <content> section that describes the
affected domain. This <content> section uses an ldom_info <Section> tag to update progress.
The following generic properties are shown in the ldom_info section:
progress Percentage of the progress made by the command
status Command status, which can be one of ongoing, failed, or done
source Machine that is reporting the progress
Resource Events
Resource events occur when resources are added, removed, or changed in any domain. The
<data> section for some of these events contains the <Content> tag with a <Section> tag
providing a service name in the OVF <data> section.
The following events can be specified in the <action> tag in the <LDM_event> message:
add-vdiskserverdevice
remove-vdiskserverdevice
set-vdiskserverdevice
remove-vdiskserver
set-vconscon
remove-vconscon
set-vswitch
remove-vswitch
remove-vdpcs
The following resource events always contain only the <Content> tag in the OVF <data>
section that describes the domain to which the event happened:
add-vcpu
add-crypto
add-memory
add-io
add-variable
add-vconscon
add-vdisk
add-vdiskserver
add-vnet
add-vswitch
add-vdpcs
add-vdpcc
set-vcpu
set-crypto
set-memory
set-variable
set-vnet
set-vconsole
set-vdisk
remove-vcpu
remove-crypto
remove-memory
remove-io
remove-variable
remove-vdisk
remove-vnet
remove-vdpcc
To register for the resource events, send an <LDM_interface> message with the <action> tag
set to reg-resource-events. Unregistering for these events requires an <LDM_interface>
message with the <action> tag set to unreg-resource-events.
All Events
You can also register to listen for all event types without having to register for each one
individually. To register for all event types simultaneously, send an <LDM_interface>
message with the <action> tag set to reg-all-events. Unregistering for these events requires
an <LDM_interface> message with the <action> tag set to unreg-all-events.
Note The XML interface does not support the verb or command aliases supported by the
Logical Domains Manager CLI.
Logical Domains Manager Actions
add-variable
add-vconscon
add-vcpu
add-vdisk
add-vdiskserver
add-vdiskserverdevice
add-vdpcc
add-vdpcs
add-vnet
add-vswitch
bind-domain
cancel-operation
list-bindings
list-constraints
list-devices
list-domain
list-services
list-spconfig
list-variable
migrate-domain
reg-all-events
reg-domain-events
reg-hardware-events
reg-resource-events
remove-domain
remove-io
remove-mau
remove-memory
remove-reconf
remove-spconfig
remove-variable
remove-vconscon
remove-vcpu
remove-vdisk
remove-vdiskserver
remove-vdiskserverdevice
remove-vdpcc
remove-vdpcs
remove-vnet
remove-vswitch
set-domain
set-mau
set-memory
set-spconfig
set-variable
set-vconscon
set-vconsole
set-vcpu
set-vnet
set-vswitch
start-domain
stop-domain
unbind-domain
unreg-all-events
unreg-domain-events
unreg-hardware-events
unreg-resource-events
Logical Domains Manager Resources and Properties
The following example shows an ldom_info resource and its properties:
<Envelope>
<References/>
<Content xsi:type="ovf:VirtualSystem_Type" id="primary">
<Section xsi:type="ovf:ResourceAllocationSection_type">
<Item>
<rasd:OtherResourceType>ldom_info</rasd:OtherResourceType>
<uuid>c2c3d93b-a3f9-60f6-a45e-f35d55c05fb6</uuid>
<rasd:Address>00:03:ba:d8:ba:f6</rasd:Address>
<gprop:GenericProperty key="hostid">83d8baf6</gprop:GenericProperty>
<gprop:GenericProperty key="master">plum</gprop:GenericProperty>
<gprop:GenericProperty key="failure-policy">reset</gprop:GenericProperty>
<gprop:GenericProperty key="extended-mapin-space">on</gprop:GenericProperty>
<gprop:GenericProperty key="progress">45%</gprop:GenericProperty>
<gprop:GenericProperty key="status">ongoing</gprop:GenericProperty>
<gprop:GenericProperty key="source">system1</gprop:GenericProperty>
<gprop:GenericProperty key="rc-add-policy"></gprop:GenericProperty>
<gprop:GenericProperty key="perf-counters">global</gprop:GenericProperty>
</Item>
</Section>
</Content>
</Envelope>
The ldom_info resource is always contained within a <Content> section. The following
properties within the ldom_info resource are optional properties:
<uuid> tag, which specifies the UUID of the domain.
<rasd:Address> tag, which specifies the MAC address to be assigned to a domain.
<gprop:GenericProperty key="extended-mapin-space"> tag, which specifies whether
extended mapin space is enabled (on) or disabled (off) for the domain. The default value is
off.
<gprop:GenericProperty key="failure-policy"> tag, which specifies how slave
domains should behave should the master domain fail. The default value is ignore.
Following are the valid property values:
ignore ignores failures of the master domain (slave domains are unaffected).
panic panics any slave domains when the master domain fails.
reset resets any slave domains when the master domain fails.
stop stops any slave domains when the master domain fails.
<gprop:GenericProperty key="hostid"> tag, which specifies the host ID to be assigned to
the domain.
<gprop:GenericProperty key="master"> tag, which specifies up to four
comma-separated master domain names.
<gprop:GenericProperty key="progress"> tag, which specifies the percentage of
progress made by the command.
<gprop:GenericProperty key="source"> tag, which specifies the machine reporting on
the progress of the command.
<gprop:GenericProperty key="status"> tag, which specifies the status of the command
(done, failed, or ongoing).
<gprop:GenericProperty key="rc-add-policy"> tag, which specifies whether to enable
or disable the direct I/O and SR-IOV I/O virtualization operations on any root complex that
might be added to the specified domain. Valid values are iov and no value
(rc-add-policy=).
<gprop:GenericProperty key="perf-counters"> tag, which specifies the performance
register sets to access (global, htstrand, strand).
If the platform does not have the performance access capability, the perf-counters
property value is ignored.
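As an illustration, the following hypothetical set-domain request combines several of these optional properties for an assumed domain named ldg1; the property values are placeholders, and namespace declarations are omitted as in the other fragments in this chapter:
<LDM_interface version="1.3">
  <cmd>
    <action>set-domain</action>
    <data version="3.0">
      <Envelope>
        <References/>
        <Content xsi:type="ovf:VirtualSystem_Type" id="ldg1">
          <Section xsi:type="ovf:ResourceAllocationSection_type">
            <Item>
              <rasd:OtherResourceType>ldom_info</rasd:OtherResourceType>
              <gprop:GenericProperty key="master">plum</gprop:GenericProperty>
              <gprop:GenericProperty key="failure-policy">reset</gprop:GenericProperty>
            </Item>
          </Section>
        </Content>
      </Envelope>
    </data>
  </cmd>
</LDM_interface>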
EXAMPLE 237 cpu XML Section Output from the ldm list-bindings Command
The following example shows the XML output for the <cpu> section by using the ldm
list-bindings command.
<?xml version="1.0"?>
<LDM_interface
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ovf="./schemas/envelope"
xmlns:rasd="./schemas/CIM_ResourceAllocationSettingData"
xmlns:vssd="./schemas/CIM_VirtualSystemSettingData"
xmlns:gprop="./schemas/GenericProperty"
xmlns:bind="./schemas/Binding"
version="1.3"
xsi:noNamespaceSchemaLocation="./schemas/combined-v3.xsd">
<cmd>
<action>list-bindings</action>
<data version="3.0">
<Envelope>
<References/>
<Content xsi:type="ovf:VirtualSystem_Type" ovf:id="primary">
<Section xsi:type="ovf:ResourceAllocationSection_Type">
<Item>
<rasd:OtherResourceType>ldom_info</rasd:OtherResourceType>
<uuid>1e04cdbd-472a-e8b9-ba4c-d3eee86e7725</uuid>
<rasd:Address>00:21:28:f5:11:6a</rasd:Address>
<gprop:GenericProperty key="hostid">0x8486632a</gprop:GenericProperty>
<failure-policy>ignore</failure-policy>
<wcore>0</wcore>
<extended-mapin-space>0</extended-mapin-space>
<cpu-arch>native</cpu-arch>
<rc-add-policy/>
<gprop:GenericProperty key="state">active</gprop:GenericProperty>
</Item>
</Section>
<Section xsi:type="ovf:VirtualHardwareSection_Type">
<Item>
<rasd:OtherResourceType>cpu</rasd:OtherResourceType>
<rasd:AllocationUnits>8</rasd:AllocationUnits>
<bind:Binding>
<Item>
<rasd:OtherResourceType>cpu</rasd:OtherResourceType>
<gprop:GenericProperty key="vid">0</gprop:GenericProperty>
<gprop:GenericProperty key="pid">0</gprop:GenericProperty>
<gprop:GenericProperty key="cid">0</gprop:GenericProperty>
<gprop:GenericProperty key="strand_percent">100</gprop:GenericProperty>
<gprop:GenericProperty key="util_percent">1.1%</gprop:GenericProperty>
<gprop:GenericProperty key="normalized_utilization">0.1%</gprop:GenericProperty>
</Item>
</bind:Binding>
</Item>
</Section>
</Content>
</Envelope>
</data>
</cmd>
</LDM_interface>
EXAMPLE 238 cpu XML Section Output from the ldm list-domain Command
The following example shows the XML output for the <cpu> section by using the ldm
list-domain command.
<?xml version="1.0"?>
<LDM_interface
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:ovf="./schemas/envelope"
xmlns:rasd="./schemas/CIM_ResourceAllocationSettingData"
xmlns:vssd="./schemas/CIM_VirtualSystemSettingData"
xmlns:gprop="./schemas/GenericProperty"
xmlns:bind="./schemas/Binding"
version="1.3"
xsi:noNamespaceSchemaLocation="./schemas/combined-v3.xsd">
<cmd>
<action>list-domain</action>
<data version="3.0">
<Envelope>
<References/>
<Content xsi:type="ovf:VirtualSystem_Type" ovf:id="primary">
<Section xsi:type="ovf:ResourceAllocationSection_Type">
<Item>
<rasd:OtherResourceType>ldom_info</rasd:OtherResourceType>
<gprop:GenericProperty key="state">active</gprop:GenericProperty>
<gprop:GenericProperty key="flags">-n-cv-</gprop:GenericProperty>
<gprop:GenericProperty key="utilization">0.7%</gprop:GenericProperty>
<gprop:GenericProperty key="uptime">3h</gprop:GenericProperty>
<gprop:GenericProperty key="normalized_utilization">0.1%</gprop:GenericProperty>
</Item>
</Section>
</Content>
</Envelope>
</data>
</cmd>
</LDM_interface>
Note The mau resource is any supported cryptographic unit on a supported server. Currently,
the two cryptographic units supported are the Modular Arithmetic Unit (MAU) and the
Control Word Queue (CWQ).
The following fragment shows a vds resource within a <Content> section:
<Section xsi:type="ovf:VirtualHardwareSection_Type">
<Item>
<rasd:OtherResourceType>vds</rasd:OtherResourceType>
<gprop:GenericProperty
key="service_name">vdstmp</gprop:GenericProperty>
</Item>
</Section>
</Content>
</Envelope>
Optionally, the disk resource can also have the timeout property, which is the timeout value in
seconds for establishing a connection between a virtual disk client (vdc) and a virtual disk
server (vds). If there are multiple virtual disk (vdisk) paths, then the vdc can try to connect to a
different vds. The timeout ensures that a connection to any vds is established within the
specified amount of time.
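The following sketch shows where such a timeout might appear in a disk resource. The vdisk_name and service_name keys shown here are assumptions made for illustration; you can confirm the exact keys on your system by examining ldm list-constraints -x output for a domain that has a virtual disk:
<Item>
  <rasd:OtherResourceType>disk</rasd:OtherResourceType>
  <gprop:GenericProperty key="vdisk_name">vdisk1</gprop:GenericProperty>
  <gprop:GenericProperty key="service_name">primary-vds0</gprop:GenericProperty>
  <gprop:GenericProperty key="timeout">60</gprop:GenericProperty>
</Item>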
Optionally, the vsw resource can also have the following properties:
<rasd:Address> Assigns a MAC address to the virtual switch
default-vlan-id Specifies the default virtual local area network (VLAN) of which a
virtual network device or virtual switch is to be a member, in tagged mode. The first
VLAN ID (vid1) is reserved for the default-vlan-id.
dev_path Path of the network device to be associated with this virtual switch
id Specifies the ID of a new virtual switch device. By default, ID values are generated
automatically, so set this property if you need to match an existing device name in the OS.
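A vsw resource that sets these optional properties might look like the following sketch; the MAC address, service name, device path, and ID are illustrative placeholders:
<Item>
  <rasd:OtherResourceType>vsw</rasd:OtherResourceType>
  <rasd:Address>00:14:4f:fb:44:4d</rasd:Address>
  <gprop:GenericProperty key="service_name">primary-vsw0</gprop:GenericProperty>
  <gprop:GenericProperty key="default-vlan-id">1</gprop:GenericProperty>
  <gprop:GenericProperty key="dev_path">net0</gprop:GenericProperty>
  <gprop:GenericProperty key="id">0</gprop:GenericProperty>
</Item>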
Optionally, the network resource can also have the following properties:
<rasd:Address> Assigns a MAC address to the virtual network device
pvid Port virtual local area network (VLAN) identifier (ID), which indicates the VLAN of
which the virtual network device is to be a member, in untagged mode.
vid Virtual local area network (VLAN) identifier (ID), which indicates the VLAN of which a
virtual network device and virtual switch are to be members, in tagged mode.
mode Set to hybrid to enable hybrid I/O for that virtual network.
Note The NIU Hybrid I/O feature is deprecated in favor of SR-IOV. Oracle VM Server for
SPARC 3.3 is the last software release to include this feature.
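Assuming the same generic-property pattern, a network resource that joins tagged and untagged VLANs might look like this sketch; the key names and values are illustrative placeholders:
<Item>
  <rasd:OtherResourceType>network</rasd:OtherResourceType>
  <gprop:GenericProperty key="service_name">primary-vsw0</gprop:GenericProperty>
  <gprop:GenericProperty key="pvid">1</gprop:GenericProperty>
  <gprop:GenericProperty key="vid">21,22</gprop:GenericProperty>
</Item>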
The following fragment shows the min_port and max_port properties, which define a port range:
<gprop:GenericProperty key="min_port">6000</gprop:GenericProperty>
<gprop:GenericProperty key="max_port">6100</gprop:GenericProperty>
</Item>
</Section>
</Content>
</Envelope>
The following fragment shows a physio_device resource that represents a virtual function assigned to the ldg1 domain:
<Envelope>
<References/>
<Content xsi:type="ovf:VirtualSystem_Type" ovf:id="ldg1">
<Section xsi:type="ovf:VirtualHardwareSection_Type">
<Item>
<rasd:OtherResourceType>physio_device</rasd:OtherResourceType>
<gprop:GenericProperty key="name">
/SYS/MB/NET0/IOVNET.PF0.VF0</gprop:GenericProperty>
</Item>
</Section>
</Content>
</Envelope>
</data>
</cmd>
</LDM_interface>
The following XML example fragment shows how to use the ldm set-io command to set
the iov_bus_enable_iov property value to on for the pci_1 root complex.
<LDM_interface version="1.3">
<cmd>
<action>set-io</action>
<data version="3.0">
<Envelope>
<References/>
<Section xsi:type="ovf:VirtualHardwareSection_Type">
<Item>
<rasd:OtherResourceType>physio_device</rasd:OtherResourceType>
<gprop:GenericProperty key="name">pci_1</gprop:GenericProperty>
<gprop:GenericProperty key="iov_bus_enable_iov">
on</gprop:GenericProperty>
</Item>
</Section>
</Envelope>
</data>
</cmd>
</LDM_interface>
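This request corresponds roughly to the CLI command ldm set-io iov=on pci_1.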
The following XML example fragment shows how to use the ldm set-io command to set
the unicast-slots property value to 6 for the /SYS/MB/NET0/IOVNET.PF1 physical
function.
<LDM_interface version="1.3">
<cmd>
<action>set-io</action>
<data version="3.0">
<Envelope>
<References/>
<Section xsi:type="ovf:VirtualHardwareSection_Type">
<Item>
<rasd:OtherResourceType>physio_device</rasd:OtherResourceType>
<gprop:GenericProperty key="name">
/SYS/MB/NET0/IOVNET.PF1</gprop:GenericProperty>
<gprop:GenericProperty key="unicast-slots">6</gprop:GenericProperty>
</Item>
</Section>
</Envelope>
</data>
</cmd>
</LDM_interface>
The following XML example fragment shows how to use the ldm create-vf command to
create the /SYS/MB/NET0/IOVNET.PF1.VF0 virtual function with the following property
values.
unicast-slots=6
pvid=3
mtu=1600
<LDM_interface version="1.3">
<cmd>
<action>create-vf</action>
<data version="3.0">
<Envelope>
<References/>
<Section xsi:type="ovf:VirtualHardwareSection_Type">
<Item>
<rasd:OtherResourceType>vf_device</rasd:OtherResourceType>
<gprop:GenericProperty key="iov_pf_name">
/SYS/MB/NET0/IOVNET.PF1</gprop:GenericProperty>
<gprop:GenericProperty key="unicast-slots">6</gprop:GenericProperty>
<gprop:GenericProperty key="pvid">3</gprop:GenericProperty>
<gprop:GenericProperty key="mtu">1600</gprop:GenericProperty>
</Item>
</Section>
</Envelope>
</data>
</cmd>
</LDM_interface>
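This request corresponds roughly to the CLI command ldm create-vf unicast-slots=6 pvid=3 mtu=1600 /SYS/MB/NET0/IOVNET.PF1.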
The following XML example fragment shows how to use the ldm create-vf command to
create the number of virtual functions specified by the iov_pf_repeat_count_str value (3)
with the /SYS/MB/NET0/IOVNET.PF1 physical function. You cannot specify other property
values when you create multiple virtual functions with the iov_pf_repeat_count_str
property.
<LDM_interface version="1.3">
<cmd>
<action>create-vf</action>
<data version="3.0">
<Envelope>
<References/>
<Section xsi:type="ovf:VirtualHardwareSection_Type">
<Item>
<rasd:OtherResourceType>vf_device</rasd:OtherResourceType>
<gprop:GenericProperty key="iov_pf_name">
/SYS/MB/NET0/IOVNET.PF1</gprop:GenericProperty>
<gprop:GenericProperty key="iov_pf_repeat_count_str">
3</gprop:GenericProperty>
</Item>
</Section>
</Envelope>
</data>
</cmd>
</LDM_interface>
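This request corresponds roughly to the CLI command ldm create-vf -n 3 /SYS/MB/NET0/IOVNET.PF1.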
The following fragment shows a console resource:
<rasd:OtherResourceType>console</rasd:OtherResourceType>
<gprop:GenericProperty key="port">6000</gprop:GenericProperty>
<gprop:GenericProperty key="service_name">vcc2</gprop:GenericProperty>
<gprop:GenericProperty key="group">group-name</gprop:GenericProperty>
<gprop:GenericProperty key="enable-log">on</gprop:GenericProperty>
</Item>
</Section>
</Content>
</Envelope>
Domain Migration
This section describes what is contained in the <data> section for an ldm migrate-domain
command; an illustrative sketch follows the Note below.
The first <Content> node (without an <ldom_info> section) describes the source domain to
migrate. The second <Content> node (with an <ldom_info> section) describes the target
domain to which to migrate. The source and target domain names can be the same.
The <ldom_info> section for the target domain describes the machine to which to migrate
and the details needed to migrate to that machine:
target-host is the target machine to which to migrate.
user-name is the login user name for the target machine, which must be SASL 64-bit
encoded.
password is the password to use for logging into the target machine, which must be
SASL 64-bit encoded.
Note The Logical Domains Manager uses sasl_decode64() to decode the target user name
and password and uses sasl_encode64() to encode these values. SASL 64 encoding is
equivalent to base64 encoding.
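Based on the preceding description, a migrate-domain request might look like the following sketch. The domain name ldg1, the target host system2, and the credentials are placeholders; the user-name and password values are the SASL 64 (base64) encodings of the hypothetical strings user and password:
<LDM_interface version="1.3">
  <cmd>
    <action>migrate-domain</action>
    <data version="3.0">
      <Envelope>
        <References/>
        <Content xsi:type="ovf:VirtualSystem_Type" ovf:id="ldg1"/>
        <Content xsi:type="ovf:VirtualSystem_Type" ovf:id="ldg1">
          <Section xsi:type="ovf:ResourceAllocationSection_Type">
            <Item>
              <rasd:OtherResourceType>ldom_info</rasd:OtherResourceType>
              <gprop:GenericProperty key="target-host">system2</gprop:GenericProperty>
              <gprop:GenericProperty key="user-name">dXNlcg==</gprop:GenericProperty>
              <gprop:GenericProperty key="password">cGFzc3dvcmQ=</gprop:GenericProperty>
            </Item>
          </Section>
        </Content>
      </Envelope>
    </data>
  </cmd>
</LDM_interface>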
XML Schemas
The XML schemas that are used by the Logical Domains Manager are located in the
/opt/SUNWldm/bin/schemas directory. The file names are as follows:
cim-common.xsd – cim-common.xsd schema
cim-rasd.xsd – cim-rasd.xsd schema
cim-vssd.xsd – cim-vssd.xsd schema
cli-list-constraint-v3.xsd – cli-list-constraint-v3.xsd schema
combined-v3.xsd – LDM_interface XML schema
event-v3.xsd – LDM_Event XML schema
ldmd-binding.xsd – Binding_Type XML schema
ldmd-property.xsd – GenericProperty XML schema
ovf-core.xsd – ovf-core.xsd schema
ovf-envelope.xsd – ovf-envelope.xsd schema
ovf-section.xsd – ovf-section.xsd schema
ovf-strings.xsd – ovf-strings.xsd schema
ovfenv-core.xsd – ovfenv-core.xsd schema
ovfenv-section.xsd – ovfenv-section.xsd schema
Glossary
This glossary defines terminology, abbreviations, and acronyms in the Oracle VM Server for
SPARC documentation.
authorization A way in which to determine who has permission to perform tasks and access data by using Oracle Solaris
OS rights.
bsmconv Command to enable the BSM (see the bsmconv(1M) man page).
bsmunconv Command to disable the BSM (see the bsmunconv(1M) man page).
compliance Determining whether a system's configuration is in compliance with a predefined security profile.
configuration Name of the logical domain configuration that is saved on the service processor.
constraints To the Logical Domains Manager, constraints are one or more resources you want to have assigned to a
particular domain. You either receive all the resources you ask to be added to a domain or you get none of
them, depending upon the available resources.
control domain A privileged domain that creates and manages other logical domains and services by using the Logical
Domains Manager.
Gb Gigabit.
guest domain Uses services from the I/O and service domains and is managed by the control domain.
GLDv3 Generic LAN Driver version 3.
I/O domain Domain that has direct ownership of and direct access to physical I/O devices and that shares those devices
to other logical domains in the form of virtual devices.
IB InfiniBand.
I/O Input/output devices, such as internal disks and PCIe controllers and their attached adapters and devices.
ioctl Input/output control call.
MAC Media access control address, which Logical Domains Manager can automatically assign or you can assign
manually.
mem, memory Memory unit; the default size is in bytes, but you can also specify gigabytes (G), kilobytes (K), or
megabytes (M). Virtualized memory of the server that can be allocated to guest domains.
metadb Command to create and delete replicas of the Solaris Volume Manager metadevice state database (see the
metadb(1M) man page).
metaset Command to configure disk sets (see the metaset(1M) man page).
mhd Command to perform multihost disk control operations (see the mhd(7I) man page).
ndpsldcc Netra DPS Logical Domain Channel Client. See also vdpcc.
ndpsldcs Netra DPS Logical Domain Channel Service. See also vdpcs.
NIS Network Information Services.
NIU Network Interface Unit (Oracle's Sun SPARC Enterprise T5120 and T5220 servers).
NTS Network terminal server.
OID Object identifier, which is a sequence of numbers that uniquely identifies each object in a MIB.
PA Physical address.
physical domain The scope of resources that are managed by a single Oracle VM Server for SPARC instance. A physical
domain might be a complete physical system as is the case of supported SPARC T-Series platforms. Or, it
might be either the entire system or a subset of the system as is the case of supported SPARC M-Series
platforms.
physical function A PCI function that supports the SR-IOV capabilities as defined in the SR-IOV specification. A physical
function contains the SR-IOV capability structure and is used to manage the SR-IOV functionality.
Physical functions are fully featured PCIe functions that can be discovered, managed, and manipulated
like any other PCIe device. Physical functions have full configuration resources, and can be used to
configure or control the PCIe device.
RA Real address.
RAID Redundant Array of Inexpensive Disks, which enables you to combine independent disks into a logical
unit.
RPC Remote Procedure Call.
SAX Simple API for XML parser, which traverses an XML document. The SAX parser is event-based and used
mostly for streaming data.
unicast Network communication that takes place between a single sender and a single receiver.
uscsi User SCSI command interface (see the uscsi(7I) man page).
var Variable.
VBSC Virtual blade system controller.
vcc, vconscon Virtual console concentrator service with a specific port range to assign to guest domains.
vcons, vconsole Virtual console for accessing system-level messages. A connection is achieved by connecting to the
vconscon service in the control domain at a specific port.
vcpu Virtual central processing unit. Each hardware thread (strand) in a server is represented as a virtual CPU.
vdc Virtual disk client.
vdisk A virtual disk is a generic block device associated with different types of physical devices, volumes, or files.
vdpcc Virtual data plane channel client in a Netra DPS environment.
vdpcs Virtual data plane channel service in a Netra DPS environment.
vds, vdiskserver Virtual disk server enables you to import virtual disks into a logical domain.
vdsdev, vdiskserverdevice Virtual disk server device is exported by the virtual disk server. The device can be an entire disk, a slice on a disk, a file, or a disk volume.
virtual function A PCI function that is associated with a physical function. A virtual function is a lightweight PCIe function
that shares one or more physical resources with the physical function and with other virtual functions that
are associated with the same physical function. Virtual functions are only permitted to have configuration
resources for their own behavior.
VNIC Virtual network interface card, which is a virtual instantiation of a physical network device that can be
created from the physical network device and assigned to a zone.
vldc Virtual logical domain channel service.
vnet Virtual network device implements a virtual Ethernet device and communicates with other vnet devices
in the system by using the virtual network switch (vswitch).
volfs Volume Management file system (see the volfs(7FS) man page).
vsw, vswitch Virtual network switch that connects the virtual network devices to the external network and also switches
packets between them.
Index
A
accessing, Fibre Channel virtual functions from a guest domain, 134
adding
  Ethernet virtual functions to an I/O domain, 99
  Fibre Channel virtual functions to an I/O domain, 131
  InfiniBand virtual functions to a root domain, 116
  InfiniBand virtual functions to an I/O domain, 113
  memory to a domain, 322
  unaligned memory, 325–326
  virtual disks, 172
adjusting, interrupt limit, 380–382
allocating
  CPU resources, 308–311
  resources, 307–308
  world-wide names for Fibre Channel virtual functions, 123–124
applying
  max-cores constraint, 309
  whole-core constraint, 308
assigning
  endpoint device to an I/O domain, 143–145
  MAC addresses, 241–243
    automatically, 241–243
    manually, 241–243
  master domain, 368–369
  PCIe buses to a root domain, 69–76
  PCIe endpoint device, 72
  physical resources to domains, 318–322
  rights profiles, 33–37
  roles, 33–37
  roles to users, 35–36
  VLANs, 257–259
  VLANs in an Oracle Solaris 10 guest domain, 258–259
  VLANs in an Oracle Solaris 10 service domain, 258
  VLANs in an Oracle Solaris 11 guest domain, 258
  VLANs in an Oracle Solaris 11 service domain, 257
audit records
  reviewing, 43–45, 45
auditing
  disabling, 43–45
  disabling Logical Domains Manager, 44
  enabling, 43–45
  enabling Logical Domains Manager, 43
  reviewing records, 43–45, 45
authorization, ldm subcommands, 37
autorecovery policy for domain configurations, 346–347
autosave configuration directories
  restoring, 385
  saving, 385

B
back ends
  See also virtual disk back end exporting
backward compatibility, exporting volumes, 182
blacklisting
  Fault Management Architecture (FMA), 353
  faulty hardware resources, 354–355
I/O virtualization
  enabling, 87
  enabling for a PCIe bus, 162–164
identifying, InfiniBand functions, 119–121
ILOM interconnect configuration, verifying, 54
ILOM interconnect service, enabling, 55
InfiniBand SR-IOV, requirements, 108
installing
  guest domain when install server in a VLAN, 259
  ldmp2v, 405–407
  Oracle Solaris OS from a DVD, 61–63
  Oracle Solaris OS from an ISO file, 63–64
  Oracle Solaris OS in guest domains, 60–65
  Oracle VM Server for SPARC MIB, 427–429
  Oracle VM Server for SPARC MIB software, 428–429
  Oracle VM Server for SPARC template utilities, 392
  using JumpStart (Oracle Solaris 10), 64–65
inter-vnet LDC channels, 230–231
inter-vnet-links, PVLANs, 260
interrupt limit, adjusting, 380–382
IPMP
  configuring in a service domain, 251
  configuring in an Oracle VM Server for SPARC environment, 248–255
  configuring virtual network devices into a group, 248–249, 249–250
ISO images
  exporting, 187–190
  exporting from service domain to guest domain, 189–190

J
jumbo frames
  compatibility with jumbo-unaware versions of the Oracle Solaris 10 vnet and vsw drivers, 276–277
  configuring, 273–277
JumpStart, using to install the Oracle Solaris 10 OS on a guest domain, 64–65

L
LDC, See logical domain channels (LDC)
ldmd, See Logical Domains Manager daemon
ldmp2v
  backend devices, 404
  collection phase, 402, 407–408
  configuring, 405–407
  conversion phase, 403, 410–413
  installing, 405–407
  limitations, 405–406
  Oracle VM Server for SPARC P2V conversion tool, 401–403
  Oracle VM Server for SPARC P2V tool, 31
  preparation phase, 402–403, 408–409
  prerequisites, 405
ldmp2v(1M) command, 402
limitations
  direct I/O, 146–147
  Ethernet SR-IOV, 89
  Fibre Channel virtual functions, 122
  non-primary root domains, 161–162
  physical network bandwidth, 235
  SR-IOV, 83–84
link aggregation, using with a virtual switch, 271–272
link-based IPMP, using, 252–255
listing
  domain resources, 333–339
  InfiniBand virtual functions, 117–118
  PVLAN information, 263–264
  resource constraints, 338
  resources as machine-readable output, 333–334
loading
  Oracle VM Server for SPARC MIB module, 428–429
  Oracle VM Server for SPARC MIB module in to Oracle Solaris SNMP agent, 429
lofi, exporting files and volumes as virtual disks, 182
logical domain channels (LDC), 28
  inter-vnet, 230–231
logical domain channels (LDCs), 374–377
Logical Domains constraints database
  restoring, 386
  saving, 386
Logical Domains Manager, 26, 28
network booting, I/O domain by using an Ethernet SR-IOV virtual functions, 102
network configuration, Ethernet SR-IOV, 101–102
network device configurations, viewing, 231–234
network device statistics, viewing, 231–234
network devices
  network bandwidth limit, setting, 235–237
  using, 243–244
network interface name, 238–241
  finding, 240
NIU hybrid I/O
  disabling, 271
  enabling, 271
  using, 266–271
non-interactive domain migration, 301
non-physically bound resources, removing, 320
non-primary root domain, restrictions, 160–161
non-primary root domains, 159–160
  assigning a PCIe endpoint device, 159–160
  assigning a PCIe SR-IOV virtual function, 159–160
  limitations, 161–162
  managing direct I/O devices, 164–165
  managing SR-IOV virtual functions, 165–167
  overview, 159–160

O
obtaining, domain migration status, 302
Oracle Solaris 10 networking, 225–226
Oracle Solaris 11 networking, 222–225
Oracle Solaris 11 networking-specific feature differences, 277–279
Oracle Solaris OS
  breaks, 365
  installing on a guest domain, 60–65
    from a DVD, 61–63
    from an ISO file, 63–64
  network interface name (Oracle Solaris 11)
    finding, 239
  operating with Oracle VM Server for SPARC, 365–366
Oracle Solaris SNMP agent, loading Oracle VM Server for SPARC MIB module in to, 429
Oracle VM Server for SPARC
  troubleshooting, 32
  using with the service processor, 366–367
Oracle VM Server for SPARC Management Information Base (MIB), 423
  See Oracle VM Server for SPARC MIB
Oracle VM Server for SPARC MIB, 32, 423
  configuring, 427–429
  domain resource pool, 438–439
  domain scalar variables, 438–439
  for Oracle VM Server for SPARC, 32
  installing, 427–429
  Logical Domains Manager and, 426
  object tree, 426–427
  overview, 423–427
  querying, 431–433
  related products and features, 424
  software components, 424
  starting domains, 460–463
  stopping domains, 460–463
  system management agent, 425
  virtual console tables, 447–449
  virtual disk tables, 442–444
  virtual memory tables, 441–442
  virtual network tables, 445–447
  virtual network terminal service (vNTS), 447–449
  XML-based control interface
    parsing, 426
Oracle VM Server for SPARC MIB module
  loading, 428–429
  loading in to Oracle Solaris SNMP agent, 429
Oracle VM Server for SPARC MIB module traps
  receiving, 453–454
  sending, 453–454
Oracle VM Server for SPARC MIB object
  retrieving
    snmpget, 432
Oracle VM Server for SPARC MIB object values
  retrieving
    snmptable, 433
    snmpwalk, 432–433
Oracle VM Server for SPARC MIB security, managing, 430–431
Oracle VM Server for SPARC MIB software
  configuring, 428–429
  installing, 428–429, 429
  removing, 428–429, 429
Oracle VM Server for SPARC MIB tables
  CMI table (ldomCMITable), 451–452
  core table (ldomCoreTable), 452
  cryptographic units table (ldomCryptoTable), 450–451
  domain policy table (ldomPolicyTable), 436–437
  domain table (ldomTable), 433–436
  environment variables table (ldomEnvVarsTable), 436
  I/O bus table (ldomIOBusTable), 451
  scalar variables for CMI resource pool, 439
  scalar variables for CPU resource pool, 438
  scalar variables for cryptographic resource pool, 439
  scalar variables for domain version information, 452
  scalar variables for I/O bus resource pool, 439
  service processor configuration table (ldomSPConfigTable), 437–438
  virtual console concentrator table (ldomVccTable), 447–448
  virtual console group table (ldomVconsTable), 448–449
  virtual console relationship table (ldomVconsVccRelTable), 449
  virtual CPU table (ldomVcpuTable), 439–441
  virtual disk service device table (ldomVdsdevTable), 443–444
  virtual disk service table (ldomVdsTable), 442–443
  virtual disk table (ldomVdiskTable), 444
  virtual memory physical binding table (ldomVmemPhysBindTable), 442
  virtual memory table (ldomVmemTable), 441–442
  virtual network device table (ldomVnetTable), 446–447
  virtual switch service device table (ldomVswTable), 446
Oracle VM Server for SPARC MIB traps, 454–460
  CMI resource change (ldomCMIChange), 459–460
  domain creation (ldomCreate), 454–455
  domain destroy (ldomDestroy), 455
  domain state change (ldomStateChange), 455
  receiving, 454
  sending, 453–454
  virtual console concentrator change (ldomVccChange), 458–459
  virtual console group change (ldomVconsChange), 459
  virtual CPU change (ldomVCpuChange), 455–456
  virtual disk change (ldomVdiskChange), 457
  virtual disk service change (ldomVdsChange), 456–457
  virtual memory change (ldomVMemChange), 456
  virtual network change (ldomVnetChange), 458
  virtual switch change (ldomVswChange), 457–458
Oracle VM Server for SPARC P2V tool
  configuring, 406–407
  ldmp2v, 31, 401–403
  limitations, 405–406
Oracle VM Server for SPARC template utilities, installing, 392

P
parseable domain listing
  showing, 369
  viewing, 369
parsing
  XML-based control interface
    Oracle VM Server for SPARC MIB, 426
PCIe bus, 67–68
  changing the hardware, 150–152
  enabling I/O virtualization, 87
PCIe SR-IOV virtual functions
  See virtual functions
  planning for, 88–89
performance
  maximizing for virtual networks, 226–227, 227
  requirements for maximizing virtual networks, 226–227
performance register access, setting, 339–341
physical-bindings constraint, removing, 319
XML tags
  <cmd>, 472
  <data>, 472–473
  <LDM_interface>, 471
XML transport frames, 469–470
XMPP
  local connections, 470
  server, 470

Z
ZFS
  storing disk images with, 192–194
  using with virtual disks, 198
  virtual disks and, 192–196
ZFS file, storing a disk image by using a, 193–194
ZFS pool, configuring in a service domain, 192
ZFS volume
  exporting as a full disk, 180–181
  exporting as a single-slice disk, 181
  storing a disk image by using a, 193
ZFS volumes, exporting a virtual disk back end multiple times, 172–173