PowerVM Processor Virtualization Concepts and Configuration
Agenda
Power Systems processor partitioning capabilities by platform
The shared processor pool
The difference between physical, virtual, and logical processors
SPLPAR processor minimum, desired, and maximum settings
Recommendations for SPLPAR processor settings
Multiple Shared Processor Pools
Suggestions for shared processor pool settings
Introduction
Charlie Cler, Executive I/T Specialist, IBM Systems & Technology Group, [email protected]
St. Louis, MO, USA
Manufacturing engineer, specialized in robotic assembly lines
Manufacturing software specialist
Unix systems specialist: RS/6000, eServer pSeries, System p, Power Systems
PowerVM components
The scalable virtualization platform for your mission-critical UNIX, Linux and i5/OS applications!
Table: PowerVM features (including PowerVM Lx86) by edition: Express Edition, Standard Edition, and Enterprise Edition.
PowerVM (hardware partitioning) - AIX, IBM i, Linux
- Multiple partitions per server
- SPLPARs using Micro-Partitioning technology
- Virtual and/or dedicated I/O
- Includes Dynamic LPARs
Workload Partitions (OS-based partitioning) - AIX 6.1
- Multiple workspaces per AIX image
- Runs inside an LPAR or SPLPAR (Micro-Partition)
Dynamic LPARs - AIX, IBM i, Linux
Timeline: POWER4 (Dynamic LPAR) → POWER5 → POWER6
The shared processor pool was introduced with POWER5 and AIX 5.3. POWER6 adds Multiple Shared Processor Pools, which can be used with both AIX 5.3 and AIX 6.1.
Charts and diagram: CPU usage over time for several LPARs against their guaranteed capacity, and a 12-core server in which dedicated-processor LPARs (SMT on or off, presenting logical and virtual processors) and shared-pool SPLPARs (with processing-unit assignments, uncapped weights, and pool settings such as MaxPU and ReservedPU) are dispatched by the hypervisor onto the physical cores.
Diagram: a 12-core server with one LPAR using 1 dedicated core and another using 2 dedicated cores; the remaining nine cores form shared processor Pool 0, managed by the hypervisor.
Learning points: (1) All activated, non-dedicated cores are automatically used by the shared processor pool. (2) The shared processor pool size can change as dedicated LPARs are started/stopped.
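A quick way to see the pool from inside an AIX SPLPAR is lparstat; the sketch below shows typical field names on AIX 5.3/6.1, with example values only.

    # Show partition and shared-pool configuration from AIX
    lparstat -i | grep -i -E 'pool|physical cpus'
    # Example of some matching fields:
    #   Shared Pool ID                 : 0
    #   Active Physical CPUs in system : 12
    #   Active CPUs in Pool            : 9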
Virtual Processors
Physical processing cycles are presented to AIX through Virtual Processors
Diagram: LPARs #1 and #2 use dedicated cores (1 and 2 cores); SPLPARs #3 through #8 are assigned virtual processors that the hypervisor maps onto the physical cores of shared Pool 0.
Learning points: (1) Each virtual processor can represent 0.1 to 1 of a physical processor. (2) The number of virtual processors specified for an LPAR represents the maximum number of physical processors that the LPAR can access. (3) You will not be sharing pooled processors until the number of virtual processors exceeds the size of the shared pool.
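To check how many virtual processors an SPLPAR currently has, and its limits, lparstat can again be used from AIX; field names are typical, values are examples.

    # Show the virtual-processor settings for this SPLPAR
    lparstat -i | grep -i 'virtual cpus'
    #   Online Virtual CPUs  : 4
    #   Maximum Virtual CPUs : 8
    #   Minimum Virtual CPUs : 1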
Processing Units
Processing Units allow physical processors to be allocated in fractional increments
Diagram: SPLPARs #3 through #8 with processing-unit assignments (PU = 0.5, 0.1, 1.2, 1.5, 0.8, and 0.8) drawn from shared Pool 0, alongside the two dedicated-processor LPARs; the hypervisor maps the virtual processors onto the physical cores.
Learning points: (1) One processing unit is equivalent to one core's worth of compute cycles. (2) The specified processing units are guaranteed to each LPAR no matter how busy the shared pool is. (3) The sum of all assigned processing units cannot exceed the size of the shared pool.
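From AIX, this guaranteed allocation appears as the entitled capacity; the field names below come from lparstat, with example values only.

    # Show the entitlement (processing-unit) settings for this SPLPAR
    lparstat -i | grep -i capacity
    # Example of some matching fields:
    #   Entitled Capacity  : 0.80
    #   Minimum Capacity   : 0.10
    #   Maximum Capacity   : 2.00
    #   Capacity Increment : 0.01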
Virtual processors set the range of permissible processing units:
- 1 virtual processor: 0.1 - 1 processing units
- 2 virtual processors: 0.2 - 2 processing units
- 3 virtual processors: 0.3 - 3 processing units
- 4 virtual processors: 0.4 - 4 processing units
- x virtual processors: 0.1x - x processing units
Example: An SPLPAR has two virtual processors. This means that the assigned processing units must be somewhere between 0.2 and 2. The maximum processing units that the SPLPAR can utilize is two. If we want this SPLPAR to be able to use more than two processing units' worth of cycles, we need to add more virtual processors, perhaps 2 more. Assigned processing units must now be at least 0.4 and the maximum utilization will be 4.
Learning point: The number of virtual processors establishes the maximum number of processing units that an SPLPAR can access.
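If more headroom is needed, virtual processors can be added dynamically from the HMC command line; the managed-system and partition names below are placeholders.

    # Dynamically add 2 virtual processors to an SPLPAR (DLPAR operation)
    # The result must stay within the profile's maximum virtual processors.
    chhwres -r proc -m MySystem -o a -p mySPLPAR --procs 2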
Diagram: capped and uncapped SPLPARs drawing virtual processors from the shared pool, alongside the two dedicated-processor LPARs, all dispatched by the hypervisor onto the physical cores.
Learning points: (1) Capped LPARs are limited to their processing-unit setting and cannot access extra cycles. (2) Uncapped LPARs have a weight factor, which is a share-based mechanism for distributing excess processor cycles.
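Whether a running SPLPAR is capped or uncapped, and its weight, can be checked from AIX; field names are typical, values are examples.

    # Show the capping mode and uncapped weight
    lparstat -i | grep -i -E 'mode|weight'
    #   Mode                     : Uncapped
    #   Variable Capacity Weight : 128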
Charts: processing-unit consumption over time (three examples).
Desired Processing Units: establishes a guaranteed amount of processor cycles for each LPAR. With Uncapped = yes, the LPAR can utilize excess cycles; with Uncapped = no, the LPAR is limited to its desired processing units.
Desired Virtual Processors: establishes an upper limit on possible processor consumption by an LPAR (when Uncapped = yes).
Diagram: two SPLPARs with the same desired processing units (1.6), one configured with four desired virtual processors and one with two.
With four virtual processors: if all four have work to do, each receives 0.4 processing units. The maximum processing units available to handle peak workload is 4. Individual processes/threads may run slower.
With two virtual processors: if both have work to do, each receives 0.8 processing units. The maximum processing units available to handle peak workload is 2. Individual processes/threads may run faster, but workloads with many processes/threads may run slower.
Learning point: You need to consider peak processing requirements and the job stream (single or multithreaded) when setting the desired number of virtual processors.
Each virtual processor will receive 1.0 processing units from the 5.8 available. Max processing units that can be consumed is 4 because we have 4 virtual processors.
Each virtual processor will receive 1.0 processing units from the 5.8 available. Max processing units that can be consumed is 2 because we only have 2 virtual processors.
Learning point: In the presence of excess processing units, SPLPARs with a higher desired virtual processor count will be able to access more excess processing units.
Processor Folding (AIX 5.3)
Diagram: an SPLPAR with four desired virtual processors and 1.6 desired processing units; idle virtual processors are folded so the work runs on fewer of them.
If all four virtual processors have work to be done, each will receive 0.4 processing units
If only two virtual processors have work to be done, the hypervisor will temporarily direct all processing units to the two busy virtual processors. Each will receive 0.8 processing units.
Processor folding can be disabled with: schedo -o vpm_xvcpus=-1 (see section 5.8.3, "Processor Folding", of the IBM Redbook AIX 5L Differences Guide Version 5.3 Addendum, SG24-7414).
Learning point: Size the number of desired virtual processors for the peak workload. The hypervisor will automatically allocate resources to the virtual processors with work to be done
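The folding tunable can also be queried and reset with schedo; a minimal sketch using the tunable named above:

    # Query the current value of the folding tunable
    schedo -o vpm_xvcpus
    # Disable processor folding (as noted above)
    schedo -o vpm_xvcpus=-1
    # Restore the default folding behavior
    schedo -d vpm_xvcpus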
Chart: cores allocated to each SPLPAR across successive 10 millisecond dispatch cycles. Annotations: previously very busy, receives full allocation; reduced, did not use previous allocation; running steady workload; waiting on I/O, cedes cycles; previously busy, receives excess cycles.
Learning point: The hypervisor automatically adjusts allocations based on each SPLPAR's use of cycles during the previous dispatch cycle.
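The per-interval effect of these adjustments is visible from AIX with lparstat; the column names are standard, and the interval/count values are just an example.

    # Report entitlement consumption every 2 seconds, 5 times
    lparstat 2 5
    # Key columns: physc = physical processors consumed in the interval,
    #              %entc = physc as a percentage of entitled capacity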
Diagram: the shared-pool SPLPARs and their processing-unit assignments (as in the earlier figures), next to an HMC-settings scale from 1 to 8 illustrating the minimum, desired, and maximum values around an SPLPAR's PU = 1.5 setting.
Learning point: The min/max settings have nothing to do with resource allocation during normal operation. Min/max are limits applied only when making a dynamic change to PU or VP via the HMC.
* Min also allows an LPAR to start with less than the desired resource allocations.
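Dynamic (DLPAR) changes to processing units are made from the HMC and must stay within these min/max limits; the names below are placeholders, and flags may vary slightly by HMC level.

    # Add 0.5 processing units to a running SPLPAR
    chhwres -r proc -m MySystem -o a -p mySPLPAR --procunits 0.5
    # Remove 0.5 processing units again
    chhwres -r proc -m MySystem -o r -p mySPLPAR --procunits 0.5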
Chart: relative system throughput, single-threaded (ST) vs. simultaneous multi-threading (SMT).
SMT utilizes unused execution-unit cycles and dispatches two threads per processor: it's like doubling the number of processors.
Chart: throughput vs. number of users with SMT = Off, POWER5 with SMT = On, and POWER6 with SMT = On; enabling SMT raises peak throughput by roughly 30% to 50% depending on processor generation.
Diagrams: thread dispatch with SMT = Off vs. SMT = On on a lightly loaded and on a heavily loaded system.
On a lightly loaded system, SMT does not improve system throughput, and it does not make a single thread run faster.
On a heavily loaded system, SMT does improve system throughput, but it still does not make a single thread run faster (unless the thread would otherwise be waiting in the run queue).
Logical Processors
Simultaneous Multi-Threading (SMT) threads are represented by logical processors
Diagram: LPAR #1 (SMT = On) and LPAR #2 (SMT = Off) with dedicated cores; with SMT on, each processor presents two logical processors to the operating system, while virtual processors are mapped by the hypervisor onto the physical cores.
Learning point: SMT requires POWER5 or later hardware and AIX 5.3 (or a supported Linux version). SMT can be dynamically enabled or disabled via an AIX command (smtctl; see below).
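A minimal sketch of that command, smtctl, which controls SMT on AIX 5.3 and later:

    # Show the current SMT status
    smtctl
    # Turn SMT off immediately (not preserved across a reboot)
    smtctl -m off -w now
    # Turn SMT back on immediately
    smtctl -m on -w now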
Chart: cores allocated to each SPLPAR across two 10 millisecond dispatch cycles, with the same annotations as before (full allocation when previously very busy, reduced allocation when the previous allocation went unused, cycles ceded while waiting on I/O, excess cycles to the previously busy SPLPAR). Note that SMT is off for SPLPAR #1; therefore it only runs thread 0 during its dispatch window.
Learning point: For SPLPARs with SMT enabled, each allocated processor presents two threads (logical processors) to the operating system.
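The logical processors that SMT presents can be listed from AIX; the interval values below are examples.

    # List the logical processors visible to this LPAR
    bindprocessor -q
    # Per-logical-processor utilization with an SMT summary (one 2-second sample)
    mpstat -s 2 1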
Check "Shared" to have this LPAR's processor resources come from the Shared Processor Pool.
Specify minimum, desired, and maximum Virtual Processors, in whole processor increments. Select Uncapped if you want this SPLPAR to utilize spare processor cycles.
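The same settings can be verified afterwards from the HMC command line; the attribute names below are typical for recent HMC levels, and the system/LPAR names are placeholders.

    # Show the processor-related attributes saved in the LPAR's profile
    lssyscfg -r prof -m MySystem --filter "lpar_names=mySPLPAR" \
      -F name,proc_mode,min_procs,desired_procs,max_procs,min_proc_units,desired_proc_units,max_proc_units,sharing_mode,uncap_weight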
Shared processor pool - subset (or all) of the physical CPUs in a system
Physical processors shared among all of the SPLPARs within the shared processor pool
Uncapped = No: processing unit usage is limited to the desired setting.
Uncapped = Yes: processing unit usage is allowed to exceed the desired processing unit setting.
Memory configuration
Diagram: real memory for two LPARs and the physical memory consumed by the hypervisor's hardware page table (HPT) for each, based on the HMC memory settings.
LPAR #1: Desired = 6 GB; with Maximum = 8 GB* the HPT consumes 128 MB, and with Maximum = 16 GB* it would consume 256 MB.
LPAR #2: Desired = 5 GB, Maximum = 8 GB; the HPT consumes 128 MB.
Learning Points: (1) The HPT presents a contiguous range of memory, starting at address 0, to each LPAR (2) Larger maximum memory settings increase the physical memory consumed by the HPT
* Only one maximum memory setting is permitted per LPAR. Multiple maximums are shown here to demonstrate the corresponding HPT physical memory size.
Memory Configuration
Desired: amount of memory normally allocated to the LPAR.
Minimum: minimal amount of memory that must be available for the LPAR to start; also sets a low-water mark for DLPAR changes to the desired memory setting.
Maximum: sets the high-water mark for DLPAR changes to the desired memory setting.
Learning Points: (1) Set maximum memory 15%-30% greater than desired memory to allow for some DLPAR increase with minimal waste in the HPT. (2) Use powers of 2 for the maximum memory setting (2 GB, 4 GB, 8 GB, 16 GB, 32 GB, 64 GB, etc.).
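DLPAR memory changes are made from the HMC and are bounded by the maximum memory setting (which, per the figure above, also determines HPT size); the names below are placeholders.

    # Dynamically add 1024 MB of memory to an LPAR
    # The new total must stay at or below the profile's maximum memory.
    chhwres -r mem -m MySystem -o a -p myLPAR -q 1024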
POWER6 required
Diagram: dedicated-processor LPARs (1 and 2 cores) alongside shared processor Pool 0, managed by the hypervisor.
When this Dedicated CPU LPAR is deactivated, allow unallocated CPUs to be used by the Shared Processor Pool?
Allow excess processor cycles from this Dedicated CPU LPAR to be donated to the Shared Processor Pool?
POWER6 required
Diagram: SPLPARs (DB servers, application servers, and web servers) grouped into shared processor pools Pool-0, Pool-1, and Pool-2.
Sets an upper limit on processor resources accessible to a group of SPLPARs
This is not a hard division of the shared processor pool into smaller sub-pools
Up to 64 pools can be configured per server
Can help with software licensing
Can help balance Prod/Dev on the same server
POWER6 required
Diagram: SPLPARs with uncapped weights and processing-unit assignments (e.g., Weight = 30, PU = 1.5) placed in Pool #1 and Pool #2, alongside the dedicated-processor LPARs; the hypervisor dispatches all pools onto the physical cores.
MaxPU: a whole number that specifies the maximum processing units that can be consumed by all of the SPLPARs running in this pool.
ReservedPU: additional, guaranteed processing units for the pool (could be 0).
Default Pool ID = 0 (MaxPU and ReservedPU cannot be specified for the Default Pool).
POWER6 required
Pool IDs are fixed and numbered 0...63
SPLPARs can dynamically be moved to a different pool
Disable a pool by setting its maximum processing units to zero
Default Pool ID = 0; you cannot set reserved processing units or maximum processing units for the Default Pool
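On a POWER6 system the pools can also be listed from the HMC command line; this is a sketch only, and the exact resource and attribute names may vary by HMC level.

    # List the shared processor pools defined on the managed system
    lshwres -r procpool -m MySystem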
Charts: virtual processors and processing units over time for three example SPLPARs.
Desired Virtual Processors: find the peak, move up to the next whole number
Desired Processing Units: more subjective, no best answer
Need a mix of users and donors to have processor sharing
High priority applications: set processing units higher
Low priority applications: set processing units lower
Example: normal requirement is 0.9 processing units; peak requirement is 3.8 processing units.
LPAR Settings:
Virtual Processor Sizing: set desired virtual processors large enough to handle the peak load.
  Desired = 4 (round the 3.8 peak requirement up to the next whole number)
  Minimum* = a starting point might be 2, or one-half of desired
  Maximum* = number of CPUs in the shared pool
Processing Units: set desired processing units to address the non-peak, normal workload.
  Desired = 0.9 (set to match the 0.9 processing units normal requirement)
  Minimum* = a good starting point might be 0.5, or approximately one-half of desired
  Maximum* = number of CPUs in the shared pool
* These only come into play if we make dynamic changes using the HMC
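As a sketch only, these values could be captured in the LPAR's profile from the HMC command line; the profile/system names, the nine-processor pool size, the weight value, and the attribute names are assumptions to adapt to your environment.

    # Update the profile with the sizing worked out above
    chsyscfg -r prof -m MySystem -i "name=normal,lpar_name=mySPLPAR,proc_mode=shared,min_procs=2,desired_procs=4,max_procs=9,min_proc_units=0.5,desired_proc_units=0.9,max_proc_units=9.0,sharing_mode=uncap,uncap_weight=128"
    # The new values take effect the next time the LPAR is activated with this profile.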
LPAR CPU utilization > 100% of entitlement is a good thing (the LPAR is using spare cycles!)
Plan to measure utilization at the server level. Consolidate like software onto the same server for improved software utilization and reduced software license costs.
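One way to watch spare capacity at the pool/server level from any SPLPAR is lparstat's app column; it appears only when the LPAR has been granted pool-utilization authority ("Allow performance information collection" on the HMC), and the interval values are examples.

    # Sample pool-wide idle capacity every 5 seconds, 3 times
    lparstat 5 3
    # The 'app' column reports the number of available (idle) processors in the shared pool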
Table: availability of partitioning tools and capabilities (static LPAR, dynamic LPAR / PLM, static and dynamic allocation of whole CPUs) by processor family (POWER4, POWER5, POWER6).
Learning point: Ability to deploy tools is dependent upon the OS version and processor model.
Table: POWER6 feature support by operating system level (AIX 5.3 and AIX 6.1).
Learning point: Most of the new features introduced with POWER6 are supported with AIX 5.3
Additional Information
www.ibm.com/redbooks
PowerVM Virtualization on IBM System p: Introduction and Configuration (SG24-7940)
PowerVM Virtualization on IBM System p: Managing and Monitoring (SG24-7590)
Special Notices
This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM offerings available in your area.

Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY 10504-1785 USA.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or guarantees either expressed or implied. All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations and conditions.

IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal without notice.

IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies. All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.

Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document should verify the applicable data for their specific environment.
Trademarks
The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both.
Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market. Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.