Intel Programming Reference
June 2023
319433-049
Notices & Disclaimers
This document contains information on products in the design phase of development. The information here is
subject to change without notice. Do not finalize a design with this information.
Intel technologies may require enabled hardware, software or service activation.
No product or component can be absolutely secure.
Your costs and results may vary.
You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning
Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter
drafted which includes subject matter disclosed herein.
All product plans and roadmaps are subject to change without notice.
The products described may contain design defects or errors known as errata which may cause the product to deviate from
published specifications. Current characterized errata are available on request.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability,
fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of
dealing, or usage in trade.
Code names are used by Intel to identify products, technologies, or services that are in development and not publicly
available. These are not “commercial” names and not intended to function as trademarks.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document, with
the sole exception that a) you may publish an unmodified copy and b) code included in this document is licensed subject to
the Zero-Clause BSD open source license (0BSD), https://opensource.org/licenses/0BSD. You may create software
implementations based on this document and in compliance with the foregoing that are intended to execute on the Intel
product(s) referenced in this document. No rights are granted to create modifications or derivatives of this document.
© Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other
names and brands may be claimed as the property of others.
ii Ref. # 319433-049
Revision History
Revision Description Date
REVISION HISTORY
CHAPTER 1
FUTURE INTEL® ARCHITECTURE INSTRUCTION EXTENSIONS AND FEATURES
1.1 About This Document. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
1.2 DisplayFamily and DisplayModel for Future Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
1.3 Instruction Set Extensions and Feature Introduction in Intel® 64 and IA-32 Processors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
1.4 Detection of Future Instructions and Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
1.5 CPUID Instruction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
CPUID—CPU Identification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-4
1.6 Compressed Displacement (disp8*N) Support in EVEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-51
1.7 bfloat16 Floating-Point Format. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-52
CHAPTER 2
INSTRUCTION SET REFERENCE, A-Z
2.1 Instruction Set Reference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-1
AADD—Atomically Add. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-2
AAND—Atomically AND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-4
AOR—Atomically OR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-6
AXOR—Atomically XOR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2-8
CMPccXADD—Compare and Add if Condition is Met . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-10
PBNDKB—Platform Bind Key to Binary Large Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-15
PCONFIG—Platform Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-19
RDMSRLIST—Read List of Model Specific Registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-30
VBCSTNEBF162PS—Load BF16 Element and Convert to FP32 Element With Broadcast . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-33
VBCSTNESH2PS—Load FP16 Element and Convert to FP32 Element with Broadcast. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-34
VCVTNEEBF162PS—Convert Even Elements of Packed BF16 Values to FP32 Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-35
VCVTNEEPH2PS—Convert Even Elements of Packed FP16 Values to FP32 Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-36
VCVTNEOBF162PS—Convert Odd Elements of Packed BF16 Values to FP32 Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-37
VCVTNEOPH2PS—Convert Odd Elements of Packed FP16 Values to FP32 Values. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-38
VCVTNEPS2BF16—Convert Packed Single-Precision Floating-Point Values to BF16 Values . . . . . . . . . . . . . . . . . . . . . . . . 2-39
VPDPB[SU,UU,SS]D[,S]—Multiply and Add Unsigned and Signed Bytes With and Without Saturation . . . . . . . . . . . . . . . . 2-41
VPDPW[SU,US,UU]D[,S]—Multiply and Add Unsigned and Signed Words With and Without Saturation . . . . . . . . . . . . . . . 2-44
VPMADD52HUQ—Packed Multiply of Unsigned 52-Bit Integers and Add the High 52-Bit Products to Qword
Accumulators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-47
VPMADD52LUQ—Packed Multiply of Unsigned 52-Bit Integers and Add the Low 52-Bit Products to Qword
Accumulators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-48
VSHA512MSG1—Perform an Intermediate Calculation for the Next Four SHA512 Message Qwords . . . . . . . . . . . . . . . 2-49
VSHA512MSG2—Perform a Final Calculation for the Next Four SHA512 Message Qwords . . . . . . . . . . . . . . . . . . . . . . . . 2-50
VSHA512RNDS2—Perform Two Rounds of SHA512 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-51
VSM3MSG1—Perform Initial Calculation for the Next Four SM3 Message Words. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-53
VSM3MSG2—Perform Final Calculation for the Next Four SM3 Message Words . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-55
VSM3RNDS2—Perform Two Rounds of SM3 Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-57
VSM4KEY4—Perform Four Rounds of SM4 Key Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-59
VSM4RNDS4—Performs Four Rounds of SM4 Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-62
WRMSRLIST—Write List of Model Specific Registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-64
WRMSRNS—Non-Serializing Write to Model Specific Register. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-67
CHAPTER 3
INTEL® AMX INSTRUCTION SET REFERENCE, A-Z
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-1
3.1.1 Tile Architecture Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-3
3.1.2 TMUL Architecture Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-4
3.1.3 Handling of Tile Row and Column Limits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-5
3.1.4 Exceptions and Interrupts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3-5
3.2 Operand Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
3.3 Implementation Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-5
3.4 Helper Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-6
3.5 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
3.6 Exception Classes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-7
3.7 Instruction Set Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-9
TCMMIMFP16PS/TCMMRLFP16PS—Matrix Multiplication of Complex Tiles Accumulated into Packed Single Precision
Tile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-10
TDPFP16PS—Dot Product of FP16 Tiles Accumulated into Packed Single Precision Tile. . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-13
CHAPTER 4
UC-LOCK DISABLE
4.1 Features to Disable Bus Locks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
4.2 UC-Lock Disable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-1
CHAPTER 5
INTEL® RESOURCE DIRECTOR TECHNOLOGY FEATURE UPDATES
5.1 Intel® RDT Feature Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
5.1.1 Intel® RDT on the 3rd generation Intel® Xeon® Scalable Processor Family. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
5.1.2 Intel® RDT on Intel Atom® Processors, Including the P5000 Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
5.1.3 Intel® RDT in Future Processors Based on Sapphire Rapids Server Microarchitecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-1
5.1.4 Intel® RDT in Processors Based on Emerald Rapids Server Microarchitecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
5.1.5 Future Intel® RDT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
5.2 Enumerable Memory Bandwidth Monitoring Counter Width. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
5.2.1 Memory Bandwidth Monitoring (MBM) Enabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
5.2.2 Augmented MBM Enumeration and MSR Interfaces for Extensible Counter Width . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
5.3 Second Generation Memory Bandwidth Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
5.3.1 Second Generation MBA Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
5.3.2 Second Generation MBA Software-Visible Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4
5.4 Third Generation Memory Bandwidth Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
5.4.1 Third Generation MBA Hardware Changes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
5.4.2 Third Generation MBA Software-Visible Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
5.5 Future MBA Enhancements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-5
CHAPTER 6
LINEAR ADDRESS MASKING (LAM)
6.1 Enumeration, Enabling, and Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
6.2 Treatment of Data Accesses with LAM Active for User Pointers. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-1
6.3 Treatment of Data Accesses with LAM Active for Supervisor Pointers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
6.4 Canonicality Checking for Data Addresses Written to Control Registers and MSRs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
6.5 Paging Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
6.6 VMX Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
6.6.1 Guest Linear Address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-4
6.6.2 VM-Entry Checking of Values of CR3 and CR4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
6.6.3 CR3-Target Values. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
6.6.4 Hypervisor-Managed Linear Address Translation (HLAT) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
6.7 Debug and Tracing Interactions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
6.7.1 Debug Registers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
6.7.2 Intel® Processor Trace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
6.8 Intel® SGX Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-5
6.9 System Management Mode (SMM) Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-6
CHAPTER 7
CODE PREFETCH INSTRUCTION UPDATES
PREFETCHh—Prefetch Data or Code Into Caches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-1
CHAPTER 8
NEXT GENERATION PERFORMANCE MONITORING UNIT (PMU)
8.1 New Enumeration Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-1
CHAPTER 9
LINEAR ADDRESS SPACE SEPARATION (LASS)
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-1
9.2 Enumeration and Enabling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-1
9.3 Operation of Linear-Address Space Separation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-1
9.3.1 Data Accesses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-2
9.3.2 Instruction Fetches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-2
CHAPTER 10
REMOTE ATOMIC OPERATIONS IN INTEL ARCHITECTURE
10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-1
10.2 Instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-1
10.3 Alignment Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-1
10.4 Memory Ordering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
10.5 Memory Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
10.6 Write Combining Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
10.7 Performance Expectations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
10.7.1 Interaction Between RAO and Other Accesses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
10.7.2 Updates of Contended Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
10.7.3 Updates of Uncontended Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
10.8 Examples. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-4
10.8.1 Histogram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-4
10.8.2 Interrupt/Event Handler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-4
CHAPTER 11
TOTAL STORAGE ENCRYPTION IN INTEL ARCHITECTURE
11.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1
11.1.1 Key Programming Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1
11.1.1.1 Key Wrapping Support: PBNDKB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1
11.1.2 Unwrapping and Hardware Key Programming Support: PCONFIG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1
11.2 Enumeration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1
11.2.1 CPUID Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1
11.2.1.1 PCONFIG CPUID Leaf Extended to Support Total Storage Encryption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-1
11.2.2 Total Storage Encryption Capability MSR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
11.3 VMX Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
11.3.1 Changes to VMCS Fields. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
11.3.2 Changes to VMX Capability MSRs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
11.3.3 Changes to VM Entry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
FIGURES
PAGE
Figure 1-1. Version Information Returned by CPUID in EAX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-30
Figure 1-2. Feature Information Returned in the ECX Register . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-32
Figure 1-3. Feature Information Returned in the EDX Register . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-34
Figure 1-4. Determination of Support for the Processor Brand String . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-42
Figure 1-5. Algorithm for Extracting Maximum Processor Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-43
Figure 1-6. Comparison of BF16 to FP16 and FP32 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-52
Figure 3-1. Intel® AMX Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-2
Figure 3-2. The TMUL Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-3
Figure 3-3. Matrix Multiply C+= A*B. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-4
Figure 5-1. Second Generation MBA, Including a Fast-Responding Hardware Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-4
Figure 6-1. Canonicality Check When LAM48 is Enabled for User Pointers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
Figure 6-2. Canonicality Check When LAM57 is Enabled for User Pointers with 5-Level Paging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
Figure 6-3. Canonicality Check When LAM57 is Enabled for User Pointers with 4-Level Paging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Figure 6-4. Canonicality Check When LAM57 is Enabled for Supervisor Pointers with 5-Level Paging . . . . . . . . . . . . . . . . . . . . . . . . 6-3
Figure 6-5. Canonicality Check When LAM48 is Enabled for Supervisor Pointers with 4-Level Paging . . . . . . . . . . . . . . . . . . . . . . . . 6-4
Figure 8-1. Layout of the MSR_PEBS_DATA_CFG Register . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-6
CHAPTER 1
FUTURE INTEL® ARCHITECTURE INSTRUCTION EXTENSIONS AND FEATURES
Table 1-2. Recent Instruction Set Extensions / Features Introduction in Intel® 64 and IA-32 Processors1
Instruction Set Architecture / Feature Introduction
Direct stores: MOVDIRI, MOVDIR64B Tremont, Tiger Lake, Sapphire Rapids
AVX512_BF16 Cooper Lake, Sapphire Rapids
CET: Control-flow Enforcement Technology Tiger Lake, Sapphire Rapids
AVX512_VP2INTERSECT Tiger Lake (not currently supported in any other processors)
Enqueue Stores: ENQCMD and ENQCMDS Sapphire Rapids, Sierra Forest, Grand Ridge
CLDEMOTE Tremont, Sapphire Rapids
PTWRITE Goldmont Plus, Alder Lake, Sapphire Rapids
User Wait: TPAUSE, UMONITOR, UMWAIT Tremont, Alder Lake, Sapphire Rapids
Architectural LBRs Alder Lake, Sapphire Rapids, Sierra Forest, Grand Ridge
HLAT Alder Lake, Sapphire Rapids, Sierra Forest, Grand Ridge
SERIALIZE Alder Lake, Sapphire Rapids, Sierra Forest, Grand Ridge
Intel® TSX Suspend Load Address Tracking (TSXLDTRK) Sapphire Rapids
Intel® Advanced Matrix Extensions (Intel® AMX) Sapphire Rapids
Includes CPUID Leaf 1EH, “TMUL Information Main Leaf,” and
CPUID bits AMX-BF16, AMX-TILE, and AMX-INT8.
AVX-VNNI Alder Lake2, Sapphire Rapids, Sierra Forest, Grand Ridge
User Interrupts (UINTR) Sapphire Rapids, Sierra Forest, Grand Ridge, Arrow Lake, Lunar Lake
Intel® Trust Domain Extensions (Intel® TDX)3 Future Processors
Supervisor Memory Protection Keys (PKS)4 Alder Lake, Sapphire Rapids
Linear Address Masking (LAM) Sierra Forest, Grand Ridge, Arrow Lake, Lunar Lake
IPI Virtualization Sapphire Rapids, Sierra Forest, Grand Ridge, Arrow Lake, Lunar Lake
RAO-INT Grand Ridge
PREFETCHIT0/1 Granite Rapids
AMX-FP16 Granite Rapids
CMPCCXADD Sierra Forest, Grand Ridge, Arrow Lake, Lunar Lake
AVX-IFMA Sierra Forest, Grand Ridge, Arrow Lake, Lunar Lake
AVX-NE-CONVERT Sierra Forest, Grand Ridge, Arrow Lake, Lunar Lake
AVX-VNNI-INT8 Sierra Forest, Grand Ridge, Arrow Lake, Lunar Lake
RDMSRLIST/WRMSRLIST/WRMSRNS Sierra Forest, Grand Ridge
Linear Address Space Separation (LASS) Sierra Forest, Grand Ridge, Arrow Lake, Lunar Lake
Virtualization of the IA32_SPEC_CTRL MSR Sapphire Rapids, Sierra Forest, Grand Ridge
UC-Lock Disable via CPUID Enumeration Sierra Forest, Grand Ridge
LBR Event Logging Sierra Forest, Grand Ridge, Arrow Lake S (06_C6H), Lunar Lake
AMX-COMPLEX Granite Rapids D (06_AEH)
AVX-VNNI-INT16 Arrow Lake S (06_C6H), Lunar Lake
SHA512 Arrow Lake S (06_C6H), Lunar Lake
SM3 Arrow Lake S (06_C6H), Lunar Lake
SM4 Arrow Lake S (06_C6H), Lunar Lake
UIRET flexibly updates UIF Sierra Forest, Grand Ridge, Arrow Lake, Lunar Lake
Total Storage Encryption (TSE) and the PBNDKB instruction Lunar Lake
NOTES:
1. For Intel® product specifications, features, the compatibility quick reference guide, and the code name decoder, visit:
https://ark.intel.com/content/www/us/en/ark.html
2. Alder Lake Intel Hybrid Technology will not support Intel® AVX-512. ISA features such as Intel® AVX, AVX-VNNI, Intel® AVX2, and
UMONITOR/UMWAIT/TPAUSE are supported.
3. Details on Intel® Trust Domain Extensions can be found here:
https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html.
4. Details on Supervisor Memory Protection Keys (PKS) can be found in the Intel® 64 and IA-32 Architectures Software Developer’s
Manual, Volume 3A.
CPUID—CPU Identification
Opcode | Instruction | 64-Bit Mode | Compat/Leg Mode | Description
0F A2 | CPUID | Valid | Valid | Returns processor identification and feature information to the EAX, EBX, ECX, and EDX registers, as determined by input entered in EAX (in some cases, ECX as well).
Description
The ID flag (bit 21) in the EFLAGS register indicates support for the CPUID instruction. If a software procedure can
set and clear this flag, the processor executing the procedure supports the CPUID instruction. This instruction oper-
ates the same in non-64-bit modes and 64-bit mode.
CPUID returns processor identification and feature information in the EAX, EBX, ECX, and EDX registers.1 The
instruction’s output is dependent on the contents of the EAX register upon execution (in some cases, ECX as well).
For example, the following pseudocode loads EAX with 00H and causes CPUID to return a Maximum Return Value
and the Vendor Identification String in the appropriate registers:
1. On Intel 64 processors, CPUID clears the high 32 bits of the RAX/RBX/RCX/RDX registers in all modes.
2. CPUID leaf 1FH is a preferred superset to leaf 0BH. Intel recommends first checking for the existence of CPUID leaf 1FH before using
leaf 0BH.
EDX Bits 03-00: Number of C0* sub C-states supported using MWAIT
Bits 07-04: Number of C1* sub C-states supported using MWAIT
Bits 11-08: Number of C2* sub C-states supported using MWAIT
Bits 15-12: Number of C3* sub C-states supported using MWAIT
Bits 19-16: Number of C4* sub C-states supported using MWAIT
Bits 23-20: Number of C5* sub C-states supported using MWAIT
Bits 27-24: Number of C6* sub C-states supported using MWAIT
Bits 31-28: Number of C7* sub C-states supported using MWAIT
NOTE:
* The C0 through C7 states defined for the MWAIT extension are processor-specific C-states, not
ACPI C-states.
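The eight nibble fields above can be extracted with simple shifts and masks. A minimal sketch; the sample EDX value below is fabricated, not read from hardware:

```c
#include <stdio.h>

/* Decode CPUID.05H:EDX. Each 4-bit field reports the number of Cn*
   sub C-states supported using MWAIT (n = field index, 0..7). */
unsigned mwait_sub_cstates(unsigned edx, unsigned n)
{
    return (edx >> (4 * n)) & 0xF;   /* bits 4n+3 : 4n */
}

void print_sub_cstates(unsigned edx)
{
    for (unsigned n = 0; n < 8; n++)
        printf("C%u* sub C-states: %u\n", n, mwait_sub_cstates(edx, n));
}
```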
EAX Bits 04-00: Length of the capacity bit mask for the corresponding ResID using minus-one notation.
Bits 31-05: Reserved.
EBX Bits 31-00: Bit-granular map of isolation/contention of allocation units.
EAX Bits 11-00: Reports the maximum MBA throttling value supported for the corresponding ResID using
minus-one notation.
Bits 31-12: Reserved.
EBX Bits 31-00: Reserved.
ECX Bit 00: Per-thread MBA controls are supported.
Bit 01: Reserved.
Bit 02: Reports whether the response of the delay values is linear.
Bits 31-03: Reserved.
EDX Bits 15-00: Highest COS number supported for this ResID.
Bits 31-16: Reserved.
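Both of the fields above use minus-one notation, so software adds one back to the raw value. A small sketch of the decoding; the register values used are illustrative:

```c
/* Decode minus-one-notation fields from the RDT allocation sub-leaves. */
unsigned cat_cbm_length(unsigned eax)      /* cache allocation: EAX[4:0]   */
{
    return (eax & 0x1F) + 1;               /* capacity bit-mask length     */
}

unsigned mba_max_throttle(unsigned eax)    /* memory bandwidth: EAX[11:0]  */
{
    return (eax & 0xFFF) + 1;              /* maximum MBA throttling value */
}
```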
Intel® Software Guard Extensions Capability Enumeration Leaf, Sub-leaf 0 (Initial EAX Value = 12H, ECX = 0)
12H NOTES:
Leaf 12H sub-leaf 0 (ECX = 0) is supported if CPUID.(EAX=07H, ECX=0H):EBX[SGX] = 1.
EAX Bit 00: SGX1. If 1, indicates Intel SGX supports the collection of SGX1 leaf functions.
Bit 01: SGX2. If 1, indicates Intel SGX supports the collection of SGX2 leaf functions.
Bits 04-02: Reserved.
Bit 05: If 1, indicates Intel SGX supports ENCLV instruction leaves EINCVIRTCHILD, EDECVIRTCHILD,
and ESETCONTEXT.
Bit 06: If 1, indicates Intel SGX supports ENCLS instruction leaves ETRACKC, ERDINFO, ELDBC, and
ELDUC.
Bit 07: If 1, indicates Intel SGX supports ENCLU instruction leaf EVERIFYREPORT2.
Bits 09-08: Reserved.
Bit 10: If 1, indicates Intel SGX supports ENCLS instruction leaf EUPDATESVN.
Bit 11: If 1, indicates Intel SGX supports ENCLU instruction leaf EDECCSSA.
Bits 31-12: Reserved.
EBX Bits 31-00: MISCSELECT. Bit vector of supported extended Intel SGX features.
ECX Bits 31-00: Reserved.
EDX Bits 07-00: MaxEnclaveSize_Not64. The maximum supported enclave size in non-64-bit mode is
2^(EDX[7:0]).
Bits 15-08: MaxEnclaveSize_64. The maximum supported enclave size in 64-bit mode is
2^(EDX[15:8]).
Bits 31-16: Reserved.
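The two enclave-size fields encode a power of two. A brief sketch of the conversion; the EDX value in the test is fabricated:

```c
/* Decode CPUID.12H (sub-leaf 0) EDX enclave-size fields. */
unsigned long long max_enclave_not64(unsigned edx)
{
    return 1ULL << (edx & 0xFF);          /* 2^EDX[7:0]  */
}

unsigned long long max_enclave_64(unsigned edx)
{
    return 1ULL << ((edx >> 8) & 0xFF);   /* 2^EDX[15:8] */
}
```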
Intel® SGX Attributes Enumeration Leaf, Sub-leaf 1 (Initial EAX Value = 12H, ECX = 1)
12H NOTES:
Leaf 12H sub-leaf 1 (ECX = 1) is supported if CPUID.(EAX=07H, ECX=0H):EBX[SGX] = 1.
EAX Bits 31-00: Reports the valid bits of SECS.ATTRIBUTES[31:0] that software can set with ECREATE.
EBX Bits 31-00: Reports the valid bits of SECS.ATTRIBUTES[63:32] that software can set with ECREATE.
ECX Bits 31-00: Reports the valid bits of SECS.ATTRIBUTES[95:64] that software can set with ECREATE.
EBX[19:00]: Bits 51:32 of the physical address of the base of the EPC section.
EBX[31:20]: Reserved.
EDX[19:00]: Bits 51:32 of the size of the corresponding EPC section within the Processor
Reserved Memory.
EDX[31:20]: Reserved.
Intel® Processor Trace Enumeration Main Leaf (Initial EAX Value = 14H, ECX = 0)
14H NOTES:
Leaf 14H main leaf (ECX = 0).
EAX Bits 31-00: Reports the maximum sub-leaf supported in leaf 14H.
While a processor may support the Processor Frequency Information leaf, fields that return a value
of zero are not supported.
System-On-Chip Vendor Attribute Enumeration Main Leaf (Initial EAX Value = 17H, ECX = 0)
17H NOTES:
Leaf 17H main leaf (ECX = 0).
Leaf 17H output depends on the initial value in ECX.
Leaf 17H sub-leaves 1 through 3 report the SOC Vendor Brand String.
Leaf 17H is valid if MaxSOCID_Index >= 3.
Leaf 17H sub-leaves 4 and above are reserved.
EAX Bits 31-00: MaxSOCID_Index. Reports the maximum input value of supported sub-leaf in leaf 17H.
EBX Bits 15-00: SOC Vendor ID.
Bit 16: IsVendorScheme. If 1, the SOC Vendor ID field is assigned via an industry standard
enumeration scheme. Otherwise, the SOC Vendor ID field is assigned by Intel.
Bits 31-17: Reserved = 0.
ECX Bits 31-00: Project ID. A unique number an SOC vendor assigns to its SOC projects.
EDX Bits 31-00: Stepping ID. A unique number within an SOC project that an SOC vendor assigns.
System-On-Chip Vendor Attribute Enumeration Sub-leaf (Initial EAX Value = 17H, ECX = 1..3)
17H EAX Bits 31-00: SOC Vendor Brand String. UTF-8 encoded string.
EBX Bits 31-00: SOC Vendor Brand String. UTF-8 encoded string.
ECX Bits 31-00: SOC Vendor Brand String. UTF-8 encoded string.
EDX Bits 31-00: SOC Vendor Brand String. UTF-8 encoded string.
NOTES:
Leaf 17H output depends on the initial value in ECX.
SOC Vendor Brand String is a UTF-8 encoded string padded with trailing bytes of 00H.
The complete SOC Vendor Brand String is constructed by concatenating in ascending order of
EAX:EBX:ECX:EDX and from the sub-leaf 1 fragment towards sub-leaf 3.
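The concatenation rule above can be sketched as follows. The register values in the test are fabricated for illustration, and the code assumes the twelve sub-leaf values are presented in EAX:EBX:ECX:EDX order, sub-leaf 1 first:

```c
#include <string.h>

/* Rebuild the SOC Vendor Brand String from the twelve 32-bit values
   returned by leaf 17H sub-leaves 1..3.  The string is 00H-padded. */
void soc_brand_string(const unsigned regs[12], char out[49])
{
    for (int i = 0; i < 12; i++)
        memcpy(out + 4 * i, &regs[i], 4);  /* 4 UTF-8 bytes per register */
    out[48] = '\0';
}
```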
System-On-Chip Vendor Attribute Enumeration Sub-leaves (Initial EAX Value = 17H, ECX > MaxSOCID_Index)
17H NOTES:
Leaf 17H output depends on the initial value in ECX.
** CPUID leaf 04H provides details of deterministic cache parameters, including the L2 cache in sub-leaf 2.
80000007H EAX Reserved = 0
EBX Reserved = 0
ECX Reserved = 0
EDX Bits 07-00: Reserved = 0
Bit 08: Invariant TSC available if 1
Bits 31-09: Reserved = 0
80000008H EAX Virtual/Physical Address size
Bits 07-00: #Physical Address Bits*
Bits 15-08: #Virtual Address Bits
Bits 31-16: Reserved = 0
EBX Bits 08-00: Reserved = 0
Bit 09: WBNOINVD is available if 1
Bits 31-10: Reserved = 0
ECX Reserved = 0
EDX Reserved = 0
NOTES:
* If CPUID.80000008H:EAX[7:0] is supported, the maximum physical address number supported
should come from this field.
INPUT EAX = 0H: Returns CPUID’s Highest Value for Basic Processor Information and the Vendor Identification
String
When CPUID executes with EAX set to 0H, the processor returns the highest value the CPUID recognizes for
returning basic processor information. The value is returned in the EAX register and is processor specific.
A vendor identification string is also returned in EBX, EDX, and ECX. For Intel processors, the string is
“GenuineIntel” and is expressed:
EBX := 756e6547h (* “Genu”, with G in the low 4 bits of BL *)
EDX := 49656e69h (* “ineI”, with i in the low 4 bits of DL *)
ECX := 6c65746eh (* “ntel”, with n in the low 4 bits of CL *)
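Because the processor stores bytes little-endian, the three register values reassemble into the printable string directly. A brief sketch (a little-endian host, matching the processor's own byte order, is assumed):

```c
#include <string.h>

/* Reassemble the 12-byte vendor identification string from the
   EBX, EDX, ECX values returned by CPUID leaf 0. */
void vendor_string(unsigned ebx, unsigned edx, unsigned ecx, char out[13])
{
    memcpy(out,     &ebx, 4);   /* "Genu" */
    memcpy(out + 4, &edx, 4);   /* "ineI" */
    memcpy(out + 8, &ecx, 4);   /* "ntel" */
    out[12] = '\0';
}
```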
INPUT EAX = 80000000H: Returns CPUID’s Highest Value for Extended Processor Information
When CPUID executes with EAX set to 80000000H, the processor returns the highest value the processor recognizes
for returning extended processor information. The value is returned in the EAX register and is processor specific.
Figure: Version Information Returned by CPUID.01H in EAX. Extended Family ID (bits 27:20), Extended Model ID (bits 19:16), Processor Type (bits 13:12), Family ID (bits 11:8), Model (bits 7:4), Stepping ID (bits 3:0); bits 31:28 and 15:14 are reserved.
NOTE
See "Caching Translation Information" in Chapter 4, “Paging,” in the Intel® 64 and IA-32 Architec-
tures Software Developer’s Manual, Volume 3A, and Chapter 20 in the Intel® 64 and IA-32 Archi-
tectures Software Developer’s Manual, Volume 1, for information on identifying earlier IA-32
processors.
The Extended Family ID needs to be examined only when the Family ID is 0FH. Integrate the fields into a display
using the following rule:
IF Family_ID ≠ 0FH
THEN Displayed_Family = Family_ID;
ELSE Displayed_Family = Extended_Family_ID + Family_ID;
FI;
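The rule above translates directly to code. A minimal sketch operating on the leaf-01H EAX value:

```c
/* Compute Displayed_Family from CPUID.01H:EAX per the rule above. */
unsigned displayed_family(unsigned eax)
{
    unsigned family_id     = (eax >> 8)  & 0x0F;   /* EAX[11:8]  */
    unsigned ext_family_id = (eax >> 20) & 0xFF;   /* EAX[27:20] */
    if (family_id != 0x0F)
        return family_id;                /* Extended Family ID ignored */
    return ext_family_id + family_id;
}
```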
NOTE
Software must confirm that a processor feature is present using feature flags returned by CPUID
prior to using the feature. Software should not depend on future offerings retaining all features.
Figure 1-2. Feature Information Returned in the ECX Register
Bit 31: Reserved (always returns 0)
Bit 30: RDRAND
Bit 29: F16C
Bit 28: AVX
Bit 27: OSXSAVE
Bit 26: XSAVE
Bit 25: AES
Bit 24: TSC-Deadline
Bit 23: POPCNT
Bit 22: MOVBE
Bit 21: x2APIC
Bit 20: SSE4_2 — SSE4.2
Bit 19: SSE4_1 — SSE4.1
Bit 18: DCA — Direct Cache Access
Bit 17: PCID — Process-context Identifiers
Bit 16: Reserved
Bit 15: PDCM — Perf/Debug Capability MSR
Bit 14: xTPR Update Control
Bit 13: CMPXCHG16B
Bit 12: FMA — Fused Multiply Add
Bit 11: SDBG
Bit 10: CNXT-ID — L1 Context ID
Bit 09: SSSE3 — SSSE3 Extensions
Bit 08: TM2 — Thermal Monitor 2
Bit 07: EST — Enhanced Intel SpeedStep® Technology
Bit 06: SMX — Safer Mode Extensions
Bit 05: VMX — Virtual Machine Extensions
Bit 04: DS-CPL — CPL Qualified Debug Store
Bit 03: MONITOR — MONITOR/MWAIT
Bit 02: DTES64 — 64-bit DS Area
Bit 01: PCLMULQDQ — Carryless Multiplication
Bit 00: SSE3 — SSE3 Extensions
Figure 1-3. Feature Information Returned in the EDX Register (reserved bits return 0)
INPUT EAX = 02H: Cache and TLB Information Returned in EAX, EBX, ECX, EDX
When CPUID executes with EAX set to 02H, the processor returns information about the processor’s internal caches
and TLBs in the EAX, EBX, ECX, and EDX registers.
The encoding is as follows:
• The least-significant byte in register EAX (register AL) indicates the number of times the CPUID instruction
must be executed with an input value of 02H to get a complete description of the processor’s caches and TLBs.
The first member of the family of Pentium 4 processors will return a 01H.
• The most significant bit (bit 31) of each register indicates whether the register contains valid information (set
to 0) or is reserved (set to 1).
• If a register contains valid information, the information is contained in 1 byte descriptors. Table 1-7 shows the
encoding of these descriptors. Note that the order of descriptors in the EAX, EBX, ECX, and EDX registers is not
defined; that is, specific bytes are not designated to contain descriptors for specific cache or TLB types. The
descriptors may appear in any order.
Table 1-7. Encoding of Cache and TLB Descriptors
Descriptor Value Cache or TLB Description
00H Null descriptor
01H Instruction TLB: 4 KByte pages, 4-way set associative, 32 entries
02H Instruction TLB: 4 MByte pages, 4-way set associative, 2 entries
03H Data TLB: 4 KByte pages, 4-way set associative, 64 entries
04H Data TLB: 4 MByte pages, 4-way set associative, 8 entries
05H Data TLB1: 4 MByte pages, 4-way set associative, 32 entries
06H 1st-level instruction cache: 8 KBytes, 4-way set associative, 32 byte line size
08H 1st-level instruction cache: 16 KBytes, 4-way set associative, 32 byte line size
0AH 1st-level data cache: 8 KBytes, 2-way set associative, 32 byte line size
0BH Instruction TLB: 4 MByte pages, 4-way set associative, 4 entries
0CH 1st-level data cache: 16 KBytes, 4-way set associative, 32 byte line size
22H 3rd-level cache: 512 KBytes, 4-way set associative, 64 byte line size, 2 lines per sector
23H 3rd-level cache: 1 MBytes, 8-way set associative, 64 byte line size, 2 lines per sector
25H 3rd-level cache: 2 MBytes, 8-way set associative, 64 byte line size, 2 lines per sector
EAX 66 5B 50 01H
EBX 0H
ECX 0H
EDX 00 7A 70 00H
Which means:
• The least-significant byte (byte 0) of register EAX is set to 01H. This indicates that CPUID needs to be executed
once with an input value of 2 to retrieve complete information about caches and TLBs.
• The most-significant bit of all four registers (EAX, EBX, ECX, and EDX) is set to 0, indicating that each register
contains valid 1-byte descriptors.
• Bytes 1, 2, and 3 of register EAX indicate that the processor has:
— 50H - a 64-entry instruction TLB, for mapping 4-KByte and 2-MByte or 4-MByte pages.
— 5BH - a 64-entry data TLB, for mapping 4-KByte and 4-MByte pages.
— 66H - an 8-KByte 1st level data cache, 4-way set associative, with a 64-Byte cache line size.
• The descriptors in registers EBX and ECX are valid, but contain NULL descriptors.
• Bytes 0, 1, 2, and 3 of register EDX indicate that the processor has:
— 00H - NULL descriptor.
— 70H - Trace cache: 12 K-μop, 8-way set associative.
— 7AH - a 256-KByte 2nd level cache, 8-way set associative, with a sectored, 64-byte cache line size.
— 00H - NULL descriptor.
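Putting the decoding rules together, here is a sketch that collects the valid descriptor bytes from the four registers; the test uses the worked example's register values:

```c
/* Collect leaf-02H descriptor bytes: skip AL (the iteration count),
   any register with bit 31 set (reserved), and 00H null descriptors. */
int collect_descriptors(const unsigned regs[4], unsigned char out[15])
{
    int n = 0;
    for (int r = 0; r < 4; r++) {
        if (regs[r] & 0x80000000u)                    /* register reserved */
            continue;
        for (int b = (r == 0) ? 1 : 0; b < 4; b++) {  /* byte 0 of EAX = AL */
            unsigned char d = (unsigned char)(regs[r] >> (8 * b));
            if (d != 0x00)                            /* drop null descriptors */
                out[n++] = d;
        }
    }
    return n;
}
```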
INPUT EAX = 04H: Returns Deterministic Cache Parameters for Each Level
When CPUID executes with EAX set to 04H and ECX contains an index value, the processor returns encoded data
that describe a set of deterministic cache parameters (for the cache level associated with the input in ECX). Valid
index values start from 0.
Software can enumerate the deterministic cache parameters for each level of the cache hierarchy starting with an
index value of 0, until the reported cache type field is 0. The architecturally defined fields reported by the
deterministic cache parameters are documented in Table 1-3.
The CPUID leaf 4 also reports data that can be used to derive the topology of processor cores in a physical package.
This information is constant for all valid index values. Software can query the raw data reported by executing
CPUID with EAX=04H and ECX=0H and use it as part of the topology enumeration algorithm described in Chapter
9, “Multiple-Processor Management,” in the Intel® 64 and IA-32 Architectures Software Developer’s Manual,
Volume 3A.
When CPUID executes with EAX set to 07H and ECX = n (n ≥ 1 and no greater than the maximum sub-leaf reported in
CPUID.(EAX=07H, ECX=0H).EAX), the processor returns information about extended feature flags. See Table
1-3. In sub-leaf 0, EAX reports the number of sub-leaves, while EBX, ECX, and EDX all contain extended
feature flags.
See Table 1-3. Software can use the forward-extendable technique depicted below to query the valid sub-leaves
and obtain size and offset information for each processor extended state save area:
INPUT EAX = 0FH: Returns Intel Resource Director Technology (Intel RDT) Monitoring Enumeration Information
When CPUID executes with EAX set to 0FH and ECX = 0, the processor returns information about the bit-vector
representation of QoS monitoring resource types supported in the processor, and the maximum range of RMID
values the processor can use to monitor any of the supported resource types. Each bit set, starting from bit 1,
corresponds to a specific resource type. The bit position corresponds to the sub-leaf index (or ResID) that
software must use to query the QoS monitoring capability available for that type. See Table 1-3.
When CPUID executes with EAX set to 0FH and ECX = n (n >= 1, and is a valid ResID), the processor returns infor-
mation software can use to program IA32_PQR_ASSOC, IA32_QM_EVTSEL MSRs before reading QoS data from the
IA32_QM_CTR MSR.
INPUT EAX = 10H: Returns Intel Resource Director Technology (Intel RDT) Allocation Enumeration Information
When CPUID executes with EAX set to 10H and ECX = 0, the processor returns information about the bit-vector
representation of QoS enforcement resource types supported in the processor. Each bit set, starting from bit 1,
corresponds to a specific resource type. The bit position corresponds to the sub-leaf index (or ResID) that
software must use to query the QoS enforcement capability available for that type. See Table 1-3.
When CPUID executes with EAX set to 10H and ECX = n (n >= 1, and is a valid ResID), the processor returns infor-
mation about available classes of service and range of QoS mask MSRs that software can use to configure each
class of services using capability bit masks in the QoS Mask registers, IA32_resourceType_Mask_n.
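Scanning the sub-leaf 0 bit vector for valid ResIDs can be sketched as follows; the mask value in the test is illustrative:

```c
/* List the ResIDs (sub-leaf indices) whose bits are set in the
   leaf-10H, ECX=0 resource-type bit vector; bit 0 is not used. */
int supported_resids(unsigned ebx, int out[31])
{
    int n = 0;
    for (int bit = 1; bit < 32; bit++)
        if (ebx & (1u << bit))
            out[n++] = bit;       /* query details via sub-leaf 'bit' */
    return n;
}
```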
INPUT EAX = 15H: Returns Time Stamp Counter and Nominal Core Crystal Clock Information
When CPUID executes with EAX set to 15H and ECX = 0H, the processor returns information about Time Stamp
Counter and Core Crystal Clock. See Table 1-3.
Figure: Algorithm for extracting the maximum qualified frequency from the processor brand string.
1. Execute CPUID with EAX = 0x80000000 to verify that the extended brand-string functions are supported; report an error if they are not.
2. Scan the brand string in reverse for the substring "zHM", "zHG", or "zHT" (i.e., "MHz", "GHz", or "THz" read backward); report an error if no substring matches.
3. Determine "Multiplier": 1 x 10^6 if "zHM", 1 x 10^9 if "zHG", 1 x 10^12 if "zHT".
4. Scan the digits preceding the matched substring until a blank, reverse them, and convert them to a decimal value "Freq" (digits "ZY.X" yield "Freq" = X.YZ).
5. Maximum Qualified Frequency = "Freq" x "Multiplier".
NOTE
When a frequency is given in a brand string, it is the maximum qualified frequency of the processor,
not the frequency at which the processor is currently running.
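A simplified, forward-scanning variant of the brand-string algorithm (the reference algorithm scans the string in reverse); the brand string used in the test is fabricated:

```c
#include <stdlib.h>
#include <string.h>

/* Extract the maximum qualified frequency, in Hz, from a processor
   brand string; returns 0.0 if no frequency substring is present. */
double brand_string_hz(const char *brand)
{
    const struct { const char *sfx; double mult; } tab[] = {
        { "MHz", 1e6 }, { "GHz", 1e9 }, { "THz", 1e12 },
    };
    for (int i = 0; i < 3; i++) {
        const char *p = strstr(brand, tab[i].sfx);
        if (!p)
            continue;
        const char *q = p;        /* back up over the "X.YZ" digits */
        while (q > brand && (q[-1] == '.' || (q[-1] >= '0' && q[-1] <= '9')))
            q--;
        return atof(q) * tab[i].mult;
    }
    return 0.0;
}
```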
Operation
CASE (EAX) OF
EAX = 0:
EAX := Highest basic function input value understood by CPUID;
EBX := Vendor identification string;
EDX := Vendor identification string;
ECX := Vendor identification string;
BREAK;
EAX = 1H:
EAX[3:0] := Stepping ID;
EAX[7:4] := Model;
EAX[11:8] := Family;
EAX[13:12] := Processor type;
EAX[15:14] := Reserved;
EAX[19:16] := Extended Model;
EAX[27:20] := Extended Family;
EAX[31:28] := Reserved;
EBX[7:0] := Brand Index; (* Reserved if the value is zero. *)
EBX[15:8] := CLFLUSH Line Size;
EBX[23:16] := Reserved; (* Number of threads enabled = 2 if MT enable fuse set. *)
EBX[31:24] := Initial APIC ID;
ECX := Feature flags; (* See Figure 1-2. *)
EDX := Feature flags; (* See Figure 1-3. *)
BREAK;
EAX = 2H:
EAX := Cache and TLB information;
EBX := Cache and TLB information;
ECX := Cache and TLB information;
EDX := Cache and TLB information;
BREAK;
EAX = 3H:
EAX := Reserved;
EBX := Reserved;
ECX := ProcessorSerialNumber[31:0];
(* Pentium III processors only, otherwise reserved. *)
EDX := ProcessorSerialNumber[63:32];
(* Pentium III processors only, otherwise reserved. *)
BREAK;
EAX = 4H:
EAX := Deterministic Cache Parameters Leaf; (* See Table 1-3. *)
EBX := Deterministic Cache Parameters Leaf;
ECX := Deterministic Cache Parameters Leaf;
EDX := Deterministic Cache Parameters Leaf;
BREAK;
EAX = 5H:
EAX := MONITOR/MWAIT Leaf; (* See Table 1-3. *)
EBX := MONITOR/MWAIT Leaf;
ECX := Reserved = 0;
EDX := Reserved = 0;
BREAK;
EAX = FH:
EAX := Platform Quality of Service Monitoring Enumeration Leaf; (* See Table 1-3. *)
EBX := Platform Quality of Service Monitoring Enumeration Leaf;
ECX := Platform Quality of Service Monitoring Enumeration Leaf;
EDX := Platform Quality of Service Monitoring Enumeration Leaf;
BREAK;
EAX = 10H:
EAX := Platform Quality of Service Enforcement Enumeration Leaf; (* See Table 1-3. *)
EBX := Platform Quality of Service Enforcement Enumeration Leaf;
ECX := Platform Quality of Service Enforcement Enumeration Leaf;
EDX := Platform Quality of Service Enforcement Enumeration Leaf;
BREAK;
EAX = 12H:
EAX := Intel SGX Enumeration Leaf; (* See Table 1-3. *)
EBX := Intel SGX Enumeration Leaf;
ECX := Intel SGX Enumeration Leaf;
EDX := Intel SGX Enumeration Leaf;
BREAK;
EAX = 14H:
EAX := Intel Processor Trace Enumeration Leaf; (* See Table 1-3. *)
EBX := Intel Processor Trace Enumeration Leaf;
ECX := Intel Processor Trace Enumeration Leaf;
EDX := Intel Processor Trace Enumeration Leaf;
BREAK;
EAX = 15H:
EAX := Time Stamp Counter and Core Crystal Clock Information Leaf; (* See Table 1-3. *)
EBX := Time Stamp Counter and Core Crystal Clock Information Leaf;
ECX := Time Stamp Counter and Core Crystal Clock Information Leaf;
EDX := Time Stamp Counter and Core Crystal Clock Information Leaf;
BREAK;
EAX = 16H:
EAX := Processor Frequency Information Enumeration Leaf; (* See Table 1-3. *)
EBX := Processor Frequency Information Enumeration Leaf;
ECX := Processor Frequency Information Enumeration Leaf;
EDX := Processor Frequency Information Enumeration Leaf;
BREAK;
EAX = 17H:
EAX := System-On-Chip Vendor Attribute Enumeration Leaf; (* See Table 1-3. *)
EBX := System-On-Chip Vendor Attribute Enumeration Leaf;
ECX := System-On-Chip Vendor Attribute Enumeration Leaf;
EDX := System-On-Chip Vendor Attribute Enumeration Leaf;
BREAK;
EAX = 18H:
EAX := Deterministic Address Translation Parameters Enumeration Leaf; (* See Table 1-3. *)
EBX := Deterministic Address Translation Parameters Enumeration Leaf;
ECX := Deterministic Address Translation Parameters Enumeration Leaf;
EDX := Deterministic Address Translation Parameters Enumeration Leaf;
BREAK;
EAX = 19H:
EAX := Key Locker Enumeration Leaf; (* See Table 1-3. *)
EBX := Key Locker Enumeration Leaf;
ECX := Reserved;
EDX := Reserved;
BREAK;
EAX = 80000001H:
EAX := Reserved;
EBX := Reserved;
ECX := Extended Feature Bits (* See Table 1-3.*);
EDX := Extended Feature Bits (* See Table 1-3. *);
BREAK;
EAX = 80000002H:
EAX := Processor Brand String;
EBX := Processor Brand String, continued;
ECX := Processor Brand String, continued;
EDX := Processor Brand String, continued;
BREAK;
EAX = 80000003H:
EAX := Processor Brand String, continued;
EBX := Processor Brand String, continued;
ECX := Processor Brand String, continued;
EDX := Processor Brand String, continued;
BREAK;
EAX = 80000004H:
EAX := Processor Brand String, continued;
EBX := Processor Brand String, continued;
ECX := Processor Brand String, continued;
EDX := Processor Brand String, continued;
BREAK;
EAX = 80000005H:
EAX := Reserved = 0;
EBX := Reserved = 0;
ECX := Reserved = 0;
EDX := Reserved = 0;
BREAK;
EAX = 80000006H:
EAX := Reserved = 0;
EBX := Reserved = 0;
ECX := Cache information;
EDX := Reserved = 0;
BREAK;
EAX = 80000007H:
EAX := Reserved = 0;
EBX := Reserved = 0;
ECX := Reserved = 0;
EDX := Miscellaneous feature flags;
BREAK;
EAX = 80000008H:
EAX := Address size information;
EBX := Miscellaneous feature flags;
ECX := Reserved = 0;
EDX := Reserved = 0;
BREAK;
DEFAULT: (* EAX = Value outside of recognized range for CPUID. *)
(* If the data for the highest basic information leaf depend on the ECX input value, ECX is honored. *)
EAX := Reserved; (* Information returned for highest basic information leaf. *)
Flags Affected
None.
(Concluding rows of the preceding table)
EVEX.b  InputSize  EVEX.W  Broadcast  N (VL=128)  N (VL=256)  N (VL=512)
1       64bit      1       {1tox}     8           8           8
Half Load+Op (Half Vector):
0       32bit      0       none       8           16          32
1       32bit      0       {1tox}     4           4           4
Table 1-11. EVEX DISP8*N for Instructions Not Affected by Embedded Broadcast
TupleType       InputSize  EVEX.W  N (VL=128)  N (VL=256)  N (VL=512)  Comment
Full Mem        N/A        N/A     16          32          64          Load/store or subDword full vector
Tuple1 Scalar1  8bit       N/A     1           1           1           1 Tuple
                16bit      N/A     2           2           2
                32bit      0       4           4           4
                64bit      1       8           8           8
Tuple1 Fixed    32bit      N/A     4           4           4           1 Tuple, memsize not affected by EVEX.W
                64bit      N/A     8           8           8
Half Mem        N/A        N/A     8           16          32          SubQword Conversion
Quarter Mem     N/A        N/A     4           8           16          SubDword Conversion
Eighth Mem      N/A        N/A     2           4           8           SubWord Conversion
Mem128          N/A        N/A     16          16          16          Shift count from memory
MOVDDUP         N/A        N/A     8           32          64          VMOVDDUP
NOTES:
1. Scalar.
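The N values above scale the encoded 8-bit displacement into an effective byte offset (DISP8*N). A one-line sketch of the computation:

```c
/* EVEX compressed displacement: effective offset = disp8 * N, where N
   comes from the tuple-type tables above (e.g., Half Mem at VL=512
   has N = 32). */
int evex_effective_disp(signed char disp8, int n)
{
    return (int)disp8 * n;
}
```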
CHAPTER 2
INSTRUCTION SET REFERENCE, A-Z
Instructions described in this document follow the general documentation convention established in the Intel® 64
and IA-32 Architectures Software Developer’s Manual Volume 2A. Additionally, some instructions use notation
conventions as described below.
In the instruction encoding, the MODRM byte is represented several ways depending on the role it plays. The
MODRM byte has 3 fields: a 2-bit MODRM.MOD field, a 3-bit MODRM.REG field, and a 3-bit MODRM.RM field. When all
bits of the MODRM byte have fixed values for an instruction, the 2-hex nibble value of that byte is presented after
the opcode in the encoding boxes on the instruction description pages. When only some fields of the MODRM byte
must contain fixed values, those values are specified as follows:
• If only the MODRM.MOD field must be 0b11, and the MODRM.REG and MODRM.RM fields are unrestricted, this is
denoted as 11:rrr:bbb. The rrr corresponds to the 3 bits of the MODRM.REG field and the bbb corresponds to
the 3 bits of the MODRM.RM field.
• If the MODRM.MOD field is constrained to be a value other than 0b11, i.e., it must be one of 0b00, 0b01, or
0b10, then we use the notation !(11).
• If for example only the MODRM.REG field had a specific required value, e.g., 0b101, that would be denoted as
mm:101:bbb.
NOTE
Historically, the Intel® 64 and IA-32 Architectures Software Developer’s Manual only specified the
MODRM.REG field restrictions with the notation /0 ... /7 and did not specify restrictions on the
MODRM.MOD and MODRM.RM fields in the encoding boxes.
AADD—Atomically Add
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F38 FC !(11):rrr:bbb AADD my, ry | A | V/V | RAO-INT | Atomically add my with ry and store the result in my.
Description
This instruction atomically adds the destination operand (first operand) and the source operand (second operand),
and then stores the result in the destination operand.
The destination operand is a memory location and the source operand is a register. In 64-bit mode, the instruction’s
default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-
R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. The destination operand must be
naturally aligned with respect to the data size, at a 4-byte boundary, or an 8-byte boundary if used with a REX.W
prefix in 64-bit mode.
This instruction requires that the destination operand has a write-back (WB) memory type and it is implemented
using the weakly-ordered memory consistency model of write combining (WC) memory type. Before the operation,
the cache line is written-back (if modified) and invalidated from the processor cache. When the operation
completes, the processor may optimize the cacheability of the destination address by writing the result only to
specific levels of the cache hierarchy. Because this instruction uses a weakly-ordered memory consistency model,
a fencing operation implemented with LFENCE, SFENCE, or MFENCE instruction should be used in conjunction with
AADD if a stronger ordering is required. However, note that AADD is not ordered with respect to a younger LFENCE,
as this instruction is not loading data from memory into the processor.
Any attempt to execute the AADD instruction inside an Intel TSX transaction will result in a transaction abort.
Operation
AADD dest, src
Flags Affected
None.
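AADD's operation is that of an atomic read-modify-write add that returns no value and sets no flags. As a functional model only (this is ordinary C11, not the RAO-INT instruction itself), relaxed ordering mirrors its weakly-ordered memory model:

```c
#include <stdatomic.h>

/* Functional model of AADD dest, src: atomically add src into the
   destination memory location; nothing is returned and no flags are
   set.  Insert explicit fences if stronger ordering is required. */
void aadd_model(_Atomic int *dest, int src)
{
    atomic_fetch_add_explicit(dest, src, memory_order_relaxed);
}
```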
AAND—Atomically AND
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
66 0F38 FC !(11):rrr:bbb AAND my, ry | A | V/V | RAO-INT | Atomically AND my with ry and store the result in my.
Description
This instruction atomically performs a bitwise AND operation of the destination operand (first operand) and the
source operand (second operand), and then stores the result in the destination operand.
The destination operand is a memory location and the source operand is a register. In 64-bit mode, the instruction’s
default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-
R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. The destination operand must be
naturally aligned with respect to the data size, at a 4-byte boundary, or an 8-byte boundary if used with a REX.W
prefix in 64-bit mode.
This instruction requires that the destination operand has a write-back (WB) memory type and it is implemented
using the weakly-ordered memory consistency model of write combining (WC) memory type. Before the operation,
the cache line is written-back (if modified) and invalidated from the processor cache. When the operation
completes, the processor may optimize the cacheability of the destination address by writing the result only to
specific levels of the cache hierarchy. Because this instruction uses a weakly-ordered memory consistency model,
a fencing operation implemented with LFENCE, SFENCE, or MFENCE instruction should be used in conjunction with
AAND if a stronger ordering is required. However, note that AAND is not ordered with respect to a younger LFENCE,
as this instruction is not loading data from memory into the processor.
Any attempt to execute the AAND instruction inside an Intel TSX transaction will result in a transaction abort.
Operation
AAND dest, src
Flags Affected
None.
AOR—Atomically OR
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
F2 0F38 FC !(11):rrr:bbb AOR my, ry | A | V/V | RAO-INT | Atomically OR my with ry and store the result in my.
Description
This instruction atomically performs a bitwise OR operation of the destination operand (first operand) and the
source operand (second operand), and then stores the result in the destination operand.
The destination operand is a memory location and the source operand is a register. In 64-bit mode, the instruction’s
default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-
R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. The destination operand must be
naturally aligned with respect to the data size, at a 4-byte boundary, or an 8-byte boundary if used with a REX.W
prefix in 64-bit mode.
This instruction requires that the destination operand has a write-back (WB) memory type and it is implemented
using the weakly-ordered memory consistency model of write combining (WC) memory type. Before the operation,
the cache line is written-back (if modified) and invalidated from the processor cache. When the operation
completes, the processor may optimize the cacheability of the destination address by writing the result only to
specific levels of the cache hierarchy. Because this instruction uses a weakly-ordered memory consistency model,
a fencing operation implemented with LFENCE, SFENCE, or MFENCE instruction should be used in conjunction with
AOR if a stronger ordering is required. However, note that AOR is not ordered with respect to a younger LFENCE, as
this instruction is not loading data from memory into the processor.
Any attempt to execute the AOR instruction inside an Intel TSX transaction will result in a transaction abort.
Operation
AOR dest, src
Flags Affected
None.
AXOR—Atomically XOR
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
F3 0F38 FC !(11):rrr:bbb AXOR my, ry | A | V/V | RAO-INT | Atomically XOR my with ry and store the result in my.
Description
This instruction atomically performs a bitwise XOR operation of the destination operand (first operand) and the
source operand (second operand), and then stores the result in the destination operand.
The destination operand is a memory location and the source operand is a register. In 64-bit mode, the instruction’s
default operation size is 32 bits. Using a REX prefix in the form of REX.R permits access to additional registers (R8-
R15). Using a REX prefix in the form of REX.W promotes operation to 64 bits. The destination operand must be
naturally aligned with respect to the data size, at a 4-byte boundary, or an 8-byte boundary if used with a REX.W
prefix in 64-bit mode.
This instruction requires that the destination operand has a write-back (WB) memory type and it is implemented
using the weakly-ordered memory consistency model of write combining (WC) memory type. Before the operation,
the cache line is written-back (if modified) and invalidated from the processor cache. When the operation
completes, the processor may optimize the cacheability of the destination address by writing the result only to
specific levels of the cache hierarchy. Because this instruction uses a weakly-ordered memory consistency model,
a fencing operation implemented with LFENCE, SFENCE, or MFENCE instruction should be used in conjunction with
AXOR if a stronger ordering is required. However, note that AXOR is not ordered with respect to a younger LFENCE,
as this instruction is not loading data from memory into the processor.
Any attempt to execute the AXOR instruction inside an Intel TSX transaction will result in a transaction abort.
Operation
AXOR dest, src
Flags Affected
None.
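Functionally, AXOR behaves like an atomic fetch-XOR whose register result is discarded. The following C11 sketch models only the data transformation; the name `axor32` is illustrative, and relaxed atomics are merely the closest C11 analogue to AXOR's weak ordering (the cacheability hints have no C equivalent):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Semantic sketch of AXOR m32, r32: atomically XOR the register value
 * into the naturally aligned memory destination. AXOR itself is weakly
 * ordered; memory_order_relaxed is the closest C11 analogue. */
static inline void axor32(_Atomic uint32_t *dest, uint32_t src)
{
    atomic_fetch_xor_explicit(dest, src, memory_order_relaxed);
}
```

This is a behavioral model only: the hardware instruction additionally writes back and invalidates the cache line before the operation, which no C atomic expresses.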
VEX.128.66.0F38.W0 E6 !(11):rrr:bbb CMPBEXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If below or equal (CF=1 or ZF=1), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 E6 !(11):rrr:bbb CMPBEXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If below or equal (CF=1 or ZF=1), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 E2 !(11):rrr:bbb CMPBXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If below (CF=1), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 E2 !(11):rrr:bbb CMPBXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If below (CF=1), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 EE !(11):rrr:bbb CMPLEXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If less or equal (ZF=1 or SF≠OF), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 EE !(11):rrr:bbb CMPLEXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If less or equal (ZF=1 or SF≠OF), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 EC !(11):rrr:bbb CMPLXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If less (SF≠OF), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 EC !(11):rrr:bbb CMPLXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If less (SF≠OF), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 E7 !(11):rrr:bbb CMPNBEXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If not below or equal (CF=0 and ZF=0), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 E7 !(11):rrr:bbb CMPNBEXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If not below or equal (CF=0 and ZF=0), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 E3 !(11):rrr:bbb CMPNBXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If not below (CF=0), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 E3 !(11):rrr:bbb CMPNBXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If not below (CF=0), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 EF !(11):rrr:bbb CMPNLEXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If not less or equal (ZF=0 and SF=OF), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 EF !(11):rrr:bbb CMPNLEXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If not less or equal (ZF=0 and SF=OF), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 ED !(11):rrr:bbb CMPNLXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If not less (SF=OF), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 ED !(11):rrr:bbb CMPNLXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If not less (SF=OF), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 E1 !(11):rrr:bbb CMPNOXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If not overflow (OF=0), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 E1 !(11):rrr:bbb CMPNOXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If not overflow (OF=0), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 EB !(11):rrr:bbb CMPNPXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If not parity (PF=0), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 EB !(11):rrr:bbb CMPNPXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If not parity (PF=0), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 E9 !(11):rrr:bbb CMPNSXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If not sign (SF=0), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 E9 !(11):rrr:bbb CMPNSXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If not sign (SF=0), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 E5 !(11):rrr:bbb CMPNZXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If not zero (ZF=0), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 E5 !(11):rrr:bbb CMPNZXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If not zero (ZF=0), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 E0 !(11):rrr:bbb CMPOXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If overflow (OF=1), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 E0 !(11):rrr:bbb CMPOXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If overflow (OF=1), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 EA !(11):rrr:bbb CMPPXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If parity (PF=1), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 EA !(11):rrr:bbb CMPPXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If parity (PF=1), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 E8 !(11):rrr:bbb CMPSXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If sign (SF=1), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 E8 !(11):rrr:bbb CMPSXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If sign (SF=1), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
VEX.128.66.0F38.W0 E4 !(11):rrr:bbb CMPZXADD m32, r32, r32 | A | V/N.E. | CMPCCXADD | Compare value in r32 (second operand) with value in m32. If zero (ZF=1), add value from r32 (third operand) to m32 and write new value in m32. The second operand is always updated with the original value from m32.
VEX.128.66.0F38.W1 E4 !(11):rrr:bbb CMPZXADD m64, r64, r64 | A | V/N.E. | CMPCCXADD | Compare value in r64 (second operand) with value in m64. If zero (ZF=1), add value from r64 (third operand) to m64 and write new value in m64. The second operand is always updated with the original value from m64.
Description
This instruction compares the value from memory with the value of the second operand. If the specified condition
is met, the processor adds the third operand to the memory operand and writes the sum to memory; otherwise,
memory is unchanged by this instruction.
This instruction must have MODRM.MOD equal to 0, 1, or 2. The value 3 for MODRM.MOD is reserved and will cause
an invalid opcode exception (#UD).
The second operand is always updated with the original value of the memory operand. The EFLAGS condition flags
are updated from the result of the comparison. The instruction uses an implicit lock; it does not permit the use
of an explicit LOCK prefix.
Operation
CMPCCXADD srcdest1, srcdest2, src3
tmp1 := load lock srcdest1
tmp2 := tmp1 + src3
EFLAGS.CF,OF,SF,ZF,AF,PF := CMP tmp1, srcdest2
IF <condition>:
    srcdest1 := store unlock tmp2
ELSE:
    srcdest1 := store unlock tmp1
srcdest2 := tmp1
Flags Affected
The EFLAGS conditions are updated from the results of the comparison.
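As a behavioral sketch of one CMPccXADD variant, CMPBEXADD m32, r32, r32 (condition "below or equal": CF=1 or ZF=1 on CMP m32, r32, i.e., an unsigned comparison) can be modeled in C. The function name `cmpbexadd32` is illustrative only, and the real instruction performs the whole load-compare-add-store sequence atomically, which this sequential sketch does not:

```c
#include <stdint.h>

/* Behavioral model of CMPBEXADD m32, r32, r32. "Below or equal" on
 * CMP m32, r32 means the unsigned comparison m32 <= r32. Returns the
 * original memory value, which the instruction always writes back into
 * the second (register) operand. The hardware sequence is atomic. */
static uint32_t cmpbexadd32(uint32_t *mem, uint32_t cmp_val, uint32_t add_val)
{
    uint32_t orig = *mem;
    if (orig <= cmp_val)        /* condition met: commit the sum */
        *mem = orig + add_val;
    return orig;                /* second operand gets the original m32 */
}
```

The other fifteen condition codes differ only in the predicate applied to the comparison result.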
Exceptions
Exceptions Type 14; see Table 2-1 (protected and compatibility, virtual-8086, real-address, and 64-bit modes).
Description
The PBNDKB instruction allows software to bind information to a platform by encrypting it with a platform-specific
wrapping key. The encrypted data may later be used by the PCONFIG instruction to configure the total storage
encryption (TSE) engine.1
The instruction can be executed only in 64-bit mode. The registers RBX and RCX provide input information to the
instruction. Executions of PBNDKB may fail for platform-specific reasons. An execution reports failure by setting
the ZF flag and loading EAX with a non-zero failure reason; a successful execution clears ZF and EAX.
The instruction operates on 256-byte data structures called bind structures. It reads a bind structure at the linear
address in RBX and writes a modified bind structure to the linear address in RCX. The addresses in RBX and RCX
must be different from each other and must be 256-byte aligned.
The instruction encrypts a portion of the input bind structure and generates a MAC of parts of that structure. The
encrypted data and MAC are written out as part of the output bind structure.
The format of a bind structure is given in Table 2-1.
1. For details on Total Storage Encryption (TSE), see Chapter 11 of this document.
• BTDATA: This field contains additional control and data that are not encrypted. It has the following format:
— USER_SUPP_CHALLENGE (bytes 31:0): PBNDKB uses this value in the input bind structure to determine
the wrapping key (see below). It writes zero to this field in the output bind structure.
— KEY_GENERATION_CTRL (byte 32): PBNDKB uses this value in the input bind structure to determine
whether to randomize the keys being encrypted. The value must be 0 or 1 (otherwise, a #GP occurs).
— The remaining 95 bytes are reserved and must be zero.
PBNDKB determines a 256-bit wrapping key by computing an HMAC based on SHA-256 using a 256-bit platform-
specific key and the USER_SUPP_CHALLENGE in the BTDATA field in the input bind structure.
PBNDKB then uses the wrapping key and an AES GCM authenticated encryption function to encrypt BTENCDATA
and produce a MAC. The encryption function uses the following inputs:
• The 64-byte BTENCDATA to be encrypted (which may have been randomized; see above).
• The 256-bit wrapping key.
• The 96-bit IV randomly generated by PBNDKB.
• 176 bytes of additional authenticated data that are the concatenation of 8 bytes of zeroes, the IV, 28 bytes of
zeroes, and the BTDATA in the input bind structure.
• The length of the additional authenticated data (176).
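The 176-byte AAD layout described above (8 bytes of zeroes, the 96-bit IV, 28 bytes of zeroes, then the 128-byte BTDATA) can be sketched as a byte-composition helper. The function name `compose_aad` and the fixed-size buffer interface are assumptions for illustration; only the offsets and lengths come from the text:

```c
#include <stdint.h>
#include <string.h>

/* Compose the 176 bytes of additional authenticated data used by
 * PBNDKB's authenticated encryption: 8 bytes of zeroes, the 12-byte
 * (96-bit) IV, 28 bytes of zeroes, and the 128-byte BTDATA from the
 * input bind structure (8 + 12 + 28 + 128 = 176). */
#define PBNDKB_AAD_LEN 176

static void compose_aad(uint8_t aad[PBNDKB_AAD_LEN],
                        const uint8_t iv[12],
                        const uint8_t btdata[128])
{
    memset(aad, 0, PBNDKB_AAD_LEN);
    memcpy(aad + 8, iv, 12);        /* bytes 19:8   = IV     */
    memcpy(aad + 48, btdata, 128);  /* bytes 175:48 = BTDATA */
}
```

This matches the later PCONFIG description of the AAD as "the concatenation of bytes 63:16 and bytes 255:128" of the bind structure.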
The encryption function produces a structure with 64 bytes of encrypted data and a 16-byte MAC. PBNDKB saves
these values to the corresponding fields in its output bind structure. Other fields are copied from the input bind
structure or written as zero, except the IV (which receives the randomly generated value) and the
USER_SUPP_CHALLENGE in the BTDATA, which is written as zero.
Operation
(* #UD if PBNDKB is not enumerated, CPL > 0, or not in 64-bit mode*)
IF CPUID.(EAX=07H, ECX=01H):EBX.TSE[bit 1] = 0 OR CPL > 0 OR not in 64-bit mode
THEN #UD; FI;
(* XOR the input keys with the random keys; this does not modify input bind structure in memory *)
TMP_BIND_STRUCT.BTENCDATA.DATA_KEY := RNG_DATA_KEY XOR TMP_BIND_STRUCT.BTENCDATA.DATA_KEY;
TMP_BIND_STRUCT.BTENCDATA.TWEAK_KEY := RNG_TWEAK_KEY XOR TMP_BIND_STRUCT.BTENCDATA.TWEAK_KEY;
FI;
(* Compose 176 bytes of additional authenticated data for use by authenticated decryption *)
AAD := Concatenation of bytes 63:16 and bytes 255:128 of TMP_BIND_STRUCT;
OUT_BIND_STRUCT.MAC := ENCRYPT_STRUCT.MAC;
OUT_BIND_STRUCT[bytes 23:16] := 0;
OUT_BIND_STRUCT.IV := TMP_IV;
OUT_BIND_STRUCT[bytes 63:36] := 0;
OUT_BIND_STRUCT.BTENCDATA := ENCRYPT_STRUCT.ENC_DATA;
OUT_BIND_STRUCT.BTDATA.USER_SUPP_CHALLENGE := 0;
OUT_BIND_STRUCT.BTDATA.KEY_GENERATION_CTRL := IN_BIND_STRUCT.BTDATA.KEY_GENERATION_CTRL;
OUT_BIND_STRUCT.BTDATA[bytes 127:33] := 0;
EXIT:
RFLAGS.CF := 0;
RFLAGS.PF := 0;
RFLAGS.AF := 0;
RFLAGS.OF := 0;
RFLAGS.SF := 0;
PCONFIG—Platform Configuration
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
NP 0F 01 C5 PCONFIG | A | V/V | PCONFIG | This instruction is used to execute functions for configuring platform features.
Description
The PCONFIG instruction allows software to configure certain platform features. It supports these features with
multiple leaf functions, selecting a leaf function using the value in EAX.
Depending on the leaf function, the registers RBX, RCX, and RDX may be used to provide input information or for
the instruction to report output information. Addresses and operands are 32 bits outside 64-bit mode and are 64
bits in 64-bit mode. The value of CS.D does not affect operand size or address size.
Executions of PCONFIG may fail for platform-specific reasons. An execution reports failure by setting the ZF flag
and loading EAX with a non-zero failure reason; a successful execution clears ZF and EAX.
Each PCONFIG leaf function applies to a specific hardware block called a PCONFIG target. The leaf function is
supported only if the processor supports that target. Each target is associated with a numerical target identifier,
and CPUID leaf 1BH (PCONFIG information) enumerates the identifiers of the supported targets. An attempt to
execute an undefined leaf function, or a leaf function that applies to an unsupported target identifier, results in a
general-protection exception (#GP).
not change the state of the TLB caches or memory pipeline. Software is responsible for taking appropriate actions
to ensure correct behavior.
The key table used by TME-MK is shared by all logical processors in a platform. For this reason, execution of this
leaf function must gain exclusive access to the key table before updating it. The leaf function does this by acquiring
a lock (implemented in the platform) and retaining that lock until the execution completes. An execution of the leaf
function may fail to acquire the lock if it is already in use. In this situation, the leaf function will load EAX with failure
reason 5 (DEVICE_BUSY). When this happens, the key table is not updated, and software should retry execution of
PCONFIG.
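The recommended handling of DEVICE_BUSY, retrying the leaf function until the platform key-table lock is free, can be sketched as a retry wrapper. Everything here is hypothetical scaffolding: `pconfig_with_retry` and the `fake_pconfig` stub stand in for code that would actually execute PCONFIG and inspect ZF/EAX:

```c
/* Retry a PCONFIG-style leaf that reports failure reason 5
 * (DEVICE_BUSY) while the platform key table lock is held. The leaf
 * itself is abstracted as a callback returning 0 on success or a
 * non-zero failure reason, mirroring EAX after PCONFIG. */
#define DEVICE_BUSY 5

static int pconfig_with_retry(int (*leaf)(void), int max_tries)
{
    int rc = DEVICE_BUSY;
    for (int i = 0; i < max_tries && rc == DEVICE_BUSY; i++)
        rc = leaf();            /* retry only while the lock is busy */
    return rc;
}

/* Demo stub: reports DEVICE_BUSY busy_count times, then succeeds. */
static int busy_count;
static int fake_pconfig(void)
{
    return (busy_count-- > 0) ? DEVICE_BUSY : 0;
}
```

A real caller would bound retries or back off, since the lock is shared by all logical processors in the platform.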
• KEY_FIELD_2: If the direct key-programming command is used (TSE_SET_KEY_DIRECT), this field carries
the software-supplied tweak key to be used for the KeyID. Otherwise, the field is ignored.
The TSE key table is shared by all logical processors in a platform. For this reason, execution of this leaf function
must gain exclusive access to the key table before updating it. The leaf function does this by acquiring a lock
(implemented in the platform) and retaining that lock until the execution completes. An execution of the leaf func-
tion may fail to acquire the lock if it is already in use. In this situation, the leaf function will load EAX with failure
reason 5 (DEVICE_BUSY). When this happens, the key table is not updated, and software should retry execution of
PCONFIG.
• IV: The initialization vector that PBNDKB used for encryption. The PCONFIG leaf function will use this in its
decryption of encrypted data and computation of the MAC.
• BTENCDATA: Data which had been encrypted by PBNDKB, containing the data and tweak keys to be used by
TSE.
• BTDATA: Data that was input to PBNDKB that was output without encryption. It has the following format:
— USER_SUPP_CHALLENGE (bytes 31:0): PBNDKB uses a value provided by software in its input bind
structure but writes zero to this field in the output bind structure to be used by PCONFIG. Software should
configure this field with the proper value before executing this PCONFIG leaf function.
— KEY_GENERATION_CTRL (byte 32): PBNDKB uses this value to determine whether to generate random
keys. The PCONFIG leaf function does not use this field.
— The remaining 95 bytes are reserved and must be zero.
The leaf function uses the entire BTDATA field when it computes the MAC.
The leaf function determines a 256-bit wrapping key by computing an HMAC based on SHA-256 using a 256-bit
platform-specific key and the USER_SUPP_CHALLENGE in the BTDATA field of the TSE_BIND_STRUCT.
Using the wrapping key, the leaf function uses an AES GCM authenticated decryption function to decrypt BTENC-
DATA and compute a MAC. The decryption function uses the following inputs:
• The 64-byte BTENCDATA from TSE_BIND_STRUCT to be decrypted.
• The 256-bit wrapping key.
• The 96-bit IV from TSE_BIND_STRUCT.
• Additional authenticated data that is the concatenation of bytes 63:16 and bytes 255:128 of the TSE_BIND_-
STRUCT. These 176 bytes comprise 8 bytes of zeroes, the 12-byte IV, 28 bytes of zeroes, and the 128-byte
BTDATA (of which the upper 95 bytes are zero).
• The length of the additional authenticated data (176).
The decryption function produces a structure with 64 bytes of decrypted data and a 16-byte MAC. The decrypted
data comprises a 256-bit data key and a 256-bit tweak key.
If the MAC produced by the decryption function differs from that provided in the TSE_BIND_STRUCT, the leaf func-
tion will load EAX with failure reason 7 (UNWRAP_FAILURE). Otherwise, the leaf function will attempt to program
the TSE key table for the selected KeyID with the keys contained in the decrypted data.
The TSE key table is shared by all logical processors in a platform. For this reason, execution of this leaf function
must gain exclusive access to the key table before updating it. The leaf function does this by acquiring a lock
(implemented in the platform) and retaining that lock until the execution completes. An execution of the leaf func-
tion may fail to acquire the lock if it is already in use. In this situation, the leaf function will load EAX with failure
reason 5 (DEVICE_BUSY). When this happens, the key table is not updated, and software should retry execution of
PCONFIG.
Operation
(* #UD if PCONFIG is not enumerated or CPL > 0 *)
IF CPUID.(EAX=07H, ECX=0H):EDX.PCONFIG[bit 18] = 0 OR CPL > 0
THEN #UD; FI;
(* Check that only one encryption algorithm is requested for the KeyID and it is one of the activated algorithms *)
IF TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL.ENC_ALG does not set exactly one bit OR
(TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL.ENC_ALG & IA32_TME_ACTIVATE[63:48]) = 0
THEN #GP(0); FI;
Attempt to acquire lock to gain exclusive access to platform key table for TME-MK;
IF attempt is unsuccessful
THEN (* PCONFIG failure *)
RFLAGS.ZF := 1;
RAX := DEVICE_BUSY; (* failure reason 5 *)
GOTO EXIT;
FI;
CASE (TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL.COMMAND) OF
0 (KEYID_SET_KEY_DIRECT):
Update TME-MK table for TMP_KEY_PROGRAM_STRUCT.KEYID as follows:
Encrypt with the selected key
Use the encryption algorithm selected by TMP_KEY_PROGRAM_STRUCT.KEYID_CTRL.ENC_ALG
(* The number of bytes used by the next two lines depends on selected encryption algorithm *)
DATA_KEY is TMP_KEY_PROGRAM_STRUCT.KEY_FIELD_1
TWEAK_KEY is TMP_KEY_PROGRAM_STRUCT.KEY_FIELD_2
BREAK;
1 (KEYID_SET_KEY_RANDOM):
Load TMP_RND_DATA_KEY with a random key using hardware RNG; (* key size depends on selected encryption algorithm *)
IF there was insufficient entropy
THEN (* PCONFIG failure *)
RFLAGS.ZF := 1;
RAX := ENTROPY_ERROR; (* failure reason 2 *)
Release lock on platform key table;
GOTO EXIT;
FI;
Load TMP_RND_TWEAK_KEY with a random key using hardware RNG; (* key size depends on selected encryption algorithm *)
2 (KEYID_CLEAR_KEY):
Update TME-MK table for TMP_KEY_PROGRAM_STRUCT.KEYID as follows:
Encrypt (or not) using the current configuration for TME
The specified encryption algorithm and key values are not used.
BREAK;
3 (KEYID_NO_ENCRYPT):
Update TME-MK table for TMP_KEY_PROGRAM_STRUCT.KEYID as follows:
Do not encrypt
The specified encryption algorithm and key values are not used.
BREAK;
ESAC;
Release lock on platform key table for TME-MK;
1 (TSE_KEY_PROGRAM):
IF CPUID function 1BH does not enumerate support for the TSE target (value 2)
THEN #GP(0); FI;
(* Check that only one encryption algorithm is requested for the KeyID and it is one of the activated algorithms *)
IF TMP_KEY_STRUCT.KEYID_CTRL.ENC_ALG does not set exactly one bit OR
(TMP_KEY_STRUCT.KEYID_CTRL.ENC_ALG & IA32_TSE_CAPABILITY[15:0]) = 0
THEN #GP(0); FI;
Attempt to acquire lock to gain exclusive access to platform key table for TSE;
IF attempt is unsuccessful
THEN (* PCONFIG failure *)
RFLAGS.ZF := 1;
RAX := DEVICE_BUSY; (* failure reason 5 *)
GOTO EXIT;
FI;
CASE (TMP_KEY_STRUCT.KEYID_CTRL.COMMAND) OF
0 (TSE_SET_KEY_DIRECT):
Update TSE table for TMP_KEY_STRUCT.KEYID as follows:
Encrypt with the selected key
Use the encryption algorithm selected by TMP_KEY_STRUCT.KEYID_CTRL.ENC_ALG
(* The number of bytes used by the next two lines depends on selected encryption algorithm *)
DATA_KEY is TMP_KEY_STRUCT.KEY_FIELD_1
TWEAK_KEY is TMP_KEY_STRUCT.KEY_FIELD_2
BREAK;
1 (TSE_NO_ENCRYPT):
Update TSE table for TMP_KEY_STRUCT.KEYID as follows:
Do not encrypt
The specified encryption algorithm and key values are not used.
BREAK;
ESAC;
Release lock on platform key table for TSE;
2 (TSE_KEY_PROGRAM_WRAPPED):
IF CPUID function 1BH does not enumerate support for the TSE target (value 2)
THEN #GP(0); FI;
(* Check that only one encryption algorithm is requested for the KeyID and it is one of the activated algorithms *)
IF RBX[39:24] does not set exactly one bit OR (RBX[39:24] & IA32_TSE_CAPABILITY[15:0]) = 0
THEN #GP(0); FI;
IF TMP_BIND_STRUCT.BTDATA.KEY_GENERATION_CTRL > 1
THEN #GP(0); FI;
IF bytes 127:33 of TMP_BIND_STRUCT.BTDATA are not all zero
THEN #GP(0); FI;
(* Compose 176 bytes of additional authenticated data for use by authenticated decryption *)
AAD := Concatenation of bytes 63:16 and bytes 255:128 of TMP_BIND_STRUCT;
Attempt to acquire lock to gain exclusive access to platform key table for TSE;
IF attempt is unsuccessful
THEN (* PCONFIG failure *)
RFLAGS.ZF := 1;
RAX := DEVICE_BUSY; (* failure reason 5 *)
GOTO EXIT;
FI;
ESAC;
RAX := 0;
RFLAGS.ZF := 0;
EXIT:
RFLAGS.CF := 0;
RFLAGS.PF := 0;
RFLAGS.AF := 0;
RFLAGS.OF := 0;
RFLAGS.SF := 0;
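The ENC_ALG validation used repeatedly in the Operation above, exactly one algorithm bit set, and that bit present in the platform's activated or capability mask, can be written compactly in C. The helper name `enc_alg_valid` is illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

/* True iff alg_mask has exactly one bit set and that bit is also set
 * in the platform mask (e.g., IA32_TME_ACTIVATE[63:48] for TME-MK or
 * IA32_TSE_CAPABILITY[15:0] for TSE), matching the ENC_ALG checks of
 * the key-programming leaves. */
static bool enc_alg_valid(uint16_t alg_mask, uint16_t platform_mask)
{
    bool exactly_one = (alg_mask != 0) &&
                       ((alg_mask & (alg_mask - 1)) == 0);  /* power of two */
    return exactly_one && (alg_mask & platform_mask) != 0;
}
```

When this predicate is false, the leaf raises #GP(0) before any key-table state is touched.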
Description
This instruction reads a software-provided list of up to 64 MSRs and stores their values in memory.
RDMSRLIST takes three implied input operands:
• RSI: Linear address of a table of MSR addresses (8 bytes per address).
• RDI: Linear address of a table into which MSR data is stored (8 bytes per MSR).
• RCX: 64-bit bitmask of valid bits for the MSRs. Bit 0 is the valid bit for entry 0 in each table, etc.
For each bit n of RCX from 0 to 63, if RCX[n] is 1, RDMSRLIST will read the MSR specified at entry [n] in the RSI
table and write its value to memory at entry [n] in the RDI table.
This implies a maximum of 64 MSRs that can be processed by one execution of this instruction. The processor
clears RCX[n] after it finishes handling that MSR. Similar to repeated string operations, RDMSRLIST supports
partial completion on interrupts, exceptions, and traps. In these situations, the saved RIP will point to the
RDMSRLIST instruction, while the RCX register will have cleared bits corresponding to all completed iterations.
This instruction must be executed at privilege level 0; otherwise, a general-protection exception (#GP(0)) is
generated. The instruction performs MSR-specific checks and respects the VMX MSR VM-execution controls in the
same manner as RDMSR.
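The RCX-driven, lowest-set-bit-first iteration described above can be modeled in C. The MSR read itself is replaced by a caller-supplied callback (`demo_rdmsr` below is a stand-in, not a real API), and `__builtin_ctzll` plays the role of TZCNT:

```c
#include <stdint.h>

/* Model of the RDMSRLIST iteration: find the lowest set bit of the
 * RCX mask, read the MSR whose address is at that table entry, store
 * the value at the same index in the data table, clear the bit, and
 * repeat. Partial completion is naturally expressed by the surviving
 * bits in *rcx. Requires GCC/Clang for __builtin_ctzll. */
static void rdmsrlist_model(uint64_t *rcx,
                            const uint64_t *addr_table,
                            uint64_t *data_table,
                            uint64_t (*rdmsr)(uint64_t msr_addr))
{
    while (*rcx != 0) {
        unsigned idx = (unsigned)__builtin_ctzll(*rcx);  /* TZCNT(RCX) */
        data_table[idx] = rdmsr(addr_table[idx]);
        *rcx &= ~(1ULL << idx);                          /* clear RCX[idx] */
    }
}

/* Demo stub standing in for an actual MSR read. */
static uint64_t demo_rdmsr(uint64_t msr_addr)
{
    return msr_addr * 2;
}
```

Note that this model serializes the reads; as the text below explains, the hardware may perform the underlying MSR reads out of order unless IA32_BARRIER appears in the address table.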
Although RDMSRLIST accesses the entries in the two tables in order, the actual reads of the MSRs may be
performed out of order: for table entries m < n, the processor may read the MSR for entry n before reading the
MSR for entry m. (This may be true also for a sequence of executions of RDMSR.) Ordering is guaranteed if the
address of the IA32_BARRIER MSR (2FH) appears in the table of MSR addresses. Specifically, if IA32_BARRIER
appears at entry m, then the MSR read for any entry n with n > m will not occur until (1) all instructions prior to
RDMSRLIST have completed locally; and (2) MSRs have been read for all table entries before entry m.
The processor is allowed to (but not required to) “load ahead” in the list. Examples:
• Use old memory type or TLB translation for loads/stores to list memory despite an MSR written by a previous
iteration changing MTRR or invalidating TLBs.
• Cause a page fault or EPT violation for a memory access to an entry > “n” in MSR address or data tables,
despite the processor only having read or written “n” MSRs.1
1. For example, the processor may take a page fault due to a linear address for the 10th entry in the MSR address table despite only
having completed the MSR writes up to entry 5.
• The value of ECX is in the range C0000000H–C0001FFFH and bit n in the read bitmap for high MSRs is 1, where
n is the value of the MSR address & 00001FFFH.
A VM exit for the above reasons for the RDMSRLIST instruction will specify exit reason 78 (decimal). The exit
qualification is set to the MSR address causing the VM exit if the “use MSR bitmaps” VM-execution control is 1.
If that control is 0, the exit qualification will be 0.
If software wants to emulate a single iteration of RDMSRLIST after a VM exit, it can use the exit qualification to
identify the MSR. Such software will need to write to the table of data. It can calculate the guest-linear address of
the table entry to write by using the values of RDI (the guest-linear address of the table) and RCX (the lowest bit
set in RCX identifies the specific table entry).
Operation
WHILE (RCX != 0) {
    MSR_index = TZCNT(RCX)
    MSR_address = mem[RSI + (MSR_index * 8)]
    VM exit if specified by VM-execution controls (for specified MSR_address)
    #GP(0) if MSR_address[63:32] != 0
    #GP(0) if MSR_address is not accessible for RDMSR
    mem[RDI + (MSR_index * 8)] = RDMSR(MSR_address)
    Clear RCX[MSR_index]
    Take any pending interrupts/traps
}
Flags Affected
None.
Description
This instruction loads one BF16 element from memory, converts it to FP32, and broadcasts it to a SIMD register.
This instruction does not generate floating-point exceptions and does not consult or update MXCSR.
Denormal BF16 input operands are treated as zeros (DAZ). Since any BF16 number can be represented in FP32,
the conversion result is exact and no rounding is needed.
Operation
VBCSTNEBF162PS dest, src (VEX encoded version)
VL = (128, 256)
KL = VL/32
Flags Affected
None.
Other Exceptions
See Exceptions Type 5.
Description
This instruction loads one FP16 element from memory, converts it to FP32, and broadcasts it to a SIMD register.
This instruction does not generate floating-point exceptions and does not consult or update MXCSR.
Input FP16 denormals are converted to normal FP32 numbers and not treated as zero. Since any FP16 number can
be represented in FP32, the conversion result is exact and no rounding is needed.
Operation
VBCSTNESH2PS dest, src (VEX encoded version)
VL = (128, 256)
KL = VL/32
Flags Affected
None.
Other Exceptions
See Exception Type 5.
Description
This instruction loads packed BF16 elements from memory, converts the even elements to FP32, and writes the
result to the destination SIMD register.
This instruction does not generate floating-point exceptions and does not consult or update MXCSR.
Denormal BF16 input operands are treated as zeros (DAZ). Since any BF16 number can be represented in FP32,
the conversion result is exact and no rounding is needed.
Operation
VCVTNEEBF162PS dest, src (VEX encoded version)
VL = (128, 256)
KL = VL/32
Flags Affected
None.
Other Exceptions
See Exception Type 4.
Description
This instruction loads packed FP16 elements from memory, converts the even elements to FP32, and writes the
result to the destination SIMD register.
This instruction does not generate floating-point exceptions and does not consult or update MXCSR.
Input FP16 denormals are converted to normal FP32 numbers and not treated as zero. Since any FP16 number can
be represented in FP32, the conversion result is exact and no rounding is needed.
Operation
VCVTNEEPH2PS dest, src (VEX encoded version)
VL = (128, 256)
KL = VL/32
Flags Affected
None.
Other Exceptions
See Exception Type 4.
Description
This instruction loads packed BF16 elements from memory, converts the odd elements to FP32, and writes the
result to the destination SIMD register.
This instruction does not generate floating-point exceptions and does not consult or update MXCSR.
Denormal BF16 input operands are treated as zeros (DAZ). Since any BF16 number can be represented in FP32,
the conversion result is exact and no rounding is needed.
Operation
VCVTNEOBF162PS dest, src (VEX encoded version)
VL = (128, 256)
KL = VL/32
Flags Affected
None.
Other Exceptions
See Exception Type 4.
Description
This instruction loads packed FP16 elements from memory, converts the odd elements to FP32, and writes the
result to the destination SIMD register.
This instruction does not generate floating-point exceptions and does not consult or update MXCSR.
Input FP16 denormals are converted to normal FP32 numbers and not treated as zero. Since any FP16 number can
be represented in FP32, the conversion result is exact and no rounding is needed.
Operation
VCVTNEOPH2PS dest, src (VEX encoded version)
VL = (128, 256)
KL = VL/32
Flags Affected
None.
Other Exceptions
See Exceptions Type 4.
VCVTNEPS2BF16—Convert Packed Single Data to Packed BF16 Data
Description
This instruction loads packed FP32 elements from a SIMD register or memory, converts the elements to BF16, and
writes the result to the destination SIMD register.
The upper bits of the destination register beyond the down-converted BF16 elements are zeroed.
This instruction uses “Round to nearest (even)” rounding mode. Output denormals are always flushed to zero and
input denormals are always treated as zero. MXCSR is not consulted nor updated.
Operation
VCVTNEPS2BF16 dest, src (VEX encoded version)
VL = (128, 256)
KL = VL/16
define convert_fp32_to_bfloat16(x):
    IF x is zero or denormal:
        dest[15] := x[31] // sign-preserving zero (denormals go to zero)
        dest[14:0] := 0
    ELSE IF x is infinity:
        dest[15:0] := x[31:16]
    ELSE IF x is NaN:
        dest[15:0] := x[31:16] // truncate and set the MSB of the mantissa to force a QNaN
        dest[6] := 1
    ELSE: // normal number
        lsb := x[16]
        rounding_bias := 0x00007FFF + lsb
        temp[31:0] := x[31:0] + rounding_bias // integer add
        dest[15:0] := temp[31:16]
    return dest
FOR i := 0 to KL/2-1:
    t := src.fp32[i]
    dest.word[i] := convert_fp32_to_bfloat16(t)
DEST[MAXVL-1:VL/2] := 0
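The pseudocode above can be transcribed directly into Python, using the same bias trick for round-to-nearest-even (the function name is illustrative, not part of any API):

```python
import struct

def fp32_to_bf16(x: float) -> int:
    """Round-to-nearest-even FP32 -> BF16, mirroring convert_fp32_to_bfloat16."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    exp = (bits >> 23) & 0xFF
    if exp == 0x00:                 # zero or denormal: sign-preserving zero
        return (bits >> 16) & 0x8000
    if exp == 0xFF:
        if bits & 0x7FFFFF:         # NaN: truncate and force the quiet bit
            return (bits >> 16) | 0x0040
        return bits >> 16           # infinity: exact truncation
    lsb = (bits >> 16) & 1          # round to nearest, ties to even
    return ((bits + 0x7FFF + lsb) >> 16) & 0xFFFF
```

For example, 1.0 (FP32 encoding 0x3F800000) converts to BF16 encoding 0x3F80.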
Flags Affected
None.
Other Exceptions
See Exceptions Type 4.
VPDPB[SU,UU,SS]D[,S]—Multiply and Add Unsigned and Signed Bytes With and Without
Saturation
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.128.F2.0F38.W0 50 /r VPDPBSSD xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of signed bytes in xmm3/m128 with corresponding signed bytes of xmm2, summing those products and adding them to the doubleword result in xmm1.
VEX.256.F2.0F38.W0 50 /r VPDPBSSD ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of signed bytes in ymm3/m256 with corresponding signed bytes of ymm2, summing those products and adding them to the doubleword result in ymm1.
VEX.128.F2.0F38.W0 51 /r VPDPBSSDS xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of signed bytes in xmm3/m128 with corresponding signed bytes of xmm2, summing those products and adding them to the doubleword result, with signed saturation in xmm1.
VEX.256.F2.0F38.W0 51 /r VPDPBSSDS ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of signed bytes in ymm3/m256 with corresponding signed bytes of ymm2, summing those products and adding them to the doubleword result, with signed saturation in ymm1.
VEX.128.F3.0F38.W0 50 /r VPDPBSUD xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of signed bytes in xmm3/m128 with corresponding unsigned bytes of xmm2, summing those products and adding them to doubleword result in xmm1.
VEX.256.F3.0F38.W0 50 /r VPDPBSUD ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of signed bytes in ymm3/m256 with corresponding unsigned bytes of ymm2, summing those products and adding them to doubleword result in ymm1.
VEX.128.F3.0F38.W0 51 /r VPDPBSUDS xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of signed bytes in xmm3/m128 with corresponding unsigned bytes of xmm2, summing those products and adding them to doubleword result, with signed saturation in xmm1.
VEX.256.F3.0F38.W0 51 /r VPDPBSUDS ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of signed bytes in ymm3/m256 with corresponding unsigned bytes of ymm2, summing those products and adding them to doubleword result, with signed saturation in ymm1.
VEX.128.NP.0F38.W0 50 /r VPDPBUUD xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of unsigned bytes in xmm3/m128 with corresponding unsigned bytes of xmm2, summing those products and adding them to doubleword result in xmm1.
VEX.256.NP.0F38.W0 50 /r VPDPBUUD ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of unsigned bytes in ymm3/m256 with corresponding unsigned bytes of ymm2, summing those products and adding them to doubleword result in ymm1.
VEX.128.NP.0F38.W0 51 /r VPDPBUUDS xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of unsigned bytes in xmm3/m128 with corresponding unsigned bytes of xmm2, summing those products and adding them to doubleword result, with unsigned saturation in xmm1.
VEX.256.NP.0F38.W0 51 /r VPDPBUUDS ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT8 | Multiply groups of 4 pairs of unsigned bytes in ymm3/m256 with corresponding unsigned bytes of ymm2, summing those products and adding them to doubleword result, with unsigned saturation in ymm1.
Description
Multiplies the individual bytes of the first source operand by the corresponding bytes of the second source operand,
producing intermediate word results. The word results are then summed and accumulated in the destination dword
element size operand.
For unsigned saturation, when an individual result value is beyond the range of an unsigned doubleword (that is,
greater than FFFF_FFFFH), the saturated unsigned doubleword integer value of FFFF_FFFFH is stored in the
doubleword destination.
For signed saturation, when an individual result is beyond the range of a signed doubleword integer (that is, greater
than 7FFF_FFFFH or less than 8000_0000H), the saturated value of 7FFF_FFFFH or 8000_0000H, respectively, is
written to the destination operand.
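One destination dword of the non-saturating signed form can be modeled as follows (an illustrative sketch, not the manual's pseudocode):

```python
def vpdpbssd_lane(acc: int, src1_bytes, src2_bytes) -> int:
    """One dword lane of VPDPBSSD: four signed-byte products summed into a
    signed 32-bit accumulator with ordinary wraparound (no saturation)."""
    def s8(b):                       # reinterpret a byte value as signed
        return b - 256 if b >= 128 else b
    total = acc + sum(s8(a) * s8(b) for a, b in zip(src1_bytes, src2_bytes))
    return ((total + 2**31) % 2**32) - 2**31   # wrap to a signed doubleword
```

For example, with accumulator 10 and byte groups [1, 2, 3, 4] and [5, 6, 7, 8], the lane result is 10 + 5 + 12 + 21 + 32 = 80.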
Operation
VPDPB[SU,UU,SS]D[,S] dest, src1, src2 (VEX encoded version)
VL = (128, 256)
KL = VL/32
ORIGDEST := DEST
FOR i := 0 TO KL-1:
    IF *src1 is signed*:
        src1extend := SIGN_EXTEND // SU, SS
    ELSE:
        src1extend := ZERO_EXTEND // UU
    IF *src2 is signed*:
        src2extend := SIGN_EXTEND // SS
    ELSE:
        src2extend := ZERO_EXTEND // UU, SU
    // Multiply pairs of bytes, producing word-sized intermediate products
    p1word := src1extend(SRC1.byte[4*i]) * src2extend(SRC2.byte[4*i])
    p2word := src1extend(SRC1.byte[4*i+1]) * src2extend(SRC2.byte[4*i+1])
    p3word := src1extend(SRC1.byte[4*i+2]) * src2extend(SRC2.byte[4*i+2])
    p4word := src1extend(SRC1.byte[4*i+3]) * src2extend(SRC2.byte[4*i+3])
    IF *saturating*:
        IF *UU instruction version*:
            DEST.dword[i] := UNSIGNED_DWORD_SATURATE(ORIGDEST.dword[i] + p1word + p2word + p3word + p4word)
        ELSE:
            DEST.dword[i] := SIGNED_DWORD_SATURATE(ORIGDEST.dword[i] + p1word + p2word + p3word + p4word)
    ELSE:
        DEST.dword[i] := ORIGDEST.dword[i] + p1word + p2word + p3word + p4word
DEST[MAXVL-1:VL] := 0
Other Exceptions
See Exceptions Type 4.
VPDPW[SU,US,UU]D[,S]—Multiply and Add Unsigned and Signed Words With and Without
Saturation
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.128.F3.0F38.W0 D2 /r VPDPWSUD xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of signed words in xmm3/m128 with corresponding unsigned words of xmm2, summing those products and adding them to the doubleword result in xmm1.
VEX.256.F3.0F38.W0 D2 /r VPDPWSUD ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of signed words in ymm3/m256 with corresponding unsigned words of ymm2, summing those products and adding them to the doubleword result in ymm1.
VEX.128.F3.0F38.W0 D3 /r VPDPWSUDS xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of signed words in xmm3/m128 with corresponding unsigned words of xmm2, summing those products and adding them to the doubleword result, with signed saturation in xmm1.
VEX.256.F3.0F38.W0 D3 /r VPDPWSUDS ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of signed words in ymm3/m256 with corresponding unsigned words of ymm2, summing those products and adding them to the doubleword result, with signed saturation in ymm1.
VEX.128.66.0F38.W0 D2 /r VPDPWUSD xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of unsigned words in xmm3/m128 with corresponding signed words of xmm2, summing those products and adding them to doubleword result in xmm1.
VEX.256.66.0F38.W0 D2 /r VPDPWUSD ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of unsigned words in ymm3/m256 with corresponding signed words of ymm2, summing those products and adding them to doubleword result in ymm1.
VEX.128.66.0F38.W0 D3 /r VPDPWUSDS xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of unsigned words in xmm3/m128 with corresponding signed words of xmm2, summing those products and adding them to doubleword result, with signed saturation in xmm1.
VEX.256.66.0F38.W0 D3 /r VPDPWUSDS ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of unsigned words in ymm3/m256 with corresponding signed words of ymm2, summing those products and adding them to doubleword result, with signed saturation in ymm1.
VEX.128.NP.0F38.W0 D2 /r VPDPWUUD xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of unsigned words in xmm3/m128 with corresponding unsigned words of xmm2, summing those products and adding them to doubleword result in xmm1.
VEX.256.NP.0F38.W0 D2 /r VPDPWUUD ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of unsigned words in ymm3/m256 with corresponding unsigned words of ymm2, summing those products and adding them to doubleword result in ymm1.
VEX.128.NP.0F38.W0 D3 /r VPDPWUUDS xmm1, xmm2, xmm3/m128 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of unsigned words in xmm3/m128 with corresponding unsigned words of xmm2, summing those products and adding them to doubleword result, with unsigned saturation in xmm1.
VEX.256.NP.0F38.W0 D3 /r VPDPWUUDS ymm1, ymm2, ymm3/m256 | A | V/V | AVX-VNNI-INT16 | Multiply groups of 2 pairs of unsigned words in ymm3/m256 with corresponding unsigned words of ymm2, summing those products and adding them to doubleword result, with unsigned saturation in ymm1.
Description
Multiplies the individual words of the first source operand by the corresponding words of the second source
operand, producing intermediate dword results. The dword results are then summed and accumulated in the desti-
nation dword element size operand.
For unsigned saturation, when an individual result value is beyond the range of an unsigned doubleword (that is,
greater than FFFF_FFFFH), the saturated unsigned doubleword integer value of FFFF_FFFFH is stored in the double-
word destination.
For signed saturation, when an individual result is beyond the range of a signed doubleword integer (that is,
greater than 7FFF_FFFFH or less than 8000_0000H), the saturated value of 7FFF_FFFFH or 8000_0000H, respec-
tively, is written to the destination operand.
The EVEX version of VPDPWSSD[,S] was previously introduced with AVX512-VNNI. The VEX version of
VPDPWSSD[,S] was previously introduced with AVX-VNNI.
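For example, one dword lane of the signed-saturating VPDPWSUDS form might be modeled as follows (an illustrative sketch, not the manual's pseudocode):

```python
def vpdpwsuds_lane(acc: int, src1_words, src2_words) -> int:
    """One dword lane of VPDPWSUDS: signed words times unsigned words,
    two products accumulated with signed doubleword saturation."""
    def s16(w):                      # reinterpret a word value as signed
        return w - 65536 if w >= 32768 else w
    total = acc + sum(s16(a) * (b & 0xFFFF)
                      for a, b in zip(src1_words, src2_words))
    return max(-2**31, min(2**31 - 1, total))  # signed saturation
```

With word pairs [1, 2] and [3, 4] and a zero accumulator, the lane result is 3 + 8 = 11; an accumulator already at 7FFF_FFFFH stays saturated at 7FFF_FFFFH for any positive products.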
Operation
VPDPW[SU,US,UU]D[,S] dest, src1, src2 (VEX encoded version)
VL = (128, 256)
KL = VL/32
ORIGDEST := DEST
IF *src1 is signed*: // SU
    src1extend := SIGN_EXTEND
ELSE: // UU, US
    src1extend := ZERO_EXTEND
IF *src2 is signed*: // US
    src2extend := SIGN_EXTEND
ELSE: // UU, SU
    src2extend := ZERO_EXTEND
FOR i := 0 TO KL-1:
    p1dword := src1extend(SRC1.word[2*i]) * src2extend(SRC2.word[2*i])
    p2dword := src1extend(SRC1.word[2*i+1]) * src2extend(SRC2.word[2*i+1])
    IF *saturating*:
        IF *UU instruction version*:
            DEST.dword[i] := UNSIGNED_DWORD_SATURATE(ORIGDEST.dword[i] + p1dword + p2dword)
        ELSE:
            DEST.dword[i] := SIGNED_DWORD_SATURATE(ORIGDEST.dword[i] + p1dword + p2dword)
    ELSE:
        DEST.dword[i] := ORIGDEST.dword[i] + p1dword + p2dword
DEST[MAXVL-1:VL] := 0
Other Exceptions
See Exceptions Type 4.
VPMADD52HUQ—Packed Multiply of Unsigned 52-Bit Integers and Add the High 52-Bit
Products to Qword Accumulators
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.128.66.0F38.W1 B5 /r VPMADD52HUQ xmm1, xmm2, xmm3/m128 | A | V/V | AVX-IFMA | Multiply unsigned 52-bit integers in xmm2 and xmm3/m128 and add the high 52 bits of the 104-bit product to the qword unsigned integers in xmm1.
VEX.256.66.0F38.W1 B5 /r VPMADD52HUQ ymm1, ymm2, ymm3/m256 | A | V/V | AVX-IFMA | Multiply unsigned 52-bit integers in ymm2 and ymm3/m256 and add the high 52 bits of the 104-bit product to the qword unsigned integers in ymm1.
Description
Multiplies packed unsigned 52-bit integers in each qword element of the first source operand (the second operand)
with the packed unsigned 52-bit integers in the corresponding elements of the second source operand (the third
operand) to form packed 104-bit intermediate results. The high 52-bit, unsigned integer of each 104-bit product is
added to the corresponding qword unsigned integer of the destination operand (the first operand).
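Per qword lane, the high-half multiply-accumulate can be sketched in Python (illustrative only; the function name is not a real intrinsic):

```python
MASK52 = (1 << 52) - 1
MASK64 = (1 << 64) - 1

def vpmadd52huq_lane(acc: int, a: int, b: int) -> int:
    # multiply the low 52 bits of each source, keep the high 52 bits
    # of the 104-bit product, and add into the 64-bit accumulator
    prod = (a & MASK52) * (b & MASK52)
    return (acc + (prod >> 52)) & MASK64
```

The companion VPMADD52LUQ differs only in keeping `prod & MASK52` (the low 52 bits) instead of `prod >> 52`.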
Operation
VPMADD52HUQ srcdest, src1, src2 (VEX version)
VL = (128,256)
KL = VL/64
FOR i in 0 .. KL-1:
    temp128 := zeroextend64(src1.qword[i][51:0]) * zeroextend64(src2.qword[i][51:0])
    srcdest.qword[i] := srcdest.qword[i] + zeroextend64(temp128[103:52])
srcdest[MAXVL-1:VL] := 0
Other Exceptions
See Exceptions Type 4.
VPMADD52LUQ—Packed Multiply of Unsigned 52-Bit Integers and Add the Low 52-Bit Products
to Qword Accumulators
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.128.66.0F38.W1 B4 /r VPMADD52LUQ xmm1, xmm2, xmm3/m128 | A | V/V | AVX-IFMA | Multiply unsigned 52-bit integers in xmm2 and xmm3/m128 and add the low 52 bits of the 104-bit product to the qword unsigned integers in xmm1.
VEX.256.66.0F38.W1 B4 /r VPMADD52LUQ ymm1, ymm2, ymm3/m256 | A | V/V | AVX-IFMA | Multiply unsigned 52-bit integers in ymm2 and ymm3/m256 and add the low 52 bits of the 104-bit product to the qword unsigned integers in ymm1.
Description
Multiplies packed unsigned 52-bit integers in each qword element of the first source operand (the second operand)
with the packed unsigned 52-bit integers in the corresponding elements of the second source operand (the third
operand) to form packed 104-bit intermediate results. The low 52-bit, unsigned integer of each 104-bit product is
added to the corresponding qword unsigned integer of the destination operand (the first operand).
Operation
VPMADD52LUQ srcdest, src1, src2 (VEX version)
VL = (128,256)
KL = VL/64
FOR i in 0 .. KL-1:
    temp128 := zeroextend64(src1.qword[i][51:0]) * zeroextend64(src2.qword[i][51:0])
    srcdest.qword[i] := srcdest.qword[i] + zeroextend64(temp128[51:0])
srcdest[MAXVL-1:VL] := 0
Other Exceptions
See Exceptions Type 4.
VSHA512MSG1—Perform an Intermediate Calculation for the Next Four SHA512 Message Qwords
Description
The VSHA512MSG1 instruction is one of the two SHA512 message scheduling instructions. The instruction
performs an intermediate calculation for the next four SHA512 message qwords.
See https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf for more information on the SHA512 standard.
Operation
define ROR64(qword, n):
    count := n % 64
    dest := (qword >> count) | (qword << (64-count))
    return dest
define s0(qword):
    return ROR64(qword, 1) ^ ROR64(qword, 8) ^ SHR64(qword, 7)
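The helpers above can be checked with a direct Python transcription (ROR64 rotates right within 64 bits; SHR64 is a plain right shift):

```python
MASK64 = (1 << 64) - 1

def ror64(q: int, n: int) -> int:
    # rotate a 64-bit value right by n (mod 64) bit positions
    n %= 64
    return ((q >> n) | (q << (64 - n))) & MASK64

def s0(q: int) -> int:
    # sigma0 from FIPS 180-4: ROTR^1(x) ^ ROTR^8(x) ^ SHR^7(x)
    return ror64(q, 1) ^ ror64(q, 8) ^ (q >> 7)
```

For example, `ror64(1, 1)` moves the low bit to the top, giving 0x8000000000000000.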
Flags Affected
None.
Other Exceptions
See Exceptions Type 6.
VSHA512MSG2—Perform a Final Calculation for the Next Four SHA512 Message Qwords
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.256.F2.0F38.W0 CD 11:rrr:bbb VSHA512MSG2 ymm1, ymm2 | A | V/V | AVX SHA512 | Performs the final calculation for the next four SHA512 message qwords using previous message qwords from ymm1 and ymm2, storing the result in ymm1.
Description
The VSHA512MSG2 instruction is one of the two SHA512 message scheduling instructions. The instruction
performs the final calculation for the next four SHA512 message qwords.
See https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf for more information on the SHA512 standard.
Operation
define ROR64(qword, n):
    count := n % 64
    dest := (qword >> count) | (qword << (64-count))
    return dest
define s1(qword):
    return ROR64(qword, 19) ^ ROR64(qword, 61) ^ SHR64(qword, 6)
SRCDEST.qword[3] := W[19]
SRCDEST.qword[2] := W[18]
SRCDEST.qword[1] := W[17]
SRCDEST.qword[0] := W[16]
Flags Affected
None.
Other Exceptions
See Exceptions Type 6.
VSHA512RNDS2—Perform Two Rounds of SHA512 Operation
Description
The VSHA512RNDS2 instruction performs two rounds of SHA512 operation using initial SHA512 state (C,D,G,H)
from the first operand, an initial SHA512 state (A,B,E,F) from the second operand, and a pre-computed sum of the
next two round message qwords and the corresponding round constants from the third operand (only the two
lower qwords of the third operand). The updated SHA512 state (A,B,E,F) is written to the first operand, and the
second operand can be used as the updated state (C,D,G,H) in later rounds.
See https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf for more information on the SHA512 standard.
Operation
define ROR64(qword, n):
    count := n % 64
    dest := (qword >> count) | (qword << (64-count))
    return dest
define cap_sigma0(qword):
    return ROR64(qword, 28) ^ ROR64(qword, 34) ^ ROR64(qword, 39)
define cap_sigma1(qword):
    return ROR64(qword, 14) ^ ROR64(qword, 18) ^ ROR64(qword, 41)
define MAJ(a,b,c):
    return (a & b) ^ (a & c) ^ (b & c)
define CH(e,f,g):
    return (e & f) ^ (g & ~e)
FOR i in 0..1:
    A[i+1] := CH(E[i], F[i], G[i]) + cap_sigma1(E[i]) + WK[i] + H[i] + MAJ(A[i], B[i], C[i]) + cap_sigma0(A[i])
    B[i+1] := A[i]
    C[i+1] := B[i]
    D[i+1] := C[i]
    E[i+1] := CH(E[i], F[i], G[i]) + cap_sigma1(E[i]) + WK[i] + H[i] + D[i]
    F[i+1] := E[i]
    G[i+1] := F[i]
    H[i+1] := G[i]
SRCDEST.qword[3] := A[2]
SRCDEST.qword[2] := B[2]
SRCDEST.qword[1] := E[2]
SRCDEST.qword[0] := F[2]
Flags Affected
None.
Other Exceptions
See Exceptions Type 6.
VSM3MSG1—Perform Initial Calculation for the Next Four SM3 Message Words
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.128.NP.0F38.W0 DA /r VSM3MSG1 xmm1, xmm2, xmm3/m128 | A | V/V | AVX SM3 | Performs an initial calculation for the next four SM3 message words using previous message words from xmm2 and xmm3/m128, storing the result in xmm1.
Description
The VSM3MSG1 instruction is one of the two SM3 message scheduling instructions. The instruction performs an
initial calculation for the next four SM3 message words.
Operation
define ROL32(dword, n):
    count := n % 32
    dest := (dword << count) | (dword >> (32-count))
    return dest
define P1(x):
    return x ^ ROL32(x, 15) ^ ROL32(x, 23)
W[7] := SRCDEST.dword[0]
W[8] := SRCDEST.dword[1]
W[9] := SRCDEST.dword[2]
W[10] := SRCDEST.dword[3]
W[13] := SRC1.dword[0]
W[14] := SRC1.dword[1]
W[15] := SRC1.dword[2]
SRCDEST.dword[0] := P1(TMP0)
SRCDEST.dword[1] := P1(TMP1)
SRCDEST.dword[2] := P1(TMP2)
SRCDEST.dword[3] := P1(TMP3)
Flags Affected
None.
Other Exceptions
See Exceptions Type 4.
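The ROL32 and P1 helpers defined in the Operation section above can be transcribed directly into Python (an illustrative sketch for checking values, not part of any API):

```python
MASK32 = (1 << 32) - 1

def rol32(d: int, n: int) -> int:
    # rotate a 32-bit value left by n (mod 32) bit positions
    n %= 32
    return ((d << n) | (d >> (32 - n))) & MASK32

def p1(x: int) -> int:
    # SM3 permutation P1(X) = X ^ (X <<< 15) ^ (X <<< 23)
    return x ^ rol32(x, 15) ^ rol32(x, 23)
```

For example, `p1(1)` sets bits 0, 15, and 23, giving 0x00808001.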
VSM3MSG2—Perform Final Calculation for the Next Four SM3 Message Words
Opcode/Instruction | Op/En | 64/32 bit Mode Support | CPUID Feature Flag | Description
VEX.128.66.0F38.W0 DA /r VSM3MSG2 xmm1, xmm2, xmm3/m128 | A | V/V | AVX SM3 | Performs the final calculation for the next four SM3 message words using previous message words from xmm2 and xmm3/m128, storing the result in xmm1.
Description
The VSM3MSG2 instruction is one of the two SM3 message scheduling instructions. The instruction performs the
final calculation for the next four SM3 message words.
Operation
//see the VSM3MSG1 instruction for definition of ROL32()
WTMP[0] := SRCDEST.dword[0]
WTMP[1] := SRCDEST.dword[1]
WTMP[2] := SRCDEST.dword[2]
WTMP[3] := SRCDEST.dword[3]
// Dword array W[] indices are based on the SM3 specification.
W[3] := SRC1.dword[0]
W[4] := SRC1.dword[1]
W[5] := SRC1.dword[2]
W[6] := SRC1.dword[3]
W[10] := SRC2.dword[0]
W[11] := SRC2.dword[1]
W[12] := SRC2.dword[2]
W[13] := SRC2.dword[3]
SRCDEST.dword[0] := W[16]
SRCDEST.dword[1] := W[17]
SRCDEST.dword[2] := W[18]
SRCDEST.dword[3] := W[19]
Flags Affected
None.
Other Exceptions
See Exceptions Type 4.
VSM3RNDS2—Perform Two Rounds of SM3 Operation
Description
The VSM3RNDS2 instruction performs two rounds of SM3 operation using an initial SM3 state (C, D, G, H) from the
first operand, an initial SM3 state (A, B, E, F) from the second operand, and pre-computed words from the third
operand. The first operand's initial (C, D, G, H) state is assumed to hold the non-rotated left variables from the
previous state. The updated SM3 state (A, B, E, F) is written to the first operand.
The imm8 should contain the even round number for the first of the two rounds computed by this instruction. The
computation masks the imm8 value by AND’ing it with 0x3E so that only even round numbers from 0 through 62
are used for this operation.
Operation
//see the VSM3MSG1 instruction for definition of ROL32()
define P0(dword):
    return dword ^ ROL32(dword, 9) ^ ROL32(dword, 17)
define FF(x, y, z, round):
    IF round < 16:
        return x ^ y ^ z
    ELSE:
        return (x & y) | (x & z) | (y & z)
define GG(x, y, z, round):
    IF round < 16:
        return x ^ y ^ z
    ELSE:
        return (x & y) | ((~x) & z)
W[5] := SRC2.dword[3]
C[0] := ROL32(C[0], 9)
D[0] := ROL32(D[0], 9)
G[0] := ROL32(G[0], 19)
H[0] := ROL32(H[0], 19)
FOR i in 0..1:
    S1 := ROL32((ROL32(A[i], 12) + E[i] + CONST), 7)
    S2 := S1 ^ ROL32(A[i], 12)
    T1 := FF(A[i], B[i], C[i], ROUND) + D[i] + S2 + (W[i] ^ W[i+4])
    T2 := GG(E[i], F[i], G[i], ROUND) + H[i] + S1 + W[i]
    D[i+1] := C[i]
    C[i+1] := ROL32(B[i], 9)
    B[i+1] := A[i]
    A[i+1] := T1
    H[i+1] := G[i]
    G[i+1] := ROL32(F[i], 19)
    F[i+1] := E[i]
    E[i+1] := P0(T2)
    CONST := ROL32(CONST, 1)
SRCDEST.dword[3] := A[2]
SRCDEST.dword[2] := B[2]
SRCDEST.dword[1] := E[2]
SRCDEST.dword[0] := F[2]
Flags Affected
None.
Other Exceptions
See Exceptions Type 4.
VSM4KEY4—Perform Four Rounds of SM4 Key Expansion
Description
The VSM4KEY4 instruction performs four rounds of SM4 key expansion. The instruction operates on independent
128-bit lanes.
Additional details can be found at: https://tools.ietf.org/html/draft-ribose-cfrg-sm4-10.
Both SM4 instructions use a common sbox table:
BYTE sbox[256] = {
0xD6, 0x90, 0xE9, 0xFE, 0xCC, 0xE1, 0x3D, 0xB7, 0x16, 0xB6, 0x14, 0xC2, 0x28, 0xFB, 0x2C, 0x05,
0x2B, 0x67, 0x9A, 0x76, 0x2A, 0xBE, 0x04, 0xC3, 0xAA, 0x44, 0x13, 0x26, 0x49, 0x86, 0x06, 0x99,
0x9C, 0x42, 0x50, 0xF4, 0x91, 0xEF, 0x98, 0x7A, 0x33, 0x54, 0x0B, 0x43, 0xED, 0xCF, 0xAC, 0x62,
0xE4, 0xB3, 0x1C, 0xA9, 0xC9, 0x08, 0xE8, 0x95, 0x80, 0xDF, 0x94, 0xFA, 0x75, 0x8F, 0x3F, 0xA6,
0x47, 0x07, 0xA7, 0xFC, 0xF3, 0x73, 0x17, 0xBA, 0x83, 0x59, 0x3C, 0x19, 0xE6, 0x85, 0x4F, 0xA8,
0x68, 0x6B, 0x81, 0xB2, 0x71, 0x64, 0xDA, 0x8B, 0xF8, 0xEB, 0x0F, 0x4B, 0x70, 0x56, 0x9D, 0x35,
0x1E, 0x24, 0x0E, 0x5E, 0x63, 0x58, 0xD1, 0xA2, 0x25, 0x22, 0x7C, 0x3B, 0x01, 0x21, 0x78, 0x87,
0xD4, 0x00, 0x46, 0x57, 0x9F, 0xD3, 0x27, 0x52, 0x4C, 0x36, 0x02, 0xE7, 0xA0, 0xC4, 0xC8, 0x9E,
0xEA, 0xBF, 0x8A, 0xD2, 0x40, 0xC7, 0x38, 0xB5, 0xA3, 0xF7, 0xF2, 0xCE, 0xF9, 0x61, 0x15, 0xA1,
0xE0, 0xAE, 0x5D, 0xA4, 0x9B, 0x34, 0x1A, 0x55, 0xAD, 0x93, 0x32, 0x30, 0xF5, 0x8C, 0xB1, 0xE3,
0x1D, 0xF6, 0xE2, 0x2E, 0x82, 0x66, 0xCA, 0x60, 0xC0, 0x29, 0x23, 0xAB, 0x0D, 0x53, 0x4E, 0x6F,
0xD5, 0xDB, 0x37, 0x45, 0xDE, 0xFD, 0x8E, 0x2F, 0x03, 0xFF, 0x6A, 0x72, 0x6D, 0x6C, 0x5B, 0x51,
0x8D, 0x1B, 0xAF, 0x92, 0xBB, 0xDD, 0xBC, 0x7F, 0x11, 0xD9, 0x5C, 0x41, 0x1F, 0x10, 0x5A, 0xD8,
0x0A, 0xC1, 0x31, 0x88, 0xA5, 0xCD, 0x7B, 0xBD, 0x2D, 0x74, 0xD0, 0x12, 0xB8, 0xE5, 0xB4, 0xB0,
0x89, 0x69, 0x97, 0x4A, 0x0C, 0x96, 0x77, 0x7E, 0x65, 0xB9, 0xF1, 0x09, 0xC5, 0x6E, 0xC6, 0x84,
0x18, 0xF0, 0x7D, 0xEC, 0x3A, 0xDC, 0x4D, 0x20, 0x79, 0xEE, 0x5F, 0x3E, 0xD7, 0xCB, 0x39, 0x48
}
Operation
VSM4KEY4 dest, src1, src2 (VEX encoded version)
VL = (128, 256)
KL = VL/128
define ROL32(dword, n):
    count := n % 32
    dest := (dword << count) | (dword >> (32-count))
    return dest
define lower_t(dword):
    tmp.byte[0] := SBOX_BYTE(dword, 0)
    tmp.byte[1] := SBOX_BYTE(dword, 1)
    tmp.byte[2] := SBOX_BYTE(dword, 2)
    tmp.byte[3] := SBOX_BYTE(dword, 3)
    return tmp
define L_KEY(dword):
    return dword ^ ROL32(dword, 13) ^ ROL32(dword, 23)
define T_KEY(dword):
    return L_KEY(lower_t(dword))
for i in 0..KL-1:
    P[0] := SRC1.xmm[i].dword[0]
    P[1] := SRC1.xmm[i].dword[1]
    P[2] := SRC1.xmm[i].dword[2]
    P[3] := SRC1.xmm[i].dword[3]
    C[0] := P[0] ^ T_KEY(P[1] ^ P[2] ^ P[3] ^ SRC2.xmm[i].dword[0])
    C[1] := P[1] ^ T_KEY(P[2] ^ P[3] ^ C[0] ^ SRC2.xmm[i].dword[1])
    C[2] := P[2] ^ T_KEY(P[3] ^ C[0] ^ C[1] ^ SRC2.xmm[i].dword[2])
    C[3] := P[3] ^ T_KEY(C[0] ^ C[1] ^ C[2] ^ SRC2.xmm[i].dword[3])
    DEST.xmm[i].dword[0] := C[0]
    DEST.xmm[i].dword[1] := C[1]
    DEST.xmm[i].dword[2] := C[2]
    DEST.xmm[i].dword[3] := C[3]
DEST[MAXVL-1:VL] := 0
Flags Affected
None.
Other Exceptions
See Exceptions Type 6.
VSM4RNDS4—Perform Four Rounds of SM4 Encryption
Description
The VSM4RNDS4 instruction performs four rounds of SM4 encryption. The instruction operates on independent
128-bit lanes.
Additional details can be found at: https://tools.ietf.org/html/draft-ribose-cfrg-sm4-10.
See “VSM4KEY4—Perform Four Rounds of SM4 Key Expansion” for the sbox table.
Operation
VSM4RNDS4 dest, src1, src2 (VEX encoded version)
VL = (128, 256)
KL = VL/128
// see the VSM4KEY4 instruction for the definition of ROL32, lower_t
define L_RND(dword):
    tmp := dword
    tmp := tmp ^ ROL32(dword, 2)
    tmp := tmp ^ ROL32(dword, 10)
    tmp := tmp ^ ROL32(dword, 18)
    tmp := tmp ^ ROL32(dword, 24)
    return tmp
define T_RND(dword):
    return L_RND(lower_t(dword))
for i in 0..KL-1:
    P[0] := SRC1.xmm[i].dword[0]
    P[1] := SRC1.xmm[i].dword[1]
    P[2] := SRC1.xmm[i].dword[2]
    P[3] := SRC1.xmm[i].dword[3]
    C[0] := P[0] ^ T_RND(P[1] ^ P[2] ^ P[3] ^ SRC2.xmm[i].dword[0])
    C[1] := P[1] ^ T_RND(P[2] ^ P[3] ^ C[0] ^ SRC2.xmm[i].dword[1])
    C[2] := P[2] ^ T_RND(P[3] ^ C[0] ^ C[1] ^ SRC2.xmm[i].dword[2])
    C[3] := P[3] ^ T_RND(C[0] ^ C[1] ^ C[2] ^ SRC2.xmm[i].dword[3])
    DEST.xmm[i].dword[0] := C[0]
    DEST.xmm[i].dword[1] := C[1]
    DEST.xmm[i].dword[2] := C[2]
    DEST.xmm[i].dword[3] := C[3]
DEST[MAXVL-1:VL] := 0
Flags Affected
None.
Other Exceptions
See Exceptions Type 6.
WRMSRLIST—Write List of Model Specific Registers
Description
This instruction writes a software provided list of up to 64 MSRs with values loaded from memory.
WRMSRLIST takes three implied input operands:
• RSI: Linear address of a table of MSR addresses (8 bytes per address).
• RDI: Linear address of a table from which MSR data is loaded (8 bytes per MSR).
• RCX: 64-bit bitmask of valid bits for the MSRs. Bit 0 is the valid bit for entry 0 in each table, etc.
For each RCX bit [n] from 0 to 63, if RCX[n] is 1, WRMSRLIST will write the MSR specified at entry [n] in the RSI
table with the value read from memory at the entry [n] in the RDI table.
This implies a maximum of 64 MSRs that can be processed by this instruction. The processor will clear RCX[n] after
it finishes handling that MSR. Similar to repeated string operations, WRMSRLIST supports partial completion for
interrupts, exceptions, and traps. In these situations, the saved RIP will point to the WRMSRLIST instruction,
while the RCX register will have cleared bits corresponding to all completed iterations.
This instruction must be executed at privilege level 0; otherwise, a general protection exception #GP(0) is gener-
ated. This instruction performs MSR specific checks and respects the VMX MSR VM-execution controls in the same
manner as WRMSR.
Like WRMSRNS (and unlike WRMSR), WRMSRLIST is not defined as a serializing instruction (see “Serializing
Instructions” in Chapter 9 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A). This
means that software should not rely on WRMSRLIST to drain all buffered writes to memory before the next instruc-
tion is fetched and executed. For implementation reasons, some processors may serialize when writing certain
MSRs, even though that is not guaranteed.
Like WRMSR and WRMSRNS, WRMSRLIST will ensure that all operations before the WRMSRLIST do not use the new
MSR value and that all operations after the WRMSRLIST do use the new value. An exception to this rule is certain
store-related performance monitor events that only count when those stores are drained to memory. Since
WRMSRLIST is not a serializing instruction, if software is using WRMSRLIST to change the controls for such perfor-
mance monitor events, then stores before the WRMSRLIST may be counted with new MSR values written by
WRMSRLIST. Software can insert the SERIALIZE instruction before the WRMSRLIST if so desired.
Those MSRs that cause a TLB invalidation when they are written via WRMSR (e.g., MTRRs) will also cause the same
TLB invalidation when written by WRMSRLIST.
In places where WRMSR is being used as a proxy for a serializing instruction, a different serializing instruction can
be used (e.g., SERIALIZE).
WRMSRLIST writes MSRs in order, which means the processor will ensure that an MSR in iteration “n” will be written
only after previous iterations (“n-1”). If the older MSR writes had a side effect that affects the behavior of the next
MSR, the processor will ensure that side effect is honored.
The processor is allowed to (but not required to) “load ahead” in the list. Examples:
• Use old memory type or TLB translation for loads from list memory despite an MSR written by a previous
iteration changing MTRR or invalidating TLBs.
• Cause a page fault or EPT violation for a memory access to an entry > “n” in MSR address or data tables,
despite the processor only having read or written “n” MSRs.1
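The per-entry processing order (lowest set RCX bit first, with each bit cleared as its MSR write completes) can be sketched as follows (illustrative model only; it tracks only the iteration order, not the MSR writes themselves):

```python
def wrmsrlist_order(rcx: int):
    """Model of WRMSRLIST iteration: process the lowest set bit of RCX
    (TZCNT), then clear it, until RCX is zero."""
    order = []
    while rcx:
        idx = (rcx & -rcx).bit_length() - 1   # TZCNT(RCX)
        order.append(idx)                      # write MSR table entry [idx]
        rcx &= rcx - 1                         # clear RCX[idx]
    return order
```

For example, RCX = 101001b selects table entries 0, 3, and 5, in that order.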
Operation
WHILE (RCX != 0) {
    MSR_index = TZCNT(RCX)
    MSR_address = mem[RSI + (MSR_index * 8)]
    MSR_data = mem[RDI + (MSR_index * 8)]
    VM exit if specified by VM-execution controls (for specified MSR_address)
    #GP(0) if MSR_address[61:32] != 0
    #GP(0) if MSR_address is not accessible for WRMSR
    #GP(0) if MSR_data has reserved bits set for MSR
    #GP(0) for any other MSR_address specific checks
    WRMSRNS (MSR_address) = MSR_data
    Clear RCX[MSR_index]
    Take any pending interrupts/traps
}
Flags Affected
None.
1. For example, the processor may take a page fault due to a linear address for the 10th entry in the MSR address table despite only
having completed the MSR writes up to entry 5.
WRMSRNS—Non-Serializing Write to Model Specific Register
Description
WRMSRNS is an instruction that behaves exactly like WRMSR, with the only difference being that it is not a
serializing instruction by default.
Writes the contents of registers EDX:EAX into the 64-bit model specific register (MSR) specified in the ECX register.
The contents of the EDX register are copied to the high-order 32 bits of the selected MSR and the contents of the
EAX register are copied to the low-order 32 bits of the MSR. The high-order 32 bits of RAX, RCX, and RDX are
ignored.
This instruction must be executed at privilege level 0 or in real-address mode; otherwise, a general protection
exception #GP(0) is generated.
Unlike WRMSR, WRMSRNS is not defined as a serializing instruction (see “Serializing Instructions” in Chapter 9 of
the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A). This means that software should
not rely on it to drain all buffered writes to memory before the next instruction is fetched and executed. For imple-
mentation reasons, some processors may serialize when writing certain MSRs, even though that is not guaranteed.
Like WRMSR, WRMSRNS will ensure that all operations before it do not use the new MSR value and that all opera-
tions after the WRMSRNS do use the new value. An exception to this rule is certain store related performance
monitor events that only count when those stores are drained to memory. Since WRMSRNS is not a serializing
instruction, if software is using WRMSRNS to change the controls for such performance monitor events, then stores
before the WRMSRNS may be counted with new MSR values written by WRMSRNS. Software can insert the SERIALIZE
instruction before the WRMSRNS if so desired.
Those MSRs that cause a TLB invalidation when they are written via WRMSR (e.g., MTRRs) will also cause the same
TLB invalidation when written by WRMSRNS.
In order to improve performance, software may replace WRMSR with WRMSRNS. In places where WRMSR is being
used as a proxy for a serializing instruction, a different serializing instruction can be used (e.g., SERIALIZE).
Operation
MSR[ECX] := EDX:EAX;
Flags Affected
None.
CHAPTER 3
INTEL® AMX INSTRUCTION SET REFERENCE, A-Z
NOTES
The following Intel® AMX instructions have moved to the Intel® 64 and IA-32 Architectures
Software Developer’s Manual: LDTILECFG, STTILECFG, TDPBF16PS,
TDPBSSD/TDPBSUD/TDPBUSD/TDPBUUD, TILELOADD/TILELOADDT1, TILERELEASE,
TILESTORED, and TILEZERO.
The Intel Advanced Matrix Extensions introductory material and helper functions will be maintained
here, as well as in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, for the
reader’s convenience. For information on Intel AMX and the XSAVE feature set, and recommenda-
tions for system software, see the latest version of the Intel® 64 and IA-32 Architectures Software
Developer’s Manual.
3.1 INTRODUCTION
Intel® Advanced Matrix Extensions (Intel® AMX) is a new 64-bit programming paradigm consisting of two
components: a set of 2-dimensional registers (tiles) representing sub-arrays from a larger 2-dimensional memory
image, and an accelerator able to operate on tiles; the first implementation is called the TMUL (tile matrix
multiply) unit.
An Intel AMX implementation enumerates to the programmer how the tiles can be programmed by providing a
palette of options. Two palettes are supported; palette 0 represents the initialized state, and palette 1 consists of
8 KB of storage spread across 8 tile registers named TMM0..TMM7. Each tile has a maximum size of 16 rows x 64
bytes, (1 KB), however the programmer can configure each tile to smaller dimensions appropriate to their algo-
rithm. The tile dimensions supplied by the programmer (rows and bytes_per_row, i.e., colsb) are metadata that
drives the execution of tile and accelerator instructions. In this way, a single instruction can launch autonomous
multi-cycle execution in the tile and accelerator hardware. The palette value (palette_id) and metadata are held
internally in a tile related control register (TILECFG). The TILECFG contents will be commensurate with that
reported in the palette_table (see “CPUID—CPU Identification” in Chapter 1 for a description of the available
parameters).
Intel AMX is an extensible architecture. New accelerators can be added, or the TMUL accelerator may be enhanced
to provide higher performance. In these cases, the state (TILEDATA) provided by tiles may need to be made larger,
either in one of the metadata dimensions (more rows or colsb) and/or by supporting more tile registers (names).
The extensibility is carried out by adding new palette entries describing the additional state. Since execution is
driven through metadata, an existing Intel AMX binary could take advantage of larger storage sizes and higher
performance TMUL units by selecting the most powerful palette indicated by CPUID and adjusting loop and pointer
updates accordingly.
Figure 3-1 shows a conceptual diagram of the Intel AMX architecture. An Intel architecture host drives the algo-
rithm, the memory blocking, loop indices and pointer arithmetic. Tile loads and stores and accelerator commands
are sent to multi-cycle execution units. Status, if required, is reported back. Intel AMX instructions are synchro-
nous in the Intel architecture instruction stream and the memory loaded and stored by the tile instructions is
coherent with respect to the host’s memory accesses. There are no restrictions on interleaving of Intel architecture
and Intel AMX code or restrictions on the resources the host can use in parallel with Intel AMX (e.g., Intel AVX-
512). There is also no architectural requirement on the Intel architecture compute capability of the Intel architec-
ture host other than it supports 64-bit mode.
[Figure 3-1: Conceptual diagram of the Intel AMX architecture — TILECFG, tile registers tmm0..tmm[n-1], accelerators, and the coherent memory interface.]
Intel AMX instructions use new registers and inherit basic behavior from Intel architecture in the same manner that
Intel SSE and Intel AVX did. Tile instructions include loads and stores using the traditional Intel architecture
register set as pointers. The TMUL instruction set (defined to be CPUID bits AMX-BF16 and AMX-INT8) only
supports reg-reg operations.
TILECFG is programmed using the LDTILECFG instruction. The selected palette defines the available storage and
general configuration while the rest of the memory data specifies the number of rows and column bytes for each
tile. Consistency checks are performed to ensure the TILECFG matches the restrictions of the palette. A General
Protection fault (#GP) is reported if the LDTILECFG fails consistency checks. A successful load of
TILECFG with a palette_id other than 0 is represented in this document with TILES_CONFIGURED = 1. When the
TILECFG is initialized (palette_id = 0), it is represented in the document as TILES_CONFIGURED = 0. Nearly all
Intel AMX instructions will generate a #UD exception if TILES_CONFIGURED is not equal to 1; the exceptions are
those that do TILECFG maintenance: LDTILECFG, STTILECFG and TILERELEASE.
If a tile is configured to contain M rows by N column bytes, LDTILECFG will ensure that the metadata values are
appropriate to the palette (e.g., that M ≤ 16 and N ≤ 64 for palette 1). The four M and N values can all be different
as long as they adhere to the restrictions of the palette. Further dynamic checks are done in the tile and the TMUL
instruction set to deal with cases where a legally configured tile may be inappropriate for the instruction operation.
Tile registers can be set to ‘invalid’ by configuring the rows and colsb to ‘0’.
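The 64-byte configuration block consumed by LDTILECFG and produced by STTILECFG can be modeled as a C structure. The sketch below is illustrative: the byte offsets shown (palette_id at byte 0, start_row at byte 1, then per-tile colsb and rows arrays) follow the commonly documented palette 1 layout and should be confirmed against the LDTILECFG instruction description.

```c
#include <stdint.h>

// Sketch of the 64-byte block loaded by LDTILECFG and stored by STTILECFG.
// Offsets are an illustrative assumption based on the palette 1 layout.
typedef struct {
    uint8_t  palette_id;    // byte 0: selected palette (0 = INIT state)
    uint8_t  start_row;     // byte 1: restart row for interrupted loads/stores
    uint8_t  reserved[14];  // bytes 2-15: must be zero
    uint16_t colsb[16];     // bytes 16-47: bytes per row for tmm0..tmm15
    uint8_t  rows[16];      // bytes 48-63: rows for tmm0..tmm15
} tile_config_t;

// A tile register is 'invalid' when its rows and colsb are configured to 0.
static int tile_is_valid(const tile_config_t *cfg, int t) {
    return cfg->rows[t] != 0 && cfg->colsb[t] != 0;
}
```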
Tile loads and stores are strided accesses from the application memory to packed rows of data. Algorithms are
expressed assuming row major data layout. Column major users should translate the terms according to their
orientation.
TILELOAD* and TILESTORE* instructions are restartable and can handle (up to) 2*rows page faults per instruction.
Restartability is provided by a start_row parameter in the TILECFG register.
The TMUL unit is conceptually a grid of fused multiply-add units able to read and write tiles. The dimensions of the
TMUL unit (tmul_maxk and tmul_maxn) are enumerated similar to the maximum dimensions of the tiles (see
“CPUID—CPU Identification” in Chapter 1 for details).
The matrix multiplications in the TMUL instruction set compute C[M][N] += A[M][K] * B[K][N]. The M, N, and K
values will cause the TMUL instruction set to generate a #UD exception if the dimensions do not match for matrix
multiply or do not match the palette.
In Figure 3-2, the number of rows in tile B matches the K dimension in the matrix multiplication pseudocode. K
dimensions smaller than that enumerated in the TMUL grid are also possible and any additional computation the
TMUL unit can support will not affect the result.
The number of elements specified by colsb of the B matrix is also less than or equal to tmul_maxn. Any remaining
values beyond that specified by the metadata will be set to zero.
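The computation C[M][N] += A[M][K] * B[K][N] can be illustrated with a plain reference loop. The sketch below models TDPBUSD-style int8 data (unsigned bytes from A, signed bytes from B, int32 accumulation); it is a behavioral illustration, not the architectural pseudocode.

```c
#include <stdint.h>

// Reference model of the TMUL computation C[M][N] += A[M][K] * B[K][N] for
// TDPBUSD-style data: each int32 element of C receives 4-way dot products of
// unsigned bytes from A and signed bytes from B. Dimensions are in dword
// elements; A holds m x (4*k) bytes and B holds k x (4*n) bytes.
static void ref_tdpbusd(int m, int n, int k,
                        int32_t *c, const uint8_t *a, const int8_t *b)
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            for (int p = 0; p < k; p++)
                for (int e = 0; e < 4; e++)
                    c[i * n + j] += (int32_t)a[i * 4 * k + 4 * p + e] *
                                    (int32_t)b[p * 4 * n + 4 * j + e];
}
```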
Figure 3-2. TMUL Unit Computation (conceptual: each element of C[M][N] accumulates dot products of rows of A[M][K] with rows B[0][:N] ... B[K-1][:N] of B[K][N])
The XSAVE feature set supports context management of the new state defined for Intel AMX. This support is
described in Section 3.2.
To facilitate handling of tile configuration data, there is a STTILECFG instruction. If the tile configuration is in the
INIT state (TILES_CONFIGURED == 0), then STTILECFG will write 64 bytes of zeros. Otherwise STTILECFG will
store the TILECFG to memory in the format used by LDTILECFG.
LDTILECFG [rax]
// assume some outer loops driving the cache tiling (not shown)
{
    TILELOADD tmm0, [rsi+rdi]    // srcdst, RSI points to C, RDI is strided value
    TILELOADD tmm1, [rsi+rdi+N]  // second tile of C, unrolling in SIMD dimension N
    MOV r14, 0
LOOP:
    TILELOADD tmm2, [r8+r9]      // src2 is strided load of A, reused for 2 TMUL instr.
    TILELOADD tmm3, [r10+r11]    // src1 is strided load of B
    TDPBUSD tmm0, tmm2, tmm3     // update left tile of C
    TILELOADD tmm3, [r10+r11+N]  // src1 loaded with B from next rightmost tile
    TDPBUSD tmm1, tmm2, tmm3     // update right tile of C
    ADD r8, K                    // update pointers by constants known outside of loop
    ADD r10, K*r11
    ADD r14, K
    CMP r14, LIMIT
    JNE LOOP
    TILESTORED [rsi+rdi], tmm0   // store the updated left tile of C
    TILESTORED [rsi+rdi+N], tmm1 // store the updated right tile of C
} // end of outer loops
TILERELEASE                      // return tile state to INIT
define palette_table[id]:
uint16_t total_tile_bytes
uint16_t bytes_per_tile
uint16_t bytes_per_row
uint16_t max_names
uint16_t max_rows
define zero_tilecfg_start():
tilecfg.start_row := 0
define zero_all_tile_data():
if XCR0[TILEDATA]:
b := CPUID(0xD,TILEDATA).EAX // size of feature
for j in 0 ... b:
TILEDATA.byte[j] := 0
define xcr0_supports_palette(palette_id):
if palette_id == 0:
return 1
elif palette_id == 1:
if XCR0[TILECFG] and XCR0[TILEDATA]:
return 1
return 0
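The palette_table fields above are reported through the CPUID tile-information leaf. The helper below unpacks them from raw register values; the field-to-register mapping shown (leaf 1DH, sub-leaf = palette_id, with 16-bit fields packed into EAX/EBX/ECX) is an assumption for illustration and should be checked against "CPUID—CPU Identification" in Chapter 1.

```c
#include <stdint.h>

// Mirror of the palette_table pseudocode structure.
typedef struct {
    uint16_t total_tile_bytes;
    uint16_t bytes_per_tile;
    uint16_t bytes_per_row;
    uint16_t max_names;
    uint16_t max_rows;
} palette_info_t;

// Unpack palette parameters from CPUID register values (assumed layout:
// EAX[15:0]=total_tile_bytes, EAX[31:16]=bytes_per_tile,
// EBX[15:0]=bytes_per_row, EBX[31:16]=max_names, ECX[15:0]=max_rows).
static palette_info_t decode_palette(uint32_t eax, uint32_t ebx, uint32_t ecx) {
    palette_info_t p;
    p.total_tile_bytes = (uint16_t)(eax & 0xFFFF);
    p.bytes_per_tile   = (uint16_t)(eax >> 16);
    p.bytes_per_row    = (uint16_t)(ebx & 0xFFFF);
    p.max_names        = (uint16_t)(ebx >> 16);
    p.max_rows         = (uint16_t)(ecx & 0xFFFF);
    return p;
}
```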
3.5 NOTATION
Instructions described in this chapter follow the general documentation convention established in Intel® 64 and IA-
32 Architectures Software Developer’s Manual Volume 2A. Additionally, Intel® Advanced Matrix Extensions use
notation conventions as described below.
In the instruction encoding boxes, sibmem is used to denote an encoding where a MODRM byte and SIB byte are
used to indicate a memory operation where the base and displacement are used to point to memory, and the index
register (if present) is used to denote a stride between memory rows. The index register is scaled by the sib.scale
field as usual. The base register is added to the displacement, if present.
In the instruction encoding, the MODRM byte is represented several ways depending on the role it plays. The
MODRM byte has 3 fields: 2-bit MODRM.MOD field, a 3-bit MODRM.REG field and a 3-bit MODRM.RM field. When all
bits of the MODRM byte have fixed values for an instruction, the 2-hex nibble value of that byte is presented after
the opcode in the encoding boxes on the instruction description pages. When only some fields of the MODRM byte
must contain fixed values, those values are specified as follows:
• If only the MODRM.MOD must be 0b11, and MODRM.REG and MODRM.RM fields are unrestricted, this is
denoted as 11:rrr:bbb. The rrr correspond to the 3-bits of the MODRM.REG field and the bbb correspond to
the 3-bits of the MODRM.RM field.
• If the MODRM.MOD field is constrained to be a value other than 0b11, i.e., it must be one of 0b00, 0b01, or
0b10, then we use the notation !(11).
• If the MODRM.REG field had a specific required value, e.g., 0b101, that would be denoted as mm:101:bbb.
NOTE
Historically the Intel® 64 and IA-32 Architectures Software Developer’s Manual only specified the
MODRM.REG field restrictions with the notation /0 ... /7 and did not specify restrictions on the
MODRM.MOD and MODRM.RM fields in the encoding boxes.
Opcode/Instruction: VEX.128.66.0F38.W0 6C 11:rrr:bbb TCMMIMFP16PS tmm1, tmm2, tmm3
Op/En: A. 64/32 bit Mode Support: V/N.E. CPUID Feature Flag: AMX-COMPLEX.
Description: Matrix multiply complex elements from tmm2 and tmm3, and accumulate the imaginary part into
single precision elements in tmm1.
Opcode/Instruction: VEX.128.NP.0F38.W0 6C 11:rrr:bbb TCMMRLFP16PS tmm1, tmm2, tmm3
Op/En: A. 64/32 bit Mode Support: V/N.E. CPUID Feature Flag: AMX-COMPLEX.
Description: Matrix multiply complex elements from tmm2 and tmm3, and accumulate the real part into single
precision elements in tmm1.
Description
These instructions perform matrix multiplication of two tiles containing complex elements and accumulate the
results into a packed single precision tile. Each dword element in input tiles tmm2 and tmm3 is interpreted as a
complex number with FP16 real part and FP16 imaginary part.
TCMMRLFP16PS calculates the real part of the result. For each possible combination of (row of tmm2, column of
tmm3), the instruction performs a set of multiplication and accumulations on all corresponding complex numbers
(one from tmm2 and one from tmm3). The real part of the tmm2 element is multiplied with the real part of the
corresponding tmm3 element, and the negated imaginary part of the tmm2 element is multiplied with the imagi-
nary part of the corresponding tmm3 elements. The two accumulated results are added, and then accumulated into
the corresponding row and column of tmm1.
TCMMIMFP16PS calculates the imaginary part of the result. For each possible combination of (row of tmm2, column
of tmm3), the instruction performs a set of multiplication and accumulations on all corresponding complex
numbers (one from tmm2 and one from tmm3). The imaginary part of the tmm2 element is multiplied with the real
part of the corresponding tmm3 element, and the real part of the tmm2 element is multiplied with the imaginary
part of the corresponding tmm3 elements. The two accumulated results are added, and then accumulated into the
corresponding row and column of tmm1.
“Round to nearest even” rounding mode is used when doing each accumulation of the FMA. Output denormals are
always flushed to zero but FP16 input denormals are not treated as zero.
MXCSR is not consulted nor updated.
Any attempt to execute these instructions inside an Intel TSX transaction will result in a transaction abort.
Operation
TCMMIMFP16PS tsrcdest, tsrc1, tsrc2
// C = m x n (tsrcdest), A = m x k (tsrc1), B = k x n (tsrc2)
zero_upper_rows(tsrcdest, tsrcdest.rows)
zero_tilecfg_start()
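The per-element arithmetic described above can be sketched as follows. This is a behavioral model using float pairs (standing in for converted FP16 values), not the architectural pseudocode; rounding and denormal behavior are not modeled.

```c
// One complex element: FP16 real and imaginary parts, modeled as floats.
typedef struct { float re, im; } cfloat_t;

// Accumulate one (row of tsrc1, column of tsrc2) combination over k complex
// elements, producing the contributions of TCMMRLFP16PS (real part) and
// TCMMIMFP16PS (imaginary part).
static void tcmm_accumulate(float *c_re, float *c_im,
                            const cfloat_t *a_row,  // k elements from tsrc1
                            const cfloat_t *b_col,  // k elements from tsrc2
                            int k)
{
    for (int p = 0; p < k; p++) {
        // TCMMRLFP16PS: a.re * b.re plus (-a.im) * b.im
        *c_re += a_row[p].re * b_col[p].re - a_row[p].im * b_col[p].im;
        // TCMMIMFP16PS: a.im * b.re plus a.re * b.im
        *c_im += a_row[p].im * b_col[p].re + a_row[p].re * b_col[p].im;
    }
}
```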
Flags Affected
None.
Exceptions
AMX-E4; see Section 3.6, “Exception Classes” for details.
TDPFP16PS—Dot Product of FP16 Tiles Accumulated into Packed Single Precision Tile
Opcode/Instruction: VEX.128.F2.0F38.W0 5C 11:rrr:bbb TDPFP16PS tmm1, tmm2, tmm3
Op/En: A. 64/32 bit Mode Support: V/N.E. CPUID Feature Flag: AMX-FP16.
Description: Matrix multiply FP16 elements from tmm2 and tmm3, and accumulate the packed single precision
elements in tmm1.
Description
This instruction performs a set of SIMD dot-products of two FP16 elements and accumulates the results into a
packed single precision tile. Each dword element in input tiles tmm2 and tmm3 is interpreted as a FP16 pair. For
each possible combination of (row of tmm2, column of tmm3), the instruction performs a set of SIMD dot-products
on all corresponding FP16 pairs (one pair from tmm2 and one pair from tmm3), adds the results of those dot-prod-
ucts, and then accumulates the result into the corresponding row and column of tmm1.
“Round to nearest even” rounding mode is used when doing each accumulation of the Fused Multiply-Add (FMA).
Output FP32 denormals are always flushed to zero. Input FP16 denormals are always handled and not treated as
zero.
MXCSR is not consulted nor updated.
Any attempt to execute the TDPFP16PS instruction inside an Intel TSX transaction will result in a transaction abort.
Operation
TDPFP16PS tsrcdest, tsrc1, tsrc2
// C = m x n (tsrcdest), A = m x k (tsrc1), B = k x n (tsrc2)
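A behavioral sketch of the dot-product accumulation described above, using floats standing in for converted FP16 values (rounding and denormal handling are not modeled):

```c
// Accumulate one element of the FP32 destination tile: each of the k dword
// elements contributes a 2-way dot product of its FP16 pair.
static void tdpfp16ps_accumulate(float *c,            // one element of tsrcdest
                                 const float a[][2],  // k pairs from tsrc1
                                 const float b[][2],  // k pairs from tsrc2
                                 int k)
{
    for (int p = 0; p < k; p++)
        *c += a[p][0] * b[p][0] + a[p][1] * b[p][1];
}
```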
Flags Affected
None.
Exceptions
AMX-E4; see Section 3.6, “Exception Classes” for details.
CHAPTER 4
UC-LOCK DISABLE
NOTE
No processor will both set IA32_CORE_CAPABILITIES[4] and enumerate
CPUID.(EAX=07H, ECX=2):EDX[bit 6] as 1.
If a processor enumerates support for UC-lock disable (in either way), software can enable UC-lock disable by
setting MSR_MEMORY_CTRL[28]. When this bit is set, a locked access using a memory type other than WB causes
a fault. The locked access does not occur. The specific fault that occurs depends on how UC-lock disable is enumer-
ated:
• If IA32_CORE_CAPABILITIES[4] is read as 1, the UC lock results in a general-protection exception (#GP) with
a zero error code.
• If CPUID.(EAX=07H, ECX=2):EDX[bit 6] is enumerated as 1, the UC lock results in an #AC with an error code
with value 4.
1. The term “UC lock” is used because the most common situation regards accesses to UC memory. Despite the name, locked accesses
to WC, WP, and WT memory also cause bus locks.
2. Other alignment-check exceptions occur only if CR0.AM = 1, EFLAGS.AC = 1, and CPL = 3. The alignment-check exceptions resulting
from split-lock disable may occur even if CR0.AM = 0, EFLAGS.AC = 0, or CPL < 3.
CHAPTER 5
INTEL® RESOURCE DIRECTOR TECHNOLOGY FEATURE UPDATES
Intel® Resource Director Technology (Intel® RDT) provides a number of monitoring and control capabilities for
shared resources in multiprocessor systems. This chapter covers updates to the feature that will be available in
future Intel processors, starting with brief descriptions followed by technical details.
5.1.1 Intel® RDT on the 3rd generation Intel® Xeon® Scalable Processor Family
The 3rd generation Intel® Xeon® Scalable Processor Family based on Ice Lake Server microarchitecture adds the
following Intel RDT enhancements:
• 32-bit MBM counters (vs. 24-bit in prior generations), and new CPUID enumeration capabilities for counter
width.
• Second generation Memory Bandwidth Allocation (MBA): Introduces an advanced hardware feedback controller
that operates at microsecond timescales, and software-selectable min/max throttling value resolution capabil-
ities. Baseline descriptions of the MBA “throttling values” applied to the threads running on a core are described
in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3B.
Second generation MBA capabilities also add a work-conserving feature in which applications that frequently
access the L3 cache may be throttled by a lesser amount until they exceed the user-specified memory
bandwidth usage threshold, enhancing system throughput and efficiency, in addition to adding more precise
calibration and controls. Certain BIOS implementations may further aid flexibility by providing selectable
calibration profiles for various usages.
• 15 MBA / L3 CAT CLOS: Improved feature consistency and interface flexibility. The previous generation of
processors supported 16 L3 CAT Class of Service tags (CLOS), but only 8 MBA CLOS. The changes in
enumerated CLOS counts per-feature are enumerated in the processor as before, via CPUID.
5.1.2 Intel® RDT on Intel Atom® Processors, Including the P5000 Series
Intel Atom® processors, such as the P5000 series, based on Tremont microarchitecture add the following Intel RDT
enhancements:
• L2 CAT/CDP: L2 CAT/CDP and L3 CAT/CDP may be enabled simultaneously on supported processors. As these
are existing features defined in the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume
3B, no new software enabling should be required.
• Supported processors match the capabilities of the 3rd generation Intel Xeon Scalable Processor Family based
on Ice Lake Server microarchitecture, including traditional Intel RDT uncore features: L3 CAT/CDP, CMT, MBM,
and second-generation MBA. As these features are architectural, no new software enabling is required. Related
enhancements in Intel Xeon processors also carry forward to supported Intel Atom processors, with consistent
software enabling. These features include 32-bit MBM counters, second generation MBA, and 15 MBA/L3 CAT
CLOS.
5.1.3 Intel® RDT in Future Processors Based on Sapphire Rapids Server Microarchitecture
Processors based on Sapphire Rapids Server microarchitecture add the following Intel RDT enhancements:
• STLB QoS: Capability to manage the second-level translation lookaside buffer structure within the core (STLB)
in a manner quite similar to CAT (CLOS-based, with capacity masks). This may enable software that is sensitive
to TLB performance to achieve better determinism. This is a model-specific feature due to the microarchitec-
tural nature of the STLB structure. The code regions of interest should be manually accessed.
5.2.2 Augmented MBM Enumeration and MSR Interfaces for Extensible Counter Width
A field is added to CPUID to enumerate the MBM counter width in platforms that support the extensible MBM
counter width feature.
Before this point, CPUID.0F.[ECX=1]:EAX was reserved. This CPUID output register (EAX) is redefined to provide
two new fields:
• Encode counter width as offset from 24b in bits[7:0].
• Enumeration of the presence of an overflow bit in the IA32_QM_CTR MSR via EAX bit[8].
See “CPUID—CPU Identification” in Chapter 1 for details.
In EAX bits 7:0, the counter width is encoded as an offset from 24b. A value of zero in this field means 24-bit
counters are supported. A value of 8 indicates that 32-bit counters are supported, as in the 3rd generation Intel
Xeon Scalable Processor Family.
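A minimal decode of this register might look like the following sketch:

```c
#include <stdint.h>

// Decode CPUID.0F.[ECX=1]:EAX for the extensible MBM counter width:
// bits 7:0 encode the width as an offset from 24 bits; bit 8 enumerates
// the presence of the overflow bit in IA32_QM_CTR.
static unsigned mbm_counter_width(uint32_t eax) {
    return 24 + (eax & 0xFF);   // 0 -> 24-bit counters, 8 -> 32-bit counters
}

static int mbm_has_overflow_bit(uint32_t eax) {
    return (eax >> 8) & 1;      // IA32_QM_CTR bit 61 supported if set
}
```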
With the addition of this enumerable counter width, the requirement that software poll at ≥ 1Hz is removed. Soft-
ware may poll at a varying rate with reduced risk of rollover, and under typical conditions rollover is likely to require
hundreds of seconds (though this value is not explicitly specified and may vary and decrease in future processor
generations as memory bandwidths increase). If software seeks to ensure that rollover does not occur more than
once between samples, then sampling at ≥ 1Hz while consuming the enumerated counter widths' worth of data will
provide this guarantee, for a specific platform and counter width, under all conditions.
Software that uses the MBM event retrieval MSR interface should be updated to comprehend this new format,
which enables up to 62-bit MBM counters to be provided by future platforms. Higher-level software that consumes
the resulting bandwidth values is not expected to be affected.
An overflow bit is defined in the IA32_QM_CTR MSR, bit 61, if CPUID.0F.[ECX=1]:EAX[bit 8] is set. This rollover bit
will be set on overflow of the MBM counters and reset upon read. Current processors do not support this capability.
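Given the enumerated width, software can compute rollover-safe bandwidth deltas with modular arithmetic. A sketch, assuming at most one rollover between samples:

```c
#include <stdint.h>

// Rollover-safe delta between two samples of an MBM counter of the
// enumerated width: modular subtraction absorbs a single wraparound.
static uint64_t mbm_delta(uint64_t prev, uint64_t cur, unsigned width) {
    uint64_t mask = (width >= 64) ? ~0ull : ((1ull << width) - 1);
    return (cur - prev) & mask;
}
```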
The second generation MBA implementation is shown in Figure 5-1. The feature now operates through the use of
an advanced new hardware controller and feedback mechanism, which allows automated hardware monitoring and
control around the user-provided delay value set point. This set point and associated throttling value infrastructure
remains unchanged from prior generation MBA, preserving software compatibility.
MBA enhancements, in addition to the new hardware controller, include:
1. Configurable delay selection across threads.
• MBA 1.0 implementation statically picks the max MBA Throttling Level (MBAThrotLvl) across the threads
running on a core (by calculating value = max(MBAThrotLvl(CLOS[thread0]),
MBAThrotLvl(CLOS[thread1]))).
• Software may have the option to pick either maximum or minimum delay to be resolved and applied
across the threads; maximum value remains the default.
2. Increasing CLOSIDs from 8 to 15.
• Previous generations of microarchitecture provided 8 CLOS tags for MBA.
• The 3rd generation Intel Xeon Scalable Processor Family and related Intel Atom processors, such as the
P5000 Series, increase this value to 15 (also consistent with L3 CAT).
Note that bit[0] for min/max configuration is supported in second generation MBA, but is removed in third genera-
tion MBA when the controller logic becomes capable of managing throttling values on a per-logical-processor basis.
The transient nature of this enhancement is why the min/max control remains model-specific.
To enumerate and manage support for the model-specific min/max feature, software may use processor
family/model/stepping to match supported products, then CPUID to later detect enhanced third generation MBA
support.
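The per-core delay resolution described above can be sketched as follows (illustrative helper; the names are hypothetical). MBA 1.0 always applies the maximum throttling level across a core's threads; the model-specific min/max control lets software select the minimum instead.

```c
// Resolve the MBA throttling level applied to a core from the levels of its
// two SMT threads. select_min models the model-specific min/max control bit;
// maximum remains the default, as in MBA 1.0.
static unsigned mba_resolve(unsigned thr0_level, unsigned thr1_level,
                            int select_min)
{
    if (select_min)
        return thr0_level < thr1_level ? thr0_level : thr1_level;
    return thr0_level > thr1_level ? thr0_level : thr1_level;  // default: max
}
```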
CHAPTER 6
LINEAR ADDRESS MASKING (LAM)
This chapter describes a new feature called linear-address masking (LAM). LAM modifies the checking that is
applied to 64-bit linear addresses, allowing software to use the untranslated address bits for metadata.
In 64-bit mode, linear addresses have 64 bits and are translated either with 4-level paging, which translates the low
48 bits of each linear address, or with 5-level paging, which translates 57 bits. The upper linear-address bits are
reserved through the concept of canonicality. A linear address is 48-bit canonical if bits 63:47 of the address are
identical; it is 57-bit canonical if bits 63:56 are identical. (Clearly, any linear address that is 48-bit canonical is also
57-bit canonical.) When 4-level paging is active, the processor requires all linear addresses used to access memory
to be 48-bit canonical; similarly, 5-level paging ensures that all linear addresses are 57-bit canonical.
Software usages that associate metadata with a pointer might benefit from being able to place metadata in the
upper (untranslated) bits of the pointer itself. However, the canonicality enforcement mentioned earlier implies
that software would have to mask the metadata bits in a pointer (making it canonical) before using it as a linear
address to access memory. LAM allows software to use pointers with metadata without having to mask the meta-
data bits. With LAM enabled, the processor masks the metadata bits in a pointer before using it as a linear address
to access memory.
LAM is supported only in 64-bit mode and applies only to addresses used for data accesses. LAM does not apply to
addresses used for instruction fetches or to those being loaded into the RIP register (e.g., as targets of jump and
call instructions).
6.2 TREATMENT OF DATA ACCESSES WITH LAM ACTIVE FOR USER POINTERS
Recall that, without LAM, canonicality checks are defined so that 4-level paging requires bits 63:47 of each pointer
to be identical, while 5-level paging requires bits 63:56 to be identical. LAM allows some of these bits to be used as
metadata by modifying canonicality checking.
When LAM48 is enabled for user pointers (see Section 6.1), the processor allows bits 62:48 of a user pointer to be
used as metadata. Regardless of the paging mode, the processor performs a modified canonicality check that
enforces that bit 47 of the pointer matches bit 63. As illustrated in Figure 6-1, bits 62:48 are not checked and are
thus available for software metadata. After this modified canonicality check is performed, bits 62:48 are masked by
sign-extending the value of bit 47 (0), and the resulting (48-bit canonical) address is then passed on for translation
by paging.
(Note that, without LAM, canonicality checking with 5-level paging does not apply to bit 47 of a user pointer;
when LAM48 is enabled for user pointers, bit 47 of a user pointer must be 0. Note also that linear-address
bits 56:48 are translated by 5-level paging but not by 4-level paging. When LAM48 is enabled for user pointers,
these bits are always 0 in any linear address derived from a user pointer: bits 56:48 of the pointer contained
metadata, while bit 47 is required to be 0.)
Figure 6-1. Canonicality Check When LAM48 is Enabled for User Pointers
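The modified canonicality check and masking can be modeled in C. The sketch below is illustrative, not architectural pseudocode; the same sign-extension pattern applies in the other LAM modes with bit 56 in place of bit 47 (and with bit 47 or 56 equal to 1 for supervisor pointers).

```c
#include <stdint.h>

// LAM48 modified canonicality check: bit 47 of the pointer must match bit 63.
static int lam48_canonical(uint64_t ptr) {
    return ((ptr >> 47) & 1) == ((ptr >> 63) & 1);
}

// LAM48 masking: bits 62:48 (the metadata) are replaced by sign-extending
// bit 47, yielding a 48-bit canonical address for translation by paging.
static uint64_t lam48_mask(uint64_t ptr) {
    return (uint64_t)(((int64_t)ptr << 16) >> 16);  // arithmetic shift replicates bit 47
}
```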
When LAM57 is enabled for user pointers, the processor allows bits 62:57 of a user pointer to be used as metadata.
With 5-level paging, the processor performs a modified canonicality check that enforces only that bit 56 of the
pointer matches bit 63. As illustrated in Figure 6-2, bits 62:57 are not checked and are thus available for software
metadata. After this modified canonicality check is performed, bits 62:57 are masked by sign-extending the value
of bit 56 (0), and the resulting (57-bit canonical) address is then passed on for translation by 5-level paging.
Figure 6-2. Canonicality Check When LAM57 is Enabled for User Pointers with 5-Level Paging
When LAM57 is enabled for user pointers with 4-level paging, the processor performs a modified canonicality check
that enforces only that bits 56:47 of a user pointer match bit 63. As illustrated in Figure 6-3, bits 62:57 are not
checked and are thus available for software metadata. After this modified canonicality check is performed, bits
62:57 are masked by sign-extending the value of bit 56 (0), and the resulting (48-bit canonical) address is then
passed on for translation by 4-level paging.
Figure 6-3. Canonicality Check When LAM57 is Enabled for User Pointers with 4-Level Paging
Figure 6-4. Canonicality Check When LAM57 is Enabled for Supervisor Pointers with 5-Level Paging
When LAM48 is enabled for supervisor pointers (4-level paging), the processor performs a modified canonicality
check that enforces only that bit 47 of a supervisor pointer matches bit 63. As illustrated in Figure 6-5, bits 62:48
are not checked and are thus available for software metadata. After this modified canonicality check is performed,
bits 62:48 are masked by sign-extending the value of bit 47 (1), and the resulting (48-bit canonical) address is
then passed on for translation by 4-level paging.
Figure 6-5. Canonicality Check When LAM48 is Enabled for Supervisor Pointers with 4-Level Paging
• ATTRIBUTES.LAM_U48 (bit 9) - Activate LAM for user data pointers and use of bits 62:48 as masked metadata
in enclave mode. This bit can be set if CPUID.(EAX=12H, ECX=01H):EAX[9] is 1.
• ATTRIBUTES.LAM_U57 (bit 8) - Activate LAM for user data pointers and use of bits 62:57 as masked metadata
in enclave mode. This bit can be set if CPUID.(EAX=12H, ECX=01H):EAX[8] is 1.
ECREATE causes #GP(0) if the ATTRIBUTES.LAM_U48 bit is 1 and CPUID.(EAX=12H, ECX=01H):EAX[9] is 0, or if
the ATTRIBUTES.LAM_U57 bit is 1 and CPUID.(EAX=12H, ECX=01H):EAX[8] is 0.
During enclave execution, accesses using linear addresses are treated as if CR3.LAM_U48 =
SECS.ATTRIBUTES.LAM_U48, CR3.LAM_U57 = SECS.ATTRIBUTES.LAM_U57, and CR3.LAM_SUP = 0. The actual
value of CR3 is not changed. This implies that, during enclave execution, if SECS.ATTRIBUTES.LAM_U57 = 1,
LAM57 is enabled for user pointers; if SECS.ATTRIBUTES.LAM_U57 = 0 and
SECS.ATTRIBUTES.LAM_U48 = 1, then LAM48 is enabled for user pointers; and if SECS.ATTRIBUTES.LAM_U57 =
SECS.ATTRIBUTES.LAM_U48 = 0, LAM is not enabled for user pointers.
When in enclave mode, supervisor data pointers are not subject to any masking.
The following ENCLU leaf functions check for linear addresses to be within the ELRANGE. When LAM is active, this
check is performed on the linear addresses that result from masking metadata bits in user pointers used by the leaf
functions.
• EACCEPT
• EACCEPTCOPY
• EGETKEY
• EMODPE
• EREPORT
The following linear address fields in the Intel SGX data structures hold linear addresses that are either loaded into
the EPCM or are written out from the EPCM and do not contain any metadata.
• SECS.BASEADDR
• PAGEINFO.LINADDR
CHAPTER 7
CODE PREFETCH INSTRUCTION UPDATES
Description
Fetches the line of data or code (instructions’ bytes) from memory that contains the byte specified with the source
operand to a location in the cache hierarchy specified by a locality hint:
• T0 (temporal data)—prefetch data into all levels of the cache hierarchy.
• T1 (temporal data with respect to first level cache misses)—prefetch data into level 2 cache and higher.
• T2 (temporal data with respect to second level cache misses)—prefetch data into level 3 cache and higher, or
an implementation-specific choice.
• NTA (non-temporal data with respect to all cache levels)—prefetch data into non-temporal cache structure and
into a location close to the processor, minimizing cache pollution.
• IT0 (temporal code)—prefetch code into all levels of the cache hierarchy.
• IT1 (temporal code with respect to first level cache misses)—prefetch code into all but the first-level of the
cache hierarchy.
The source operand is a byte memory location. (The locality hints are encoded into the machine level instruction
using bits 3 through 5 of the ModR/M byte.) Some locality hints may prefetch only for RIP-relative memory
addresses; see additional details below. The address to prefetch is NextRIP + 32-bit displacement, where NextRIP
is the first byte of the instruction that follows the prefetch instruction itself.
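The target computation described above can be sketched as:

```c
#include <stdint.h>

// Prefetch target for the RIP-relative form: NextRIP plus the sign-extended
// 32-bit displacement, where NextRIP is the address of the first byte after
// the prefetch instruction itself.
static uint64_t prefetchit_target(uint64_t insn_addr, unsigned insn_len,
                                  int32_t disp32)
{
    uint64_t next_rip = insn_addr + insn_len;
    return next_rip + (uint64_t)(int64_t)disp32;  // sign-extend the displacement
}
```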
If the line selected is already present in the cache hierarchy at a level closer to the processor, no data movement
occurs. Prefetches from uncacheable or WC memory are ignored.
The PREFETCHh instruction is merely a hint and does not affect program behavior. If executed, this instruction
moves data closer to the processor in anticipation of future use.
The implementation of prefetch locality hints is implementation-dependent, and can be overloaded or ignored by a
processor implementation. The amount of data or code lines prefetched is also processor implementation-depen-
dent. It will, however, be a minimum of 32 bytes. Additional details of the implementation-dependent locality hints
are described in Section 7.4 of Intel® 64 and IA-32 Architectures Optimization Reference Manual.
It should be noted that processors are free to speculatively fetch and cache data from system memory regions that
are assigned a memory-type that permits speculative reads (that is, the WB, WC, and WT memory types). A
PREFETCHh instruction is considered a hint to this speculative behavior. Because this speculative fetching can occur
at any time and is not tied to instruction execution, a PREFETCHh instruction is not ordered with respect to the
fence instructions (MFENCE, SFENCE, and LFENCE) or locked memory references. A PREFETCHh instruction is also
unordered with respect to CLFLUSH and CLFLUSHOPT instructions, other PREFETCHh instructions, or any other
general instruction. It is ordered with respect to serializing instructions such as CPUID, WRMSR, OUT, and MOV CR.
PREFETCHIT0/1 apply only when in 64-bit mode with RIP-relative addressing; they behave as NOPs otherwise. For
optimal performance, the addresses used with these instructions should be the starting byte of a real instruction.
PREFETCHIT0/1 instructions are enumerated by CPUID.(EAX=07H, ECX=01H).EDX.PREFETCHI[bit 14]. The encod-
ings behave as NOPs in processors that do not enumerate these instructions.
Operation
FETCH (m8);
Numeric Exceptions
None.
CHAPTER 8
NEXT GENERATION PERFORMANCE MONITORING UNIT (PMU)
The next generation Performance Monitoring Unit (PMU)1 offers additional enhancements beyond what is available
in both the 12th generation Intel® Core™ processor based on Alder Lake performance hybrid architecture and the
13th generation Intel® Core™ processor:
• Timed PEBS
• New True-View Enumeration Architecture
— General-Purpose Counters
— Fixed-Function Counters
— Architectural Performance Monitoring Events
• Topdown Microarchitecture Analysis (TMA) Level 1 Architectural Performance Monitoring Events
— Non-Architectural Capabilities
— Counters Snapshotting and PEBS Format 6
NOTE
CPUID leaf 0AH continues to report useful attributes, such as architectural performance monitoring
version ID and counter width (# bits).
1. The next generation PMU incorporates PEBS_FMT=5h as described in Section 20.6.2.4.2 of the Intel® 64 and IA-32 Architectures
Software Developer’s Manual, Volume 3B.
NOTE
Locating a PMU feature under CPUID leaf 023H alerts software that the features may not be
supported uniformly across all logical processors.
view. That is, some IA32_PERF_CAPABILITIES fields report the actual support of the individual logical processor
the RDMSR instruction was executed on. The IA32_PERF_CAPABILITIES fields are shown in Table 8-1.
PEBS Arch Regs (bit 7, Common): Indicates that the PEBS assist saves architectural registers.
PEBS Baseline (bit 14, Common): See Section 20.8 in the Intel® 64 and IA-32 Architectures Software
Developer’s Manual, Volume 3B.
Perf Metrics Available (bit 15, True-View): If set, indicates that the architecture provides built-in support for
TMA L1 metrics through the PERF_METRICS MSR.
PEBS Output PT Available (bit 16, True-View): PEBS output via Intel® Processor Trace.
NOTES:
1. For more information on bit 17, see Section 8.3.1.
The Retire Latency field reports the number of Unhalted Core Cycles between the retirement of the current instruc-
tion (as indicated by the Instruction Pointer field of the PEBS record) and the retirement of the prior instruction. All
ones are reported when the number exceeds 16 bits.
Processors that support this enhancement set a new bit: IA32_PERF_CAPABILITIES.PEBS_TIMING_INFO[bit 17].
NOTE
Timed PEBS is not supported when PEBS is programmed on fixed-function counter 0. The Retire
Latency field of such a record is undefined.
MSR_PEBS_DATA_CFG (Address: 3F2H) layout: Memory Info (bit 0), GPRs (bit 1), XMMs (bit 2), LBRs (bit 3),
Counters (bit 4), Metrics (bit 5), LBR Entries (bits 31:24), Include_PMCx (bits 47:32),
Include_Fixed_CTRx (bits 55:48).
Memory Info (bit 0) — Setting this bit will capture memory information such as the linear address, data source,
and latency of the memory access in the PEBS record. [PEBS_FMT=4 and later]
GPRs (bit 1) — Setting this bit will capture the contents of the general-purpose registers in the PEBS record.
[PEBS_FMT=4 and later]
XMMs (bit 2) — Setting this bit will capture the contents of the XMM registers in the PEBS record.
[PEBS_FMT=4 and later]
LBRs (bit 3) — Setting this bit will capture LBR TO, FROM, and INFO in the PEBS record. [PEBS_FMT=4 and later]
Counters (bit 4) — Setting this bit will allow recording of the IA32_PMCx MSRs and the IA32_FIXED_CTRx
counters if the Include_PMCx and Include_Fixed_CTRx bits are also set. [PEBS_FMT=6]
Metrics (bit 5) — Setting this bit will allow recording and clearing of the MSR_PERF_METRICS register (when the
Include_Fixed_CTR3 bit is also set). [PEBS_FMT=6 && PERF_METRICS_AVAILABLE=1]
LBR Entries (bits 31:24) — Set the field to the desired number of entries minus 1. For example, if the
LBR_Entries field is 0, a single entry will be included in the record. To include 32 LBR entries, set the
LBR_Entries field to 31 (0x1F). To ensure all PEBS records are 16-byte aligned, it is recommended to select an
even number of LBR entries (programmed into LBR_Entries as an odd number). [PEBS_FMT=4 and later]
Include_PMCx (bits 47:32) — A bit mask of the programmable counters that are allowed to be captured into the
PEBS record. Note that only bits that match reporting of CPUID.(EAX=23H, ECX=01H):EAX are writable.
[PEBS_FMT=6]
Include_Fixed_CTRx (bits 55:48) — A bit mask of the fixed-function counters that are allowed to be captured
into the PEBS record. Note that only bits that match reporting of CPUID.(EAX=23H, ECX=01H):EBX are writable.
[PEBS_FMT=6]
NOTES:
1. A write to the MSR will be ignored when IA32_MISC_ENABLE.PERFMON_AVAILABLE is zero (default).
2. These fields are available starting with the IA32_PERF_CAPABILITIES.PEBS_FMT of 6. Additionally, these fields are also available in a
subset of processors with a CPUID signature value of DisplayFamily_DisplayModel 06_C5H or 06_C6H (though they report
IA32_PERF_CAPABILITIES.PEBS_FMT as 5).
3. Writing to the reserved bits will generate a general-protection exception #GP(0).
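As a hedged sketch of programming the fields above, the following C fragment composes an MSR_PEBS_DATA_CFG value; the macro names are illustrative inventions, not from any Intel header, and actually writing the MSR requires WRMSR at CPL 0.

```c
#include <stdint.h>

/* Illustrative bit positions taken from the MSR_PEBS_DATA_CFG field
   descriptions above; the macro names themselves are made up here. */
#define PEBS_DATA_CFG_MEMINFO    (1ULL << 0)
#define PEBS_DATA_CFG_GPRS       (1ULL << 1)
#define PEBS_DATA_CFG_XMMS       (1ULL << 2)
#define PEBS_DATA_CFG_LBRS       (1ULL << 3)
#define PEBS_DATA_CFG_COUNTERS   (1ULL << 4)
#define PEBS_DATA_CFG_METRICS    (1ULL << 5)
#define PEBS_DATA_CFG_LBR_SHIFT  24  /* LBR Entries field           */
#define PEBS_DATA_CFG_PMC_SHIFT  32  /* Include_PMCx bit mask       */
#define PEBS_DATA_CFG_FIX_SHIFT  48  /* Include_Fixed_CTRx bit mask */

/* Compose a value requesting GPRs, LBRs with 32 entries (field value
   31), and capture of IA32_PMC0/IA32_PMC1 plus IA32_FIXED_CTR0. */
static uint64_t pebs_data_cfg_example(void)
{
    uint64_t cfg = PEBS_DATA_CFG_GPRS | PEBS_DATA_CFG_LBRS |
                   PEBS_DATA_CFG_COUNTERS;
    cfg |= 31ULL << PEBS_DATA_CFG_LBR_SHIFT;   /* 32 LBR entries */
    cfg |= 0x3ULL << PEBS_DATA_CFG_PMC_SHIFT;  /* PMC0 and PMC1  */
    cfg |= 0x1ULL << PEBS_DATA_CFG_FIX_SHIFT;  /* FIXED_CTR0     */
    return cfg;
}
```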
Counters Group Header:
  PMC BitVector [31:0]: Bit vector of IA32_PMCx MSRs. IA32_PMCx is recorded if bit x is set.
  FIXED_CTR BitVector [31:0]: Bit vector of IA32_FIXED_CTRx MSRs. IA32_FIXED_CTRx is recorded if bit x is set.
...
IA32_PMCx will be captured if both Counters and MSR_PEBS_DATA_CFG bit 32 + x are set. In this case, the PMC
BitVector field bit x will be set too.
IA32_FIXED_CTRx will be captured if both Counters and MSR_PEBS_DATA_CFG bit 48 + x are set. In this case, the
FIXED_CTR BitVector field bit x will be set too.
The performance metrics will be recorded if both Metrics and MSR_PEBS_DATA_CFG bit 51 (the bit used for
IA32_FIXED_CTR3) are set. The Metrics record will have two 64-bit fields: MSR_PERF_METRICS and
PERF_METRICS_BASE, which is derived from IA32_FIXED_CTR3. In this case, the Metrics BitVector will be 3. Note
that MSR_PERF_METRICS and the IA32_FIXED_CTR3 MSR will be cleared after they are recorded.
Size of the group can be calculated in bytes by: 16 + popcount(BitVectors[127:0]) * 8.
1. This feature is available in a subset of processors with a CPUID signature value of DisplayFamily_DisplayModel 06_C5H or 06_C6H
(though they report architectural performance monitoring version 5).
2. Note that the IA32_PMCx MSRs may only be supported in the legacy address range.
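The group-size formula above can be sketched in C as follows; the helper name is illustrative, only the two 32-bit bit vectors are considered (the remaining bits of BitVectors[127:0] are assumed zero), and __builtin_popcount is a GCC/Clang builtin.

```c
#include <stdint.h>

/* Group size in bytes: a 16-byte header plus 8 bytes per recorded
   counter, one per set bit across the bit vectors. */
static unsigned counters_group_size(uint32_t pmc_bitvector,
                                    uint32_t fixed_ctr_bitvector)
{
    return 16 + (__builtin_popcount(pmc_bitvector) +
                 __builtin_popcount(fixed_ctr_bitvector)) * 8;
}
```

For example, recording IA32_PMC0-3 and IA32_FIXED_CTR0-2 gives 16 + 7 * 8 = 72 bytes.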
CHAPTER 9
LINEAR ADDRESS SPACE SEPARATION (LASS)
This chapter describes a new feature called linear address space separation (LASS).
9.1 INTRODUCTION
Chapter 4 of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A describes paging,
which is the process of translating linear addresses to physical addresses and determining, for each translation, the
linear address’s access rights; these determine what accesses to a linear address are allowed.
Every access to a linear address is either a supervisor-mode access or a user-mode access. A linear address’s
access rights include an indication of whether the address is a supervisor-mode address or a user-mode address.
Paging prevents user-mode accesses to supervisor-mode addresses; in addition, there are features that can
prevent supervisor-mode accesses to user-mode addresses. (These features are supervisor-mode execution
prevention — SMEP — and supervisor-mode access prevention — SMAP.) In most cases, the blocked accesses
cause page-fault exceptions (#PF); for some cases (e.g., speculative accesses), the accesses are dropped without
fault.
With these mode-based protections, paging can prevent malicious software from directly reading or writing
memory inappropriately. To enforce these protections, the processor must traverse the hierarchy of paging
structures in memory. Unprivileged software can use timing information resulting from this traversal to determine
details about the paging structures, and these details may be used to determine the layout of supervisor memory.
Linear-address space separation (LASS) is an independent mechanism that enforces the same mode-based
protections as paging but without traversing the paging structures. Because the protections enforced by LASS are applied
before paging, “probes” by malicious software will provide no paging-based timing information.
LASS is based on a linear-address organization established by many operating systems: all linear addresses whose
most significant bit is 0 (“low” or “positive” addresses) are user-mode addresses, while all linear addresses whose
most significant bit is 1 (“high” or “negative” addresses) are supervisor-mode addresses. An operating system
should enable LASS only if it uses this organization of linear addresses.
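The linear-address organization described above amounts to a test of bit 63, which can be sketched as follows; this only illustrates the software convention, since the actual LASS check is performed by hardware before paging.

```c
#include <stdint.h>
#include <stdbool.h>

/* Under the address organization assumed by LASS, bit 63 of a 64-bit
   linear address distinguishes supervisor-mode ("high") addresses
   from user-mode ("low") addresses. */
static bool lass_is_supervisor_address(uint64_t linear_address)
{
    return (linear_address >> 63) & 1;
}
```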
Some accesses do not cause faults when they would violate the mode-based protections established by paging.
These include prefetches (e.g., those resulting from execution of one of the PREFETCHh instructions), executions
of the CLDEMOTE instruction, and accesses resulting from the speculative fetch or execution of an instruction. Such
an access may cause a LASS violation; if it does, the access is not performed but no fault occurs. (When such an
access would violate the mode-based protections of paging, the access is not performed but no page fault occurs.)
In 64-bit mode, LASS violations have priority just below that of canonicality violations; in compatibility mode, they
have priority just below that of segment-limit violations.
The remainder of this section describes how LASS applies to different types of accesses to linear addresses.
Chapter 4, “Paging,” of the Intel® 64 and IA-32 Architectures Software Developer’s Manual, Volume 3A provides full
definitions of these access types. The sections below discuss specific LASS violations based on bit 63 of a linear
address. For a linear address with only 32 bits (or 16 bits), the processor treats bit 63 as if it were 0.
1. The WRUSS instruction is an exception; although it can be executed only if CPL = 0, the processor treats its shadow-stack accesses
as user accesses.
CHAPTER 10
REMOTE ATOMIC OPERATIONS IN INTEL ARCHITECTURE
10.1 INTRODUCTION
Remote Atomic Operations (RAO) are a set of instructions to improve synchronization performance. RAO is
especially useful in multiprocessor applications that have a set of characteristics commonly found together:
• A need to update, i.e., read and modify, one or more variables atomically, e.g., because multiple processors
may attempt to update the same variable simultaneously.
• Updates are not expected to be interleaved with other reads or writes of the variables.
• The order in which the updates happen is unimportant.
One example of this scenario is a multiprocessor histogram computation, where multiple processors cooperate to
compute a shared histogram, which is then used in the next phase of computation. This is described in more detail
in Section 10.8.1.
RAO instructions aim to provide high performance in this scenario by:
• Atomically updating memory without returning any information to the processor itself.
• Relaxing the ordering of RAO instructions with respect to other updates or writes to the variables.
RAO instructions are defined such that, unlike conventional atomics (e.g., LOCK ADD), their operations may be
performed closer to memory, such as at a shared cache or memory controller. Performing operations closer to
memory reduces or even eliminates movement of data between memory and the processor executing the
instruction. They also have weaker ordering guarantees than conventional atomics. This facilitates execution closer to
memory, and can also lead to reduced stalls in the processor pipeline. These properties mean that using RAO
instead of conventional atomics may provide a significant performance boost for the scenario outlined above.
10.2 INSTRUCTIONS
The current set of RAO instructions can be found in Chapter 2, “Instruction Set Reference, A-Z.” These instructions
include integer addition and bitwise AND, OR, and XOR. These operations may be performed on 32-bit
(doubleword) or 64-bit (quadword) data elements. The destination, which is also one of the inputs, is always a
location in memory. The other input is a general-purpose register, ry, in Table 10-1. The instructions do not change
any registers or flags.
10.8 EXAMPLES
10.8.1 Histogram
Histogram is a common computational pattern, including in multiprocessor programming, but achieving an efficient
parallel implementation can be tricky. In a conventional histogram computation, software sweeps over a set of
input values; it maps each input value to a histogram bin, and increments that bin.
Common multiprocessor histogram implementations partition the inputs across the processors, so each processor
works on a subset of the inputs. Straightforward implementations have each processor directly update the shared
histogram. To ensure correctness, since multiple processors may attempt updates to the same histogram bin
simultaneously, the updates must use atomics. As described above, using conventional atomics can be expensive,
especially when we have highly contended cache lines in the histogram. That may occur for small histograms or for
histograms where many inputs map to a small number of histogram bins.
A common alternative approach uses a technique called privatization, where each processor gets its own “local”
histogram; as each processor works on its subset of the inputs, it updates its local histogram. As a final “extra”
step, software must accumulate the local histograms into the globally shared histogram, a step called a reduction.
The reduction step is where processors synchronize and communicate; it allows the computation of the local
histograms to be embarrassingly parallel, requiring no atomics or inter-processor communication, and can often
lead to good performance. However, privatization has downsides:
• The reduction step can take a lot of time if the histogram has many bins.
• The time for a reduction is relatively constant regardless of the number of processors. As the number of
processors grows, therefore, the fraction of time spent on the reduction tends to grow.
• The local histograms require extra memory, and that memory footprint grows with the number of processors.
• The reduction is an “extra” step that complicates the software.
With RAO, software can use the simpler multiprocessor algorithm and achieve reliably good performance. The
following pseudo-code lists a RAO-based histogram implementation.
// in each processor:
double *data; // “data” is a per-processor array, holding a subset of all inputs
data = get_data(); // populate “data” values
// (sketch: num_elements, histogram, and bin_of() are assumed defined elsewhere)
for (int i = 0; i < num_elements; i++)
    AADD(&histogram[bin_of(data[i])], 1); // RAO atomic add of 1 to the selected bin
The above code can provide good performance under various scenarios, i.e., sizes of histograms and biases in
which histogram bins are updated. RAO avoids data “ping-ponging” between processors, even under high
contention. Further, the weak ordering of RAO allows a series of AADD instructions to overlap with each other in
the pipeline, thus providing instruction-level parallelism.
In addition to the performance benefits, the RAO code is simple and is thus easier to maintain.
While we specifically show and discuss histogram above, this computation pattern is very common, e.g., software
packet processing workloads exhibit this in how they track statistics of the packets. Other algorithms exhibiting this
pattern should similarly see benefits from RAO.
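For reference, the shared-histogram update pattern that RAO accelerates can be sketched with C11 atomics as a portable stand-in; AADD itself is not exposed here, relaxed ordering models RAO's weak ordering, and the simple modulo bin mapping is only for illustration.

```c
#include <stdatomic.h>
#include <stddef.h>

/* Shared-histogram update loop, the pattern RAO accelerates. With
   RAO, each atomic_fetch_add below would map to an AADD instruction. */
static void histogram_update(atomic_long *bins, size_t nbins,
                             const double *data, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        size_t bin = (size_t)data[i] % nbins;   /* map value to bin */
        atomic_fetch_add_explicit(&bins[bin], 1, memory_order_relaxed);
    }
}

/* Single-threaded demo: values 1.0, 1.5, and 5.0 all land in bin 1. */
static long histogram_demo(void)
{
    atomic_long bins[4] = {0};
    const double data[] = {0.0, 1.0, 1.5, 3.0, 5.0};
    histogram_update(bins, 4, data, 5);
    return atomic_load(&bins[1]);
}
```

In a real multiprocessor run, each processor would call histogram_update on its own subset of the inputs against the one shared bins array.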
// In other processors:
12: if (my_core->flags & SOME_EVENT) {
13: …… // react to the occurrence of SOME_EVENT
14: clear_bits(&my_core->flags, SOME_EVENT);
15: }
With conventional atomics (e.g., LOCK OR), a significant portion of execution time of handle_event would be spent
accessing core->flags (line 5) and core->extra_flags (line 7). It is likely that when handle_event begins, the two
fields are in another processor's cache, e.g., if that processor updated some bits in the fields. Therefore, the data
would need to migrate to the cache of the processor executing handle_event.
In contrast, with RAO implementations that perform updates close to memory, the RAO AOR instruction in the
code example above should reduce data movement of core->flags and core->extra_flags and thus lower
execution latency. Further, when other processors later access these fields (lines 12-15), they will also benefit from
lower latency due to reduced data movement, since they may get the data from a more central location.
Also note that since the order of notifications does not matter in this case, the function further takes advantage of
RAO's weak ordering, allowing multiple RAO AOR instructions to be executed concurrently. It does, however,
include a memory fence at the end (line 10), to ensure that all updates are visible to all processors before leaving
the handler.
CHAPTER 11
TOTAL STORAGE ENCRYPTION IN INTEL ARCHITECTURE
11.1 INTRODUCTION
Total Storage Encryption (TSE) is an architecture that allows encryption of storage at high speed. TSE provides the
following capabilities:
• Protection (confidentiality) of data at rest in storage.
• NIST Standard AES-XTS Encryption.
• A mechanism for software to configure hardware keys (which are not software visible) or software keys.
• A consistent key interface to the crypto engine.
11.2 ENUMERATION
CPUID enumerates the existence of the IA32_TSE_CAPABILITY MSR and the PBNDKB instruction.
The IA32_TSE_CAPABILITY MSR enumerates supported cryptographic algorithms and keys.
• 2: TSE
If TSE is supported on the platform, CPUID.PCONFIG_LEAF will enumerate TSE as a supported target in sub-leaf 0,
ECX=TSE:
• TSE_KEY_PROGRAM leaf is available when TSE is enumerated by PCONFIG as a target.
• TSE_KEY_PROGRAM_WRAPPED is available when TSE is enumerated by PCONFIG as a target.
Bits 15:0 enumerate, as a bitmap, the encryption algorithms that are supported. As of this writing, the only
supported algorithm is 256-bit AES-XTS, which is enumerated by setting bit 0.
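Decoding the algorithm bitmap above can be sketched as follows; the function name is illustrative, and the raw IA32_TSE_CAPABILITY value must be obtained with RDMSR at CPL 0, which is not shown.

```c
#include <stdint.h>
#include <stdbool.h>

/* Bits 15:0 of IA32_TSE_CAPABILITY form a bitmap of supported
   encryption algorithms; bit 0 indicates 256-bit AES-XTS. */
static bool tse_supports_aes_xts_256(uint64_t ia32_tse_capability)
{
    return ia32_tse_capability & 1;
}
```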