Vega 7nm Shader ISA
Architecture
Reference Guide
27-January-2020
Specification Agreement
This Specification Agreement (this "Agreement") is a legal agreement between Advanced Micro Devices, Inc. ("AMD") and "You" as the
recipient of the attached AMD Specification (the "Specification"). If you are accessing the Specification as part of your performance of
work for another party, you acknowledge that you have authority to bind such party to the terms and conditions of this Agreement. If
you accessed the Specification by any means or otherwise use or provide Feedback (defined below) on the Specification, You agree to
the terms and conditions set forth in this Agreement. If You do not agree to the terms and conditions set forth in this Agreement, you
are not licensed to use the Specification; do not use, access or provide Feedback about the Specification. In consideration of Your use or
access of the Specification (in whole or in part), the receipt and sufficiency of which are acknowledged, You agree as follows:
1. You may review the Specification only (a) as a reference to assist You in planning and designing Your product, service or
technology ("Product") to interface with an AMD product in compliance with the requirements as set forth in the Specification and
(b) to provide Feedback about the information disclosed in the Specification to AMD.
2. Except as expressly set forth in Paragraph 1, all rights in and to the Specification are retained by AMD. This Agreement does not
give You any rights under any AMD patents, copyrights, trademarks or other intellectual property rights. You may not (i) duplicate
any part of the Specification; (ii) remove this Agreement or any notices from the Specification, or (iii) give any part of the
Specification, or assign or otherwise provide Your rights under this Agreement, to anyone else.
3. The Specification may contain preliminary information, errors, or inaccuracies, or may not include certain necessary information.
Additionally, AMD reserves the right to discontinue or make changes to the Specification and its products at any time without
notice. The Specification is provided entirely "AS IS." AMD MAKES NO WARRANTY OF ANY KIND AND DISCLAIMS ALL EXPRESS,
IMPLIED AND STATUTORY WARRANTIES, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, TITLE OR THOSE WARRANTIES ARISING AS A COURSE OF DEALING
OR CUSTOM OF TRADE. AMD SHALL NOT BE LIABLE FOR DIRECT, INDIRECT, CONSEQUENTIAL, SPECIAL, INCIDENTAL, PUNITIVE
OR EXEMPLARY DAMAGES OF ANY KIND (INCLUDING LOSS OF BUSINESS, LOSS OF INFORMATION OR DATA, LOST PROFITS, LOSS
OF CAPITAL, LOSS OF GOODWILL) REGARDLESS OF THE FORM OF ACTION WHETHER IN CONTRACT, TORT (INCLUDING
NEGLIGENCE) AND STRICT PRODUCT LIABILITY OR OTHERWISE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
4. Furthermore, AMD’s products are not designed, intended, authorized or warranted for use as components in systems intended for
surgical implant into the body, or in other applications intended to support or sustain life, or in any other application in which the
failure of AMD’s product could create a situation where personal injury, death, or severe property or environmental damage may
occur.
5. You have no obligation to give AMD any suggestions, comments or feedback ("Feedback") relating to the Specification. However,
any Feedback You voluntarily provide may be used by AMD without restriction, fee or obligation of confidentiality. Accordingly, if
You do give AMD Feedback on any version of the Specification, You agree AMD may freely use, reproduce, license, distribute, and
otherwise commercialize Your Feedback in any product, as well as has the right to sublicense third parties to do the same. Further,
You will not give AMD any Feedback that You may have reason to believe is (i) subject to any patent, copyright or other intellectual
property claim or right of any third party; or (ii) subject to license terms which seek to require any product or intellectual
property incorporating or derived from Feedback or any Product or other AMD intellectual property to be licensed to or otherwise
provided to any third party.
6. You shall adhere to all applicable U.S., European, and other export laws, including but not limited to the U.S. Export
Administration Regulations ("EAR"), (15 C.F.R. Sections 730 through 774), and E.U. Council Regulation (EC) No 428/2009 of 5 May
2009. Further, pursuant to Section 740.6 of the EAR, You hereby certify that, except pursuant to a license granted by the United
States Department of Commerce Bureau of Industry and Security or as otherwise permitted pursuant to a License Exception under
the U.S. Export Administration Regulations ("EAR"), You will not (1) export, re-export or release to a national of a country in
Country Groups D:1, E:1 or E:2 any restricted technology, software, or source code You receive hereunder, or (2) export to Country
Groups D:1, E:1 or E:2 the direct product of such technology or software, if such foreign produced direct product is subject to
national security controls as identified on the Commerce Control List (currently found in Supplement 1 to Part 774 of EAR). For the
most current Country Group listings, or for additional information about the EAR or Your obligations under those regulations,
please refer to the U.S. Bureau of Industry and Security’s website at http://www.bis.doc.gov/.
7. If You are a part of the U.S. Government, then the Specification is provided with "RESTRICTED RIGHTS" as set forth in
subparagraphs (c) (1) and (2) of the Commercial Computer Software-Restricted Rights clause at FAR 52.227-14 or subparagraph (c)
(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7013, as applicable.
8. This Agreement is governed by the laws of the State of California without regard to its choice of law principles. Any dispute
involving it must be brought in a court having jurisdiction of such dispute in Santa Clara County, California, and You waive any
defenses and rights allowing the dispute to be litigated elsewhere. If any part of this agreement is unenforceable, it will be
considered modified to the extent necessary to make it enforceable, and the remainder shall continue in effect. The failure of AMD
to enforce any rights granted hereunder or to take action against You in the event of any breach hereunder shall not be deemed a
waiver by AMD as to subsequent enforcement of rights or subsequent actions in the event of future breaches. This Agreement is
the entire agreement between You and AMD concerning the Specification; it may be changed only by a written document signed
by both You and an authorized representative of AMD.
DISCLAIMER
The information contained herein is for informational purposes only, and is subject to change without notice. While every
precaution has been taken in the preparation of this document, it may contain technical inaccuracies, omissions and
typographical errors, and AMD is under no obligation to update or otherwise correct this information. Advanced Micro
Devices, Inc. makes no representations or warranties with respect to the accuracy or completeness of the contents of this
document, and assumes no liability of any kind, including the implied warranties of noninfringement, merchantability or
fitness for particular purposes, with respect to the operation or use of AMD hardware, software or other products described
herein. No license, including implied or arising by estoppel, to any intellectual property rights is granted by this document.
Terms and limitations applicable to the purchase or use of AMD’s products are as set forth in a signed agreement between the
parties or in AMD’s Standard Terms and Conditions of Sale.
AMD, the AMD Arrow logo, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names
used in this publication are for identification purposes only and may be trademarks of their respective companies.
Preface
This document specifies the instructions (including the format of each type of instruction) and the
relevant program state (including how the program state interacts with the instructions). Some
instruction fields are mutually dependent; not all possible settings for all fields are legal. This
document specifies the valid combinations. Its purposes are to:
1. Specify the language constructs and behavior, including the organization of each type of
instruction in both text syntax and binary format.
2. Provide a reference of instruction operation that compiler writers can use to maximize
performance of the processor.
Audience
This document is intended for programmers writing application and system software, including
operating systems, compilers, loaders, linkers, device drivers, and system utilities. It assumes
that programmers are writing compute-intensive parallel applications (streaming applications)
and assumes an understanding of requisite programming practices.
Organization
This document begins with an overview of the AMD GCN processors' hardware and
programming environment (Chapter 1).
Chapter 2 describes the organization of GCN programs.
Chapter 3 describes the program state that is maintained.
Chapter 4 describes the program flow.
Chapter 5 describes the scalar ALU operations.
Chapter 6 describes the vector ALU operations.
Chapter 7 describes the scalar memory operations.
Chapter 8 describes the vector memory operations.
Chapter 9 provides information about the flat memory instructions.
Chapter 10 describes the data share operations.
Chapter 11 describes exporting the parameters of pixel color and vertex shaders.
Chapter 12 describes instruction details, first by the microcode format to which they belong, and then in alphabetic order.
Conventions
The following conventions are used in this document:
[1,2) A range that includes the left-most value (in this case, 1), but excludes the right-most value (in this case, 2).
[1,2] A range that includes both the left-most and right-most values.
7:4 A bit range, from bit 7 to bit 4, inclusive. The high-order bit is shown first.
italicized word or phrase The first use of a term or concept basic to the understanding of stream computing.
Related Documents
• Intermediate Language (IL) Reference Manual. Published by AMD.
• AMD Accelerated Parallel Processing OpenCL Programming Guide. Published by AMD.
• The OpenCL Specification. Published by Khronos Group. Aaftab Munshi, editor.
• OpenGL Programming Guide, at http://www.glprogramming.com/red/
• Microsoft DirectX® Reference Website, at https://msdn.microsoft.com/en-us/library/
windows/desktop/ee663274(v=vs.85).aspx
"Vega" 7nm Instruction Set Architecture
• TMA and TBA registers are stored one per VM-ID, not per draw or dispatch.
• Added support for 16-bit addresses and data in Image operations.
• Added Global and Scratch memory read/write operations.
◦ Also added Scratch load/store to scalar memory.
• Added Scalar memory atomic instructions.
• MIMG Microcode format: removed the R128 bit.
• FLAT Microcode format: added an offset field.
• Removed V_MOVEREL instructions.
• Added control over arithmetic overflow for FP16 VALU operations.
• Modified bit packing of surface descriptors and samplers:
◦ T#: removed heap, elem_size, last_array, interlaced, uservm_mode bits.
◦ V#: removed mtype.
◦ S#: removed astc_hdr field.
New Instructions
Vega 7nm includes the additional instructions listed below:
V_FMAC_F32
V_XNOR_B32
V_DOT2_F32_F16
V_DOT2_I32_I16
V_DOT2_U32_U16
V_DOT4_I32_I8
V_DOT4_U32_U8
V_DOT8_I32_I4
V_DOT8_U32_U4
Contact Information
For information concerning AMD Accelerated Parallel Processing development, please see:
developer.amd.com/ .
For information about developing with AMD Accelerated Parallel Processing, please see:
developer.amd.com/appsdk .
We also have a growing community of AMD Accelerated Parallel Processing users. Come visit
us at the AMD Accelerated Parallel Processing Developer Forum ( http://developer.amd.com/
openclforum ) to find out what applications other users are trying on their AMD Accelerated
Parallel Processing products.
Chapter 1. Introduction
The AMD GCN processor implements a parallel micro-architecture that provides an excellent
platform not only for computer graphics applications but also for general-purpose data parallel
applications. Data-intensive applications that require high bandwidth or are computationally
intensive may be run on an AMD GCN processor.
The figure below shows a block diagram of the AMD GCN Vega Generation series processors.
The GCN device includes a data-parallel processor (DPP) array, a command processor, a
memory controller, and other logic (not shown). The GCN command processor reads
commands that the host has written to memory-mapped GCN registers in the system-memory
address space. The command processor sends hardware-generated interrupts to the host when
the command is completed. The GCN memory controller has direct access to all GCN device
memory and the host-specified areas of system memory. To satisfy read and write requests, the
memory controller performs the functions of a direct-memory access (DMA) controller, including
computing memory-address offsets based on the format of the requested data in memory. In the
GCN environment, a complete application includes two parts: a program running on the host
processor, and programs, called kernels, running on the GCN processor.
The DPP array is the heart of the GCN processor. The array is organized as a set of compute
unit pipelines, each independent from the others, that operate in parallel on streams of floating-
point or integer data. The compute unit pipelines can process data or, through the memory
controller, transfer data to, or from, memory. Computation in a compute unit pipeline can be
made conditional. Outputs written to memory can also be made conditional.
When it receives a request, the compute unit pipeline loads instructions and data from memory,
begins execution, and continues until the end of the kernel. As kernels are running, the GCN
hardware automatically fetches instructions from memory into on-chip caches; GCN software
plays no role in this. GCN kernels can load data from off-chip memory into on-chip general-
purpose registers (GPRs) and caches.
The AMD GCN devices can detect floating point exceptions and can generate interrupts. In
particular, they detect IEEE floating-point exceptions in hardware; these can be recorded for
post-execution analysis. The software interrupts shown in the previous figure from the command
processor to the host represent hardware-generated interrupts for signaling command-
completion and related management functions.
The GCN processor hides memory latency by keeping track of potentially hundreds of work-
items in different stages of execution, and by overlapping compute operations with memory-
access operations.
1.1. Terminology
Table 1. Basic Terms
Term Description
GCN Processor The Graphics Core Next shader processor is a scalar and vector ALU designed to run
complex programs on behalf of a wavefront.
Dispatch A dispatch launches a 1D, 2D, or 3D grid of work to the GCN processor array.
Workgroup A workgroup is a collection of wavefronts that have the ability to synchronize with each other
quickly; they also can share data through the Local Data Share.
Work-item A single element of work: one element from the dispatch grid, or in graphics a pixel or
vertex.
Literal Constant A 32-bit integer or float constant that is placed in the instruction stream.
Scalar ALU (SALU) The scalar ALU operates on one value per wavefront and manages all control flow.
Vector ALU (VALU) The vector ALU maintains vector GPRs that are unique to each work-item and executes
arithmetic operations uniquely on each work-item.
Microcode format The microcode format describes the bit patterns used to encode instructions. Each
instruction is either 32 or 64 bits.
Instruction An instruction is the basic unit of the kernel. Instructions include: vector ALU, scalar ALU,
memory transfer, and control flow operations.
Quad A quad is a 2x2 group of screen-aligned pixels. This is relevant for sampling texture maps.
Texture Sampler (S#) A texture sampler is a 128-bit entity that describes how the vector memory system reads
and samples (filters) a texture map.
Texture Resource (T#) A texture resource descriptor describes an image in memory: address, data format, stride, etc.
Buffer Resource (V#) A buffer resource descriptor describes a buffer in memory: address, data format, stride, etc.
Each GCN compute unit contains:
• A scalar ALU, which operates on one value per wavefront (common to all work items).
• A vector ALU, which operates on unique values per work-item.
• Local data storage, which allows work-items within a workgroup to communicate and share
data.
• Scalar memory, which can transfer data between SGPRs and memory through a cache.
• Vector memory, which can transfer data between VGPRs and memory, including sampling
texture maps.
All kernel control flow is handled using scalar ALU instructions. This includes if/else, branches
and looping. Scalar ALU (SALU) and memory instructions work on an entire wavefront and
operate on up to two SGPRs, as well as literal constants.
Vector memory and ALU instructions operate on all work-items in the wavefront at one time. In
order to support branching and conditional execution, every wavefront has an EXECute mask that
determines which work-items are active at that moment, and which are dormant. Active work-
items execute the vector instruction, and dormant ones treat the instruction as a NOP. The
EXEC mask can be changed at any time by Scalar ALU instructions.
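As a sketch (register choices and labels are illustrative, using common GCN assembler mnemonics), a per-work-item if-style construct can be built from a vector compare, an EXEC update, and a restore:

```
; if (v0 < v1) { v2 = v3 }  — per-work-item predication sketch
v_cmp_lt_i32        vcc, v0, v1    ; per-lane compare result written to VCC
s_and_saveexec_b64  s[0:1], vcc    ; save old EXEC; EXEC = EXEC & VCC
s_cbranch_execz     end_if         ; skip the body if no lanes remain active
v_mov_b32           v2, v3         ; executed only by lanes that passed the test
end_if:
s_mov_b64           exec, s[0:1]   ; restore the original EXEC mask
```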
Vector ALU instructions can take up to three arguments, which can come from VGPRs, SGPRs,
or literal constants that are part of the instruction stream. They operate on all work-items
enabled by the EXEC mask. Vector compare and add-with-carryout instructions return a bit-per-work-item
mask back to the SGPRs to indicate, per work-item, which had a "true" result from the compare
or generated a carry-out.
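For example, a 64-bit per-work-item add can be sketched with the GFX9 carry-out/carry-in mnemonics (register choices are illustrative):

```
; 64-bit add: {v1,v0} + {v3,v2} -> {v5,v4}, one result per active lane
v_add_co_u32   v4, vcc, v0, v2       ; low Dwords; per-lane carry-out to VCC
v_addc_co_u32  v5, vcc, v1, v3, vcc  ; high Dwords, consuming the carry-in
```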
Vector memory instructions transfer data between VGPRs and memory. Each work-item
supplies its own memory address and supplies or receives unique data. These instructions are
also subject to the EXEC mask.
The write-acknowledgment mechanism serves two purposes:
• enabling a PE to recover the pre-op value from an atomic operation by performing a cacheless
load from its return address after receipt of the write confirmation acknowledgment, and
• enabling the system to maintain a relaxed consistency model.
Each scatter write from a given PE to a given memory channel maintains order. The
acknowledgment enables one processing element to implement a fence to maintain serial
consistency by ensuring all writes have been posted to memory prior to completing a
subsequent write. In this manner, the system can maintain a relaxed consistency model
between all parallel work-items operating on the system.
LDS Local Data Share 64kB Local data share is a scratch RAM with built-in
arithmetic capabilities that allow data to be shared
between threads in a workgroup.
EXEC Execute Mask 64 A bit mask with one bit per thread, which is applied to
vector instructions and controls which threads execute
and which ignore the instruction.
EXECZ EXEC is zero 1 A single bit flag indicating that the EXEC mask is all
zeros.
VCC Vector Condition Code 64 A bit mask with one bit per thread; it holds the result
of a vector compare operation.
VCCZ VCC is zero 1 A single bit-flag indicating that the VCC mask is all
zeros.
SCC Scalar Condition Code 1 Result from a scalar ALU comparison instruction.
XNACK_MASK Address translation failure. 64 Bit mask of threads that have failed their address
translation.
TBA Trap Base Address 64 Holds the pointer to the current trap handler program.
TMA Trap Memory Address 64 Temporary register for shader operations. For
example, can hold a pointer to memory used by the
trap handler.
TTMP0-TTMP15 Trap Temporary SGPRs 32 16 SGPRs available only to the Trap Handler for
temporary storage.
VMCNT Vector Memory Instruction Count 6 Counts the number of VMEM instructions issued but
not yet completed.
EXPCNT Export Count 3 Counts the number of Export and GDS instructions
issued but not yet completed. Also counts VMEM
writes that have not yet sent their write-data to the
TC.
LGKMCNT LDS, GDS, Constant and Message Count 4 Counts the number of LDS, GDS, constant-fetch
(scalar memory read), and message instructions
issued but not yet completed.
The PC interacts with three instructions: S_GET_PC, S_SET_PC, S_SWAP_PC. These transfer
the PC to, and from, an even-aligned SGPR pair.
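A call/return sketch using the corresponding assembler mnemonics (S_GETPC_B64, S_SETPC_B64, S_SWAPPC_B64; register choices are illustrative):

```
; jump to the code address held in s[4:5], saving the return address in s[30:31]
s_swappc_b64  s[30:31], s[4:5]
; ... execution resumes here after the callee returns ...

; in the callee: return to the caller
s_setpc_b64   s[30:31]

; S_GETPC reads the address of the next instruction, e.g. for PC-relative addressing
s_getpc_b64   s[10:11]
```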
EXEC can be read from, and written to, through scalar instructions; it also can be written as a
result of a vector-ALU compare. This mask affects vector-ALU, vector-memory, LDS, and export
instructions. It does not affect scalar execution or branches.
A helper bit (EXECZ) can be used as a condition for branches to skip code when EXEC is zero.
Note that a wavefront whose EXEC mask is zero still executes every instruction, wasting
instruction-issue bandwidth. Use CBRANCH or VSKIP to rapidly skip over code when it is likely
that the EXEC mask is zero.
SCC 1 Scalar condition code. Used as a carry-out bit. For a comparison instruction,
this bit indicates failure or success. For logical operations, this is 1 if the
result was non-zero.
SPI_PRIO 2:1 Wavefront priority set by the shader processor interpolator (SPI) when the
wavefront is created. See the S_SETPRIO instruction (page 12-49) for
details. 0 is lowest, 3 is highest priority.
WAVE_PRIO 4:3 Wavefront priority set by the shader program. See the S_SETPRIO
instruction (page 12-49) for details.
PRIV 5 Privileged mode. Can only be active when in the trap handler. Gives write
access to the TTMP, TMA, and TBA registers.
TRAP_EN 6 Indicates that a trap handler is present. When set to zero, traps are not
taken.
TTRACE_EN 7 Indicates whether thread trace is enabled for this wavefront. If zero, also
ignore any shader-generated (instruction) thread-trace data.
EXPORT_RDY 8 This status bit indicates if export buffer space has been allocated. The
shader stalls any export instruction until this bit becomes 1. It is set to 1
when export buffer space has been allocated. Before a Pixel or Vertex
shader can export, the hardware checks the state of this bit. If the bit is 1,
export can be issued. If the bit is zero, the wavefront sleeps until space
becomes available in the export buffer. Then, this bit is set to 1, and the
wavefront resumes.
HALT 13 Wavefront is halted or scheduled to halt. HALT can be set by the host
through wavefront-control messages, or by the shader. This bit is ignored
while in the trap handler (PRIV = 1); it also is ignored if a host-initiated trap
is received (request to enter the trap handler).
TTRACE_CU_EN 15 Enables/disables thread trace for this compute unit (CU). This bit allows
more than one CU to be outputting USERDATA (shader initiated writes to
the thread-trace buffer). Note that wavefront data is only traced from one
CU per shader array. Wavefront user data (instruction based) can be output
if this bit is zero.
VALID 16 Wavefront is active (has been created and not yet ended).
SKIP_EXPORT 18 For Vertex Shaders only. 1 = this shader is not allocated export buffer
space; all export instructions are ignored (treated as NOPs). Formerly
called VS_NO_ALLOC. Used for stream-out of multiple streams (multiple
passes over the same VS), and for DS running in the VS stage for
wavefronts that produced no primitives.
FP_ROUND 3:0 [1:0] Single precision round mode. [3:2] Double/Half precision round mode.
Round Modes: 0=nearest even, 1= +infinity, 2= -infinity, 3= toward zero.
FP_DENORM 7:4 [1:0] Single precision denormal mode. [3:2] Double/Half-precision denormal
mode. Denorm modes:
0 = flush input and output denorms.
1 = allow input denorms, flush output denorms.
2 = flush input denorms, allow output denorms.
3 = allow input and output denorms.
DX10_CLAMP 8 Used by the vector ALU to force DX10-style treatment of NaNs: when set,
clamp NaN to zero; otherwise, pass NaN through.
IEEE 9 When set, floating-point opcodes that support exception-flag gathering quiet and
propagate signaling-NaN inputs per IEEE 754-2008. MIN_DX10 and MAX_DX10
become IEEE 754-2008 compliant due to signaling-NaN propagation and
quieting.
LOD_CLAMPED 10 Sticky bit indicating that one or more texture accesses had their LOD
clamped.
DEBUG 11 Forces the wavefront to jump to the exception handler after each instruction is
executed (but not after ENDPGM). Only works if TRAP_EN = 1.
EXCP_EN 18:12 Enable mask for exceptions. Enabled means if the exception occurs and
TRAP_EN==1, a trap is taken.
[12] : invalid.
[13] : inputDenormal.
[14] : float_div0.
[15] : overflow.
[16] : underflow.
[17] : inexact.
[18] : int_div0.
[19] : address watch
[20] : memory violation
POPS_PACKER0 24 1 = this wave is associated with packer 0. User shader must set this to
!PackerID from the POPS initialized SGPR (load_collision_waveID), or zero if
not using POPS.
POPS_PACKER1 25 1 = this wave is associated with packer 1. User shader must set this to
PackerID from the POPS initialized SGPR (load_collision_waveID), or zero if
not using POPS.
VSKIP 28 0 = normal operation. 1 = skip (do not execute) any vector instructions: valu,
vmem, export, lds, gds. "Skipping" instructions occurs at high-speed (10
wavefronts per clock cycle can skip one instruction). This is much faster than
issuing and discarding instructions.
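As an illustrative sketch, the FP_ROUND and FP_DENORM fields described above can be programmed with S_SETREG; the hwreg() operand syntax follows common GCN assemblers, and the field values are taken from the tables above:

```
; MODE[3:0] = FP_ROUND, MODE[7:4] = FP_DENORM (see the field tables above)
; [1:0] = 0 : single-precision round to nearest even
; [3:2] = 3 : double/half precision round toward zero
; [7:4] = 0xF : allow input and output denorms for both precision groups
s_setreg_imm32_b32  hwreg(HW_REG_MODE, 0, 8), 0xFC
```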
Out-of-range can occur through GPR-indexing or bad programming. It is illegal to index from
one register type into another (for example: SGPRs into trap registers or inline constants). It is
also illegal to index within inline constants.
The following describe the out-of-range behavior for various storage types.
• SGPRs
◦ Source or destination out-of-range = (sgpr < 0 || sgpr >= sgpr_size).
◦ Source out-of-range: returns the value of SGPR0 (not the value 0).
◦ Destination out-of-range: instruction writes no SGPR result.
• VGPRs
◦ Similar to SGPRs. It is illegal to index from SGPRs into VGPRs, or vice versa.
◦ Out-of-range = (vgpr < 0 || vgpr >= vgpr_size)
◦ If a source VGPR is out of range, VGPR0 is used.
◦ If a destination VGPR is out-of-range, the instruction is ignored (treated as an NOP).
• LDS
◦ If the LDS-ADDRESS is out-of-range (addr < 0 || addr >= MIN(lds_size, m0)):
▪ Writes out-of-range are discarded; it is undefined if SIZE is not a multiple of write-
data-size.
▪ Reads return the value zero.
◦ If any source-VGPR is out-of-range, the VGPR0 value is used.
◦ If the dest-VGPR is out-of-range, the instruction is nullified (issued with EXEC = 0).
• Memory, LDS, and GDS: Reads and atomics with returns.
◦ If any source VGPR or SGPR is out-of-range, the data value is undefined.
◦ If any destination VGPR is out-of-range, the operation is nullified by issuing the
instruction as if the EXEC mask were cleared to 0.
▪ This out-of-range check must check all VGPRs that can be returned (for example:
VDST to VDST+3 for a BUFFER_LOAD_DWORDx4).
▪ This check must also include the extra PRT (partially resident texture) VGPR and
nullify the fetch if this VGPR is out-of-range, no matter whether the texture system
actually returns this value or not.
▪ Atomic operations with out-of-range destination VGPRs are nullified: issued, but
with exec mask of zero.
Instructions with multiple destinations (for example: V_ADDC): if any destination is out-of-range,
no results are written.
An even-aligned SGPR pair is required:
• When 64-bit data is used. This is required for moves to/from 64-bit registers, including the
PC.
• When a scalar memory read's address-base comes from an SGPR-pair.
Quad-alignment is required for the data-GPR when a scalar memory read returns four or more
Dwords. When a 64-bit quantity is stored in SGPRs, the LSBs are in SGPR[n], and the MSBs
are in SGPR[n+1].
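For example (a sketch; register choices are illustrative), a scalar read of four Dwords uses an even-aligned address pair and a quad-aligned destination group:

```
; s[2:3] holds the 64-bit base address (even-aligned pair);
; s[8:11] receives four Dwords (quad-aligned destination)
s_load_dwordx4  s[8:11], s[2:3], 0x10
s_waitcnt       lgkmcnt(0)         ; wait for the scalar read to complete
```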
Clamping of LDS reads and writes is controlled by two size registers, which contain values for
the size of the LDS space allocated by SPI to this wavefront or work-group, and a possibly
smaller value specified in the LDS instruction (size is held in M0). The LDS operations use the
smaller of these two sizes to determine how to clamp the read/write addresses.
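A minimal sketch of this M0 idiom (register and address choices are illustrative; setting M0 to -1 is a common compiler idiom so that only the SPI-allocated size clamps LDS addresses):

```
s_mov_b32     m0, -1         ; no additional instruction-specified LDS clamp
ds_write_b32  v0, v1         ; write v1 to LDS at the byte address in v0
ds_read_b32   v2, v0         ; read it back into v2
s_waitcnt     lgkmcnt(0)     ; wait for the LDS operations to complete
```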
The SCC can be used as the carry-in for extended-precision integer arithmetic, as well as the
selector for conditional moves and branches.
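For example, a 64-bit scalar add can be sketched with SCC carrying between the two 32-bit halves (register choices are illustrative):

```
; 64-bit scalar add: {s1,s0} + {s3,s2} -> {s5,s4}
s_add_u32   s4, s0, s2     ; low Dwords;  SCC = carry-out
s_addc_u32  s5, s1, s3     ; high Dwords; SCC consumed as carry-in
```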
There is also a VCC summary bit (vccz) that is set to 1 when the VCC result is zero. This is
useful for early-exit branch tests. VCC is also set for selected integer ALU operations (carry-
out).
Vector compares have the option of writing the result to VCC (32-bit instruction encoding) or to
any SGPR (64-bit instruction encoding). VCCZ is updated every time VCC is updated: vector
compares and scalar writes to VCC.
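Both destination choices can be sketched as below; the explicit `_e64` suffix to force the 64-bit encoding follows LLVM assembler convention, and register choices are illustrative:

```
; 32-bit encoding: compare result implicitly written to VCC
v_cmp_eq_u32      vcc, v0, v1
; 64-bit (VOP3) encoding: compare result written to an arbitrary SGPR pair
v_cmp_eq_u32_e64  s[0:1], v0, v1
```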
The EXEC mask determines which threads execute an instruction. The VCC indicates which
executing threads passed the conditional test, or which threads generated a carry-out from an
integer add or subtract.
When an instruction sources VCC, that counts against the limit on the total number of SGPRs that
can be sourced for a given instruction. VCC physically resides in the highest
two user SGPRs.
Shader Hazard with VCC The user/compiler must prevent a scalar-ALU write to the SGPR
holding VCC immediately followed by a conditional branch using VCCZ. The hardware cannot
detect this case and insert the one required wait state (hardware does detect it when the SALU
writes VCC by name; it fails to do so only when the SALU instruction addresses the SGPRs that
happen to hold VCC).
All Trap temporary SGPRs (TTMP*) are privileged for writes - they can be written only when in
the trap handler (status.priv = 1). When not privileged, writes to these are ignored. TMA and
TBA are read-only; they can be accessed through S_GETREG_B32.
When a trap is taken (user-initiated, exception, or host-initiated), the shader hardware
generates an S_TRAP instruction. This loads trap information into a pair of SGPRs:
HT is set to one for host initiated traps, and zero for user traps (s_trap) or exceptions. TRAP_ID
is zero for exceptions, or the user/host trapID for those traps. When the trap handler is entered,
the PC of the faulting instruction will be: (PC - PC_rewind*4).
STATUS.TRAP_EN - This bit indicates to the shader whether or not a trap handler is present.
When one is not present, traps are not taken, no matter whether they’re floating point, user-, or
host-initiated traps. When the trap handler is present, the wavefront uses an extra 16 SGPRs for
trap processing. If trap_en == 0, all traps and exceptions are ignored, and s_trap is converted
by hardware to NOP.
MODE.EXCP_EN[8:0] - Floating point exception enables. Defines which exceptions and
events cause a trap.
Bit Exception
0 Invalid
1 Input Denormal
2 Divide by zero
3 Overflow
4 Underflow
5 Inexact
EXCP 8:0 Status bits recording which exceptions have occurred. These bits are sticky and
accumulate results until the shader program clears them. They are accumulated
regardless of the setting of EXCP_EN, and can be read or written without shader
privilege.
Bit Exception
0 Invalid
1 Input Denormal
2 Divide by zero
3 Overflow
4 Underflow
5 Inexact
6 Integer divide by zero
7 Address watch
8 Memory violation
SAVECTX 10 A bit set by the host command indicating that this wave must jump to its trap
handler and save its context. This bit must be cleared by the trap handler using
S_SETREG. Note - a shader can set this bit to 1 to cause a save-context trap,
and due to hardware latency the shader may execute up to 2 additional
instructions before taking the trap.
ADDR_WATCH1-3 14:12 Indicates that address watch 1, 2, or 3 has been hit. Bit 12 is address watch 1; bit
13 is 2; bit 14 is 3.
EXCP_CYCLE 21:16 When a float exception occurs, this field tells the trap handler on which cycle the
exception occurred: 0-3 for normal float operations, 0-7 for double float add,
and 0-15 for double float muladd or transcendentals. This register records the
cycle number of the first occurrence of an enabled (unmasked) exception.
EXCP_CYCLE[1:0] Phase: threads 0-15 are in phase 0, 48-63 in phase 3.
EXCP_CYCLE[3:2] Multi-slot pass.
EXCP_CYCLE[5:4] Hybrid pass: used for machines running at lower rates.
DP_RATE 31:29 Determines how the shader interprets the TRAP_STS.cycle. Different Vector
Shader Processors (VSP) process instructions at different rates.
Memory Buffer to LDS does NOT return a memory violation if the LDS address is out of range,
but masks off EXEC bits of threads that would go out of range.
When a memory access is in violation, the appropriate memory (LDS or TC) returns MEM_VIOL
to the wave. This is stored in the wave’s TRAPSTS.mem_viol bit. This bit is sticky, so once set
to 1, it remains at 1 until the user clears it.
There is a corresponding exception enable bit (EXCP_EN.mem_viol). If this bit is set when the
memory returns with a violation, the wave jumps to the trap handler.
Memory violations are not precise. The violation is reported when the LDS or TC processes the
address; by that time, the wave may have executed many more instructions. When a
mem_viol is reported, the Program Counter saved is that of the next instruction to execute; it
has no relationship to the faulting instruction.
S_ENDPGM Terminates the wavefront. It can appear anywhere in the kernel and can appear multiple
times.
S_ENDPGM_SAVED Terminates the wavefront due to context save. It can appear anywhere in the kernel and can
appear multiple times.
4.2. Branching
Branching is done using one of the following scalar ALU instructions.
Instruction Description
S_CBRANCH_<test> Conditional branch. Branch only if <test> is true. Tests are VCCZ, VCCNZ,
EXECZ, EXECNZ, SCCZ, and SCCNZ.
S_SETVSKIP Set a bit that causes all vector instructions to be ignored. A useful alternative
to branching.
For conditional branches, the branch condition can be determined by either scalar or vector
operations. A scalar compare operation sets the Scalar Condition Code (SCC), which then can
be used as a conditional branch condition. Vector compare operations set the VCC mask, and
VCCZ or VCCNZ then can be used to determine branching.
4.3. Workgroups
Work-groups are collections of wavefronts running on the same compute unit which can
synchronize and share data. Up to 16 wavefronts (1024 work-items) can be combined into a
work-group. When multiple wavefronts are in a workgroup, the S_BARRIER instruction can be
used to force each wavefront to wait until all other wavefronts reach the same instruction; then,
all wavefronts continue. Any wavefront can terminate early using S_ENDPGM, and the barrier is
considered satisfied when the remaining live waves reach their barrier instruction.
The shader has three counters that track the progress of issued instructions. S_WAITCNT waits
for the values of these counters to be at, or below, specified values before continuing.
These allow the shader writer to schedule long-latency instructions, execute unrelated work,
and specify when results of long-latency operations are needed.
Instructions of a given type return in order, but instructions of different types can complete out-
of-order. For example, both GDS and LDS instructions use LGKM_cnt, but they can return out-
of-order.
• VM_CNT: Vector memory count. Determines when vector-memory operations have
completed.
◦ Incremented every time a vector-memory read or write (MIMG, MUBUF, or MTBUF
format) instruction is issued.
◦ Decremented for reads when the data has been written back to the VGPRs, and for
writes when the data has been written to the L2 cache.
Ordering: Memory reads and writes return in the order they were issued, including
mixing reads and writes.
• LGKM_CNT: (LDS, GDS, (K)constant, (M)essage) Determines when one of these low-
latency instructions have completed.
◦ Incremented by 1 for every LDS or GDS instruction issued, as well as by Dword-count
for scalar-memory reads. For example, s_memtime counts the same as an
s_load_dwordx2.
◦ Decremented by 1 for LDS/GDS reads or atomic-with-return when the data has been
returned to VGPRs.
◦ Incremented by 1 for each S_SENDMSG issued. Decremented by 1 when message is
sent out.
◦ Decremented by 1 for LDS/GDS writes when the data has been written to LDS/GDS.
◦ Decremented by 1 for each Dword returned from the data-cache (SMEM).
Ordering:
▪ Instructions of different types are returned out-of-order.
▪ Instructions of the same type are returned in the order they were issued, except
scalar-memory-reads, which can return out-of-order (in which case S_WAITCNT 0
is the only legitimate value to use).
• EXP_CNT: VGPR-export count.
Determines when data has been read out of the VGPR and sent to GDS, at which time it is
safe to overwrite the contents of that VGPR.
◦ Incremented when an Export/GDS instruction is issued from the wavefront buffer.
◦ Decremented for exports/GDS when the last cycle of the export instruction is granted
and executed (VGPRs read out). Ordering
▪ Exports are kept in order only within each export type (color/null, position,
parameter cache).
VALU writes SGPR, then VMEM reads that SGPR: 5 wait states. Hardware assumes there is
no dependency here; if a VALU writes an SGPR that is then used by a VMEM, the user must
add five wait states.
VALU writes EXEC, then VALU DPP op: 5 wait states. The ALU does not forward EXEC to
DPP.
VALU writes VCC (v_cmp, v_add*_i/u, v_sub*_i/u, v_div_scale*, v_readlane,
v_readfirstlane), then a VALU op reads VCC as a constant via its SGPR alias (not as a
carry-in, which requires 0 wait states): 1 wait state. VCC can be accessed by name or by the
logical SGPR that holds VCC; the data-dependency check logic does not understand that
these are the same register and does not prevent races.
1. S_CBRANCH This case is used for simple control flow, where the decision to take a branch
is based on a previous compare operation. This is the most common method for conditional
branching.
2. S_CBRANCH_I/G_FORK and S_CBRANCH_JOIN This method, intended for complex,
irreducible control flow graphs, is described in the rest of this section. The performance of
this method is lower than that for S_CBRANCH on simple flow control; use it only when
necessary.
Conditional Branch (CBR) graphs are grouped into self-contained code blocks, denoted by
FORK at the entrance point, and JOIN and the exit point. The shader compiler must add these
instructions into the code. This method uses a six-deep stack and requires three SGPRs for
each fork/join block. Fork/Join blocks can be hierarchically nested to any depth (subject to
SGPR requirements); they also can coexist with other conditional flow control or computed
jumps.
This method compares how many of the 64 threads go down the PASS path instead of the FAIL
path; then, it selects the path with the fewer active threads first. This means at most 50% of
the threads are active after each fork, and this limits the necessary stack depth to log2(64) = 6.
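The fewer-threads-first rule above can be sketched in Python. This is an illustrative model, not a hardware description; the helper names (choose_first, worst_case_depth) are invented for the example.

```python
def choose_first(mask_pass: int, mask_fail: int):
    """Return (first_mask, deferred_mask): the side with fewer set bits runs first."""
    if bin(mask_fail).count("1") < bin(mask_pass).count("1"):
        return mask_fail, mask_pass
    return mask_pass, mask_fail

def worst_case_depth(n_threads: int) -> int:
    """Each nested fork runs at most half the remaining threads first, so the
    stack of deferred paths can be at most log2(n_threads) entries deep."""
    depth = 0
    while n_threads > 1:
        n_threads //= 2   # the side run first has at most half the threads
        depth += 1
    return depth
```

For a 64-thread wavefront, worst_case_depth(64) evaluates to 6, matching the six-deep stack stated above.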
The following pseudo-code shows the details of the CBRANCH Fork and Join operations.
SGPR[arg0] holds the condition mask from which mask_pass and mask_fail are derived.
S_CBRANCH_I_FORK arg0, #target_addr_offset
S_CBRANCH_G_FORK arg0, arg1
mask_pass = SGPR[arg0] & exec
mask_fail = ~SGPR[arg0] & exec
if (mask_pass == exec)
I_FORK : PC += 4 + target_addr_offset
G_FORK: PC = SGPR[arg1]
else if (mask_fail == exec)
PC += 4
else if (bitcount(mask_fail) < bitcount(mask_pass))
exec = mask_fail
I_FORK : SGPR[CSP*4] = { (pc + 4 + target_addr_offset), mask_pass }
G_FORK: SGPR[CSP*4] = { SGPR[arg1], mask_pass }
CSP++
PC += 4
else
exec = mask_pass
SGPR[CSP*4] = { (pc+4), mask_fail }
CSP++
I_FORK : PC += 4 + target_addr_offset
G_FORK: PC = SGPR[arg1]
S_CBRANCH_JOIN arg0
if (CSP == SGPR[arg0]) // SGPR[arg0] holds the CSP value when the FORK started
PC += 4 // this is the 2nd time to JOIN: continue with pgm
else
CSP -- // this is the 1st time to JOIN: jump to other FORK path
{PC, EXEC} = SGPR[CSP*4] // read 128-bits from 4 consecutive SGPRs
The lists of similar instructions sometimes use a condensed form using curly braces { } to
express a list of possible names. For example, S_AND_{B32, B64} defines two legal
instructions: S_AND_B32 and S_AND_B64.
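As a sketch, the condensed naming can be expanded mechanically. The helper below is hypothetical (not part of any AMD tooling) and simply splits the brace list:

```python
import re

def expand_names(condensed: str):
    """Expand a condensed name like 'S_AND_{B32,B64}' into the full list."""
    m = re.search(r"\{([^}]*)\}", condensed)
    if not m:
        return [condensed]          # no brace list: already a single name
    prefix, suffix = condensed[:m.start()], condensed[m.end():]
    return [prefix + part.strip() + suffix for part in m.group(1).split(",")]
```

For example, expand_names("S_AND_{B32,B64}") yields the two legal instruction names S_AND_B32 and S_AND_B64.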
In the table below, 0-127 can be used as scalar sources or destinations; 128-255 can only be
used as sources.
106 VCC_LO Holds the low Dword of the vector condition code
107 VCC_HI Holds the high Dword of the vector condition code
128 0 zero
236 SHARED_LIMIT
237 PRIVATE_BASE
238 PRIVATE_LIMIT
241 -0.5
242 1.0
243 -1.0
244 2.0
245 -2.0
246 4.0
247 -4.0
The SALU cannot use VGPRs or LDS. SALU instructions can use a 32-bit literal constant. This
constant is part of the instruction stream and is available to all SALU microcode formats except
SOPP and SOPK. Literal constants are used by setting the source instruction field to "literal"
(255), and then the following instruction dword is used as the source value.
If the destination SGPR is out-of-range, no SGPR is written with the result. However, SCC and
possibly EXEC (if saveexec) will still be written.
If an instruction uses 64-bit data in SGPRs, the SGPR pair must be aligned to an even
boundary. For example, it is legal to use SGPRs 2 and 3 or 8 and 9 (but not 11 and 12) to
represent 64-bit data.
S_MOV_{B32,B64} SOP1 n D = S0
{S_NAND,S_NOR,S_XNOR}_{B32,B64} SOP2 y D = ~(S0 & S1), ~(S0 OR S1), ~(S0 XOR S1)
S_BFM_{B32,B64} SOP2 n Bit field mask. D = ((1 << S0[4:0]) - 1) << S1[4:0].
S_BFE_U32, S_BFE_U64, S_BFE_I32, S_BFE_I64 SOP2 n Bit Field Extract; the result is
sign-extended for the I32/I64 (signed) variants.
S0 = data, S1[5:0] = offset, S1[22:16] = width.
S_FLBIT_I32_{B32,B64} SOP1 n Find last bit. D = the number of zeros before the
first one, starting from the MSB. Returns -1 if none.
S_FLBIT_I32, S_FLBIT_I32_I64 SOP1 n Count how many bits in a row (from MSB to LSB)
are the same as the sign bit. Returns -1 if the input
is zero or all 1's (-1). 32-bit pseudo-code:
if (S0 == 0 || S0 == -1)
    D = -1
else
    D = 0
    for (i = 31 .. 0)
        if (S0[i] == S0[31])
            D++
        else
            break
This opcode behaves the same as V_FFBH_I32.
S_SETREG_B32 SOPK* n Write the LSBs of D into a hardware register. (Note that D is a
source SGPR.) An S_NOP is required between two consecutive
S_SETREG instructions that write the same register.
S_SETREG_IMM32_B32 SOPK* n S_SETREG where 32-bit data comes from a literal constant (so
this is a 64-bit instruction format).
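The bit-field and find-first-bit definitions above can be transcribed into a runnable sketch. This is a 32-bit illustrative model of the table's semantics, not a hardware implementation; Python ints stand in for SGPR values.

```python
def s_bfm_b32(s0: int, s1: int) -> int:
    """S_BFM_B32: D = ((1 << S0[4:0]) - 1) << S1[4:0]."""
    return (((1 << (s0 & 0x1F)) - 1) << (s1 & 0x1F)) & 0xFFFFFFFF

def s_bfe_u32(s0: int, s1: int) -> int:
    """S_BFE_U32: S0 = data, S1[5:0] = offset, S1[22:16] = width."""
    offset = s1 & 0x3F
    width = (s1 >> 16) & 0x7F
    return (s0 >> offset) & ((1 << width) - 1)

def s_flbit_i32(s0: int) -> int:
    """S_FLBIT_I32: count leading bits equal to the sign bit;
    -1 for an input of zero or all ones, per the pseudo-code above."""
    s0 &= 0xFFFFFFFF
    if s0 in (0, 0xFFFFFFFF):
        return -1
    sign = (s0 >> 31) & 1
    d = 0
    for i in range(31, -1, -1):     # scan from MSB to LSB
        if (s0 >> i) & 1 == sign:
            d += 1
        else:
            break
    return d
```

For example, s_bfm_b32(4, 8) produces the mask 0xF00 (four bits, shifted by eight), and s_flbit_i32(1) returns 31 (thirty-one leading zeros before the first one).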
The hardware register is specified in the DEST field of the instruction, using the values in the
table above. Some bits of the DEST specify which register to read/write, but additional bits
specify which bits in the specific register to read/write:
0 reserved
1 MODE R/W.
3 TRAPSTS R/W.
8 - 15 reserved.
VM_CNT 15:14, Number of VMEM instructions issued but not yet returned.
3:0
EXP_CNT 6:4 Number of Exports issued but have not yet read their data from VGPRs.
LGKM_CNT 11:8 LDS, GDS, Constant-memory and Message instructions issued-but-not-completed count.
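The S_WAITCNT SIMM16 layout above can be sketched as a packing helper. This assumes the GFX9 convention that the 6-bit VM_CNT is split across SIMM16[3:0] (low) and SIMM16[15:14] (high); the function name is illustrative.

```python
def waitcnt_simm16(vm_cnt: int, exp_cnt: int, lgkm_cnt: int) -> int:
    """Pack counter targets into an S_WAITCNT immediate (GFX9-style layout)."""
    assert 0 <= vm_cnt < 64 and 0 <= exp_cnt < 8 and 0 <= lgkm_cnt < 16
    return ((vm_cnt & 0xF)              # VM_CNT[3:0]   -> SIMM16[3:0]
            | (exp_cnt & 0x7) << 4      # EXP_CNT       -> SIMM16[6:4]
            | (lgkm_cnt & 0xF) << 8     # LGKM_CNT      -> SIMM16[11:8]
            | (vm_cnt >> 4) << 14)      # VM_CNT[5:4]   -> SIMM16[15:14]
```

With all three arguments zero the immediate is 0, i.e. "wait for everything", which is the common S_WAITCNT 0 idiom.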
VGPR_BASE 5:0 Physical address of first VGPR assigned to this wavefront, as [7:2]
VGPR_SIZE 13:8 Number of VGPRs assigned to this wavefront, as [7:2]. 0=4 VGPRs, 1=8 VGPRs, etc.
SGPR_BASE 21:16 Physical address of first SGPR assigned to this wavefront, as [7:3].
SGPR_SIZE 27:24 Number of SGPRs assigned to this wave, as [7:3]. 0=8 SGPRs, 1=16 SGPRs, etc.
LDS_BASE 7:0 Physical address of first LDS location assigned to this wavefront, in units of 64 Dwords.
LDS_SIZE 20:12 Amount of LDS space assigned to this wavefront, in units of 64 Dwords.
Parameter interpolation is a mixed VALU and LDS instruction, and is described in the Data
Share chapter.
When an instruction is available in two microcode formats, it is up to the user to decide which to
use. It is recommended to use the 32-bit encoding whenever possible.
VOP2 is for instructions with two inputs and a single vector destination. Instructions that have a
carry-out implicitly write the carry-out to the VCC register.
VOP1 is for instructions with no inputs or a single input and one destination.
VOP3 is for instructions with up to three inputs, input modifiers (negate and absolute value), and
output modifiers. There are two forms of VOP3: one uses a scalar destination field (used
only for div_scale and integer add and subtract) and is designated VOP3b; all other instructions
use the common form, designated VOP3a.
Any of the 32-bit microcode formats may use a 32-bit literal constant, but not VOP3.
VOP3P is for instructions that use "packed math": They perform the operation on a pair of input
values that are packed into the high and low 16-bits of each operand; the two 16-bit results are
written to a single VGPR as two packed values.
6.2. Operands
All VALU instructions take at least one input operand (except V_NOP and V_CLREXCP). The
data-size of the operands is explicitly defined in the name of the instruction. For example,
V_MAD_F32 operates on 32-bit floating point data.
• VGPRs.
• SGPRs.
• Inline constants - constant selected by a specific VSRC value.
• Literal constant - a 32-bit value in the instruction stream. When a literal constant is used with
a 64-bit instruction, the literal is expanded to 64 bits by: padding the LSBs with zeros for
floats, padding the MSBs with zeros for unsigned ints, and sign-extending signed ints.
• LDS direct data read.
• M0.
• EXEC mask.
Limitations
• At most one SGPR can be read per instruction, but the value can be used for more than
one operand.
• At most one literal constant can be used, and only when an SGPR or M0 is not used as a
source.
• Only SRC0 can use LDS_DIRECT (see Chapter 10, "Data Share Operations").
Instructions using the VOP3 form and also using floating-point inputs have the option of
applying absolute value (ABS field) or negate (NEG field) to any of the input operands.
Literal constants are 32 bits, but they can be used as sources which normally require 64-bit
data:
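The expansion rules for 64-bit sources can be sketched as follows. The helper names are illustrative, and results are shown as 64-bit patterns.

```python
def expand_literal_f64(lit32: int) -> int:
    """Float: the 32-bit literal becomes the high Dword (LSBs padded with zeros)."""
    return (lit32 & 0xFFFFFFFF) << 32

def expand_literal_u64(lit32: int) -> int:
    """Unsigned int: MSBs padded with zeros (zero-extend)."""
    return lit32 & 0xFFFFFFFF

def expand_literal_i64(lit32: int) -> int:
    """Signed int: sign-extend from bit 31."""
    lit32 &= 0xFFFFFFFF
    return lit32 | (0xFFFFFFFF00000000 if lit32 >> 31 else 0)
```

Note that for floats this padding preserves the value exactly when the literal holds the upper half of a double: 0x3FF00000 expands to 0x3FF0000000000000, which is the double 1.0.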
All V_CMPX instructions write the result of their comparison (one bit per thread) to both an
SGPR (or VCC) and the EXEC mask.
Instructions producing a carry-out (integer add and subtract) write their result to VCC when used
in the VOP2 form, and to an arbitrary SGPR-pair when used in the VOP3 form.
When the VOP3 form is used, instructions with a floating-point result can apply an output
modifier (OMOD field) that multiplies the result by: 0.5, 1.0, 2.0 or 4.0. Optionally, the result can
be clamped (CLAMP field) to the range [0.0, +1.0].
Output modifiers apply only to floating-point results and are ignored for integer or bit results.
Output modifiers are not compatible with output denormals: if output denormals are enabled,
then output modifiers are ignored. If output denormals are disabled, then the output modifier is
applied and denormals are flushed to zero. Output modifiers are not IEEE compatible: -0 is
flushed to +0. Output modifiers are ignored if the IEEE mode bit is set to 1.
In the table below, any code can be used when the vector-source field is nine bits; codes 0 to
255 can be the scalar source when that field is eight bits; codes 0 to 127 can be the scalar
source when that field is seven bits; and codes 256 to 511 can be the vector source or
destination.
104 XNACK_MASK_LO
105 XNACK_MASK_HI
124 M0
125 reserved
128 0
236 SHARED_LIMIT
237 PRIVATE_BASE
238 PRIVATE_LIMIT
241 -0.5
242 1.0
243 -1.0
244 2.0
245 -2.0
246 4.0
247 -4.0
248 1/(2*PI) 1/(2*PI) is 0.15915494. The exact value used is:
half: 0x3118
single: 0x3e22f983
double: 0x3fc45f306dc9c882
254 LDS direct Use LDS direct read to supply a 32-bit value. Vector-ALU instructions only.
When the destination GPR is out-of-range, the instruction executes but does not write the
results.
6.3. Instructions
The table below lists the complete VALU instruction set by microcode encoding, except for
VOP3P instructions which are listed in a later section.
V_CVT_PKNORM_I16_F16
V_CVT_PKNORM_U16_F16
V_MAD_U32_U16
V_MAD_I32_I16
V_XAD_U32
V_MIN3_{F16,I16,U16}
V_MAX3_{F16,I16,U16}
V_MED3_{F16,I16,U16}
V_CVT_PKNORM_{I16_F16,U16_F16}
V_READLANE_REGRD_B32
V_PACK_B32_F16
V_FREXP_MANT_{F16,F32,F64}
V_FREXP_EXP_I32_F32
V_FREXP_EXP_I16_F16
V_CLREXCP
V_MOV_FED_B32
V_CVT_NORM_I16_F16
V_CVT_NORM_U16_F16
V_SAT_PK_U8_I16
V_WRITELANE_REGWR_B32
V_SWAP_B32
V_SCREEN_PARTITION_4SE_B32
V_CMP I16, I32, I64, U16, U32, U64 F, LT, EQ, LE, GT, LG, GE, T Writes VCC.
V_CMPX I16, I32, I64, U16, U32, U64 F, LT, EQ, LE, GT, LG, GE, T Writes VCC and EXEC.
V_CMP F16, F32, F64 F, LT, EQ, LE, GT, LG, GE, T, O, U, NGE, NLG, NGT, NLE, NEQ, NLT
(O = total order, U = unordered, N = NaN or normal compare) Writes VCC.
V_CMPX F16, F32, F64 (same tests) Writes VCC and EXEC.
V_CMP_CLASS F16, F32, F64 Test for one of: signaling-NaN, quiet-NaN,
positive or negative: infinity, normal, subnormal, zero. Writes VCC.
S_SET_GPR_IDX_ON SOPC N Enable VGPR indexing, and set the index value and mode
from an SGPR. mode.gpr_idx_en = 1
M0[7:0] = S0.u[7:0]
M0[15:12] = SIMM4
Indexing is enabled and disabled by a bit in the MODE register: gpr_idx_en. When enabled, two
fields from M0 are used to determine the index value and what it applies to:
• M0[7:0] holds the unsigned index value, added to selected source or destination VGPR
addresses.
• M0[15:12] holds a four-bit mask indicating to which source or destination the index is
applied.
◦ M0[15] = dest_enable.
◦ M0[14] = src2_enable.
◦ M0[13] = src1_enable.
◦ M0[12] = src0_enable.
Indexing only works on VGPR source and destinations, not on inline constants or SGPRs. It is
illegal for the index attempt to address VGPRs that are out of range.
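A sketch of the index-application rule, assuming the M0 layout described above. The helper name is illustrative, and this models only the address arithmetic, not the MODE.gpr_idx_en enable.

```python
def apply_gpr_index(m0: int, vgpr: int, operand: str) -> int:
    """Return the effective VGPR number for one operand of an instruction."""
    index = m0 & 0xFF                       # M0[7:0]: unsigned index value
    enables = (m0 >> 12) & 0xF              # M0[15:12]: dest/src2/src1/src0 enables
    bit = {"src0": 0, "src1": 1, "src2": 2, "dest": 3}[operand]
    return vgpr + index if (enables >> bit) & 1 else vgpr
```

For example, with M0 = 0x1005 (index 5, src0 enable only), src0's VGPR address is offset by 5 while src1's is unchanged.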
v_mac_* dst = src0 * src1 + dst (equivalent mad: dst, src0, src1, dst; dst takes the place of src2)
v_madak dst = src0 * src1 + imm (equivalent mad: dst, src0, src1, imm; the literal takes the place of src2)
v_madmk dst = src0 * imm + src1 (equivalent mad: dst, src0, imm, src1; the literal takes the place of src1)
where:
src = vector source
SS = scalar source
dst = vector destination
sdst = scalar destination
Packed math uses the instructions below and the microcode format "VOP3P". This format adds
op_sel and neg fields for both the low and high operands, and removes ABS and OMOD.
V_MAD_MIX_* are not packed math, but perform a single MAD operation on
a mixture of 16- and 32-bit inputs. They are listed here because they use the
VOP3P encoding.
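As an illustration of packed math, the sketch below models a packed unsigned 16-bit add in the spirit of V_PK_ADD_U16: the operation is applied independently to the low and high halves of each 32-bit operand. The op_sel and neg modifiers are omitted, and this is not a hardware model.

```python
def v_pk_add_u16(s0: int, s1: int) -> int:
    """Add the low halves and the high halves separately (wrapping at 16 bits)."""
    lo = ((s0 & 0xFFFF) + (s1 & 0xFFFF)) & 0xFFFF
    hi = (((s0 >> 16) & 0xFFFF) + ((s1 >> 16) & 0xFFFF)) & 0xFFFF
    return (hi << 16) | lo
```

Note that the two halves do not carry into each other: 0xFFFF + 1 in the low half wraps to 0 without disturbing the high half.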
The scalar unit reads and writes consecutive Dwords between memory and the SGPRs. This is
intended primarily for loading ALU constants and for indirect T#/S# lookup. No data formatting is
supported, nor is byte or short data.
OP 8 Opcode.
SBASE 6 SGPR-pair (SBASE has an implied LSB of zero) which provides a base address, or for BUFFER
instructions, a set of 4 SGPRs (4-sgpr aligned) which hold the resource constant. For BUFFER
instructions, the only resource fields used are: base, stride, num_records.
OFFSET 20 An unsigned byte offset, or the address of an SGPR holding the offset. Writes and atomics: M0 or
immediate only, not SGPR.
NV 1 Non-volatile.
7.2. Operations
S_SCRATCH_LOAD / S_SCRATCH_STORE:
1 0 SGPR[base] + inst_offset
All components of the address (base, offset, inst_offset, M0) are in bytes, but the two LSBs are
ignored and treated as if they were zero. S_DCACHE_DISCARD ignores the six LSBs to make
the address 64-byte-aligned.
Scalar access to private space must either use a buffer constant or manually convert the
address:
"Hidden private base" is not available to the shader through hardware: It must be preloaded into
an SGPR or made available through a constant buffer. This is equivalent to what the driver must
do to calculate the base address from scratch for buffer constants.
A scalar instruction must not overwrite its own source registers because of the possibility of the
instruction being replayed due to an ATC XNACK. Similarly, instructions in scalar memory
clauses must not overwrite the sources of any of the instructions in the clause. A clause is
defined as a string of memory instructions of the same type; a clause is broken by any non-
memory instruction.
Atomics are a different case because they are naturally aligned and they must be in a single-
instruction clause. By definition, an atomic that returns the pre-op value overwrites its data
source, which is acceptable.
Buffer constant fields used: base_address, stride, num_records, NV. Other fields are ignored.
Scalar memory read/write does not support "swizzled" buffers. Stride is used only for memory
address bounds checking, not for computing the address to access.
The SMEM supplies only a SBASE address (byte) and an offset (byte or Dword). Any "index *
stride" must be calculated manually in shader code and added to the offset prior to the SMEM.
The two LSBs of V#.base and of the final address are ignored to force Dword alignment.
Scalar atomic operations can return the "pre-operation value" to the SDATA SGPRs.
This is enabled by setting the microcode GLC bit to 1.
7.2.4. S_MEMTIME
This instruction reads a 64-bit clock counter into a pair of SGPRs: SDST and SDST+1.
7.2.5. S_MEMREALTIME
This instruction reads a 64-bit "real time-counter" and returns the value into a pair of SGPRS:
SDST and SDST+1. The time value is from a clock for which the frequency is constant (not
affected by power modes or core clock frequency changes).
Because SMEM instructions can return out-of-order, the only sensible way to use this counter is
to issue S_WAITCNT 0 first; this imposes a wait for all data to return from previous SMEM
instructions before continuing.
SBASE
The value of SBASE must be even for S_BUFFER_LOAD (specifying the address of an
SGPR which is a multiple of four). If SBASE is out-of-range, the value from SGPR0 is used.
OFFSET
The value of OFFSET has no alignment restrictions.
Memory Address : If the memory address is out-of-range (clamped), the operation is not
performed for any Dwords that are out-of-range.
Software initiates a load, store, or atomic operation through the texture cache using one of
three types of VMEM instructions:
The instruction defines which VGPR(s) supply the addresses for the operation, which VGPRs
supply or receive data from the operation, and a series of SGPRs that contain the memory
buffer descriptor (V# or T#). Also, MIMG operations supply a texture sampler from a series of
four SGPRs; this sampler defines texel filtering operations to be performed on data read from
the image.
Buffer reads have the option of returning data to VGPRs or directly into LDS.
Examples of buffer objects are vertex buffers, raw buffers, stream-out buffers, and structured
buffers.
Buffer objects support both homogeneous and heterogeneous data, but no filtering of read-data
(no samplers). Buffer instructions are divided into two groups:
Atomic operations take data from VGPRs and combine them arithmetically with data already in
memory. Optionally, the value that was in memory before the operation took place can be
returned to the shader.
All VM operations use a buffer resource constant (V#) which is a 128-bit value in SGPRs. This
constant is sent to the texture cache when the instruction is executed. This constant defines the
address and characteristics of the buffer in memory. Typically, these constants are fetched from
memory using scalar memory reads prior to executing VM instructions, but these constants also
can be generated within the shader.
The D16 instruction variants convert the results to packed 16-bit values. For example,
BUFFER_LOAD_FORMAT_D16_XYZW will write two VGPRs.
MTBUF Instructions
TBUFFER_LOAD_FORMAT_{x,xy,xyz,xyzw} Read from, or write to, a typed buffer object. Also used for a vertex
TBUFFER_STORE_FORMAT_{x,xy,xyz,xyzw} fetch.
MUBUF Instructions
VADDR 8 Address of VGPR to supply first component of address (offset or index). When both index and
offset are used, index is in the first VGPR, offset in the second.
VDATA 8 Address of VGPR to supply first component of write data or receive first component of read-
data.
SOFFSET 8 SGPR to supply unsigned byte offset. Must be an SGPR, M0, or inline constant.
SRSRC 5 Specifies which SGPR supplies T# (resource constant) in four or eight consecutive SGPRs.
This field is missing the two LSBs of the SGPR address, since this address must be aligned to
a multiple of four SGPRs.
GLC 1 Globally Coherent. Controls how reads and writes are handled by the L1 texture cache.
READ
GLC = 0 Reads can hit on the L1 and persist across wavefronts
GLC = 1 Reads miss the L1 and force fetch to L2. No L1 persistence across waves.
WRITE
GLC = 0 Writes miss the L1, write through to L2, and persist in L1 across wavefronts.
GLC = 1 Writes miss the L1, write through to L2. No persistence across wavefronts.
ATOMIC
GLC = 0 Previous data value is not returned. No L1 persistence across wavefronts.
GLC = 1 Previous data value is returned. No L1 persistence across wavefronts.
Note: GLC means "return pre-op value" for atomics.
SLC 1 System Level Coherent. When set, accesses are forced to miss in level 2 texture cache and
are coherent with system memory.
TFE 1 Texel Fail Enable for PRT (partially resident textures). When set to 1, fetch can return a NACK
that causes a VGPR write into DST+1 (first GPR after all fetch-dest GPRs).
Address
Zero, one, or two VGPRs are used, depending on the offset-enable (OFFEN) and index-
enable (IDXEN) bits in the instruction word, as shown in the table below:
IDXEN OFFEN VGPRs supplied
0 0 nothing
0 1 uint offset
1 0 uint index
1 1 uint index, uint offset
Write Data : N consecutive VGPRs, starting at VDATA. The data format specified in the
instruction word (NFMT, DFMT for MTBUF, or encoded in the opcode field for MUBUF)
determines how many Dwords to write.
Read Data Format : Read data is 32 bits, based on the data format in the instruction or
resource. Float or normalized data is returned as floats; integer formats are returned as integers
(signed or unsigned, same type as the memory storage format). Memory reads of data in
Atomics with Return : Data is read out of the VGPR(s) starting at VDATA to supply to the
atomic operation. If the atomic returns a value to VGPRs, that data is returned to those same
VGPRs starting at VDATA.
Instruction : The instruction’s dfmt and nfmt fields are used instead of the resource’s fields.
Data format derived : The data format is derived from the opcode and ignores the resource
definition. For example, buffer_load_ubyte sets the data-format to 8 and number-format to uint.
The resource’s data format must not be INVALID; that format has specific
meaning (unbound resource), and for that case the data format is not
replaced by the instruction’s implied data format.
DST_SEL identity : Depending on the number of components in the data-format, this is: X000,
XY00, XYZ0, or XYZW.
The MTBUF derives the data format from the instruction. The MUBUF
BUFFER_LOAD_FORMAT and BUFFER_STORE_FORMAT instructions use dst_sel from the
resource; other MUBUF instructions derive data-format from the instruction itself.
D16 Instructions : Load-format and store-format instructions also come in a "d16" variant. For
stores, each 32-bit VGPR holds two 16-bit data elements that are passed to the texture unit.
This texture unit converts them to the texture format before writing to memory. For loads, data
returned from the texture unit is converted to 16 bits, and a pair of data are stored in each 32-bit
VGPR (LSBs first, then MSBs). Control over int vs. float is controlled by NFMT.
inst_idxen 1 Boolean: get index from VGPR when true, or no index when false.
inst_offen 1 Boolean: get offset from VGPR when true, or no offset when false. Note that inst_offset is
present, regardless of this bit.
The "element size" for a buffer instruction is the amount of data the instruction transfers. It is
determined by the DFMT field for MTBUF instructions, or from the opcode for MUBUF
instructions. It can be 1, 2, 4, 8, or 16 bytes.
const_add_tid_enable 1 Boolean. Add thread_ID within the wavefront to the index when true.
Range Checking
Addresses can be checked to see if they are in or out of range. When an address is out of
range, reads will return zero, and writes and atomics will be dropped. The address range check
algorithm depends on the buffer type.
Raw Buffer
Used when: AddTID==0 && SWizzleEn==0 && IdxEn==0
Out of Range if: (InstOffset + (OffEN ? vgpr_offset : 0)) >= NumRecords
Structured Buffer
Used when: AddTID==0 && Stride!=0 && IdxEn==1
Out of Range if: Index(vgpr) >= NumRecords
Notes:
1. Reads that go out-of-range return zero (except for components with V#.dst_sel = SEL_1,
which return 1).
2. Writes that are out-of-range do not write anything.
3. Load/store-format-* instructions and atomics are range-checked "all or nothing": either
entirely in range or entirely out.
4. Load/store-Dword-x{2,3,4} instructions are range-checked per component.
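The two range-check rules above can be sketched as follows (function names are illustrative):

```python
def raw_out_of_range(inst_offset: int, vgpr_offset: int, offen: bool,
                     num_records: int) -> bool:
    """Raw buffer: out of range if (InstOffset + (OffEN ? vgpr_offset : 0)) >= NumRecords."""
    return inst_offset + (vgpr_offset if offen else 0) >= num_records

def structured_out_of_range(index: int, num_records: int) -> bool:
    """Structured buffer: out of range if the VGPR-supplied index >= NumRecords."""
    return index >= num_records
```

An out-of-range read then returns zero (subject to the dst_sel exception above), and an out-of-range write or atomic is dropped.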
Swizzled addressing rearranges the data in the buffer and can help provide improved cache
locality for arrays of structures. Swizzled addressing also requires Dword-aligned accesses. A
single fetch instruction cannot attempt to fetch a unit larger than const-element-size. The
buffer’s STRIDE must be a multiple of element_size.
Remember that the "sgpr_offset" is not a part of the "offset" term in the above equations.
Here are a few proposed uses of swizzled addressing in common graphics buffers.
inst_vgpr_offset_en T F T T T T
inst_vgpr_index_en F T T F F F
const_add_tid_enable F F F T T F
const_buffer_swizzle F T T T F F
const_elem_size na 4 4 4 or 16 na 4
const_index_stride na 16 16 64
• D16 loads data into or stores data from the lower 16 bits of a VGPR.
• D16_HI loads data into or stores data from the upper 16 bits of a VGPR.
For example, BUFFER_LOAD_UBYTE_D16 reads a byte per work-item from memory, converts
it to a 16-bit integer, then loads it into the lower 16 bits of the data VGPR.
8.1.7. Alignment
For Dword or larger reads or writes, the two LSBs of the byte-address are ignored, thus forcing
Dword alignment.
The table below details the fields that make up the buffer resource descriptor.
62 1 Cache swizzle Buffer access. Optionally, swizzle texture cache TC L1 cache banks.
104:102 3 Dst_sel_z
107:105 3 Dst_sel_w
110:108 3 Num format Numeric data type (float, int, …). See instruction encoding for values.
114:111 4 Data format Number of fields and size of each field. See instruction encoding for
values. For MUBUF instructions with ADD_TID_EN = 1. This field
holds Stride [17:14].
116 1 User VM mode Unmapped behavior: 0: null (return 0 / drop write); 1: invalid (results in
an error).
118:117 2 Index stride 8, 16, 32, or 64. Used for swizzled buffer addressing.
119 1 Add tid enable Add the thread ID to the index to calculate the address.
127:126 2 Type Value == 0 for buffer. Overlaps upper two bits of four-bit TYPE field in
128-bit T# resource.
A resource set to all zeros acts as an unbound texture or buffer (return 0,0,0,0).
The figure below shows the components of the LDS and memory address calculation:
TIDinWave is added to the memory address only if the resource (T#) has the ADD_TID_ENABLE
field set to 1, whereas the LDS address always adds it. The MEM_ADDR M# is in the VDATA
field; it specifies M0.
Clamping Rules
Memory address clamping follows the same rules as any other buffer fetch. LDS address
clamping: the return data must not be written outside the LDS space allocated to this wave.
• Set the active-mask to limit buffer reads to those threads that return data to a legal LDS
location.
• The LDSbase (alloc) is in units of 32 Dwords, as is LDSsize.
• M0[15:0] is in bytes.
• For GLC==0
◦ The load can read data from the GPU L1.
◦ Typically, all loads (except load-acquire) use GLC==0.
• For GLC==1
◦ The load intentionally misses the GPU L1 and reads from L2. If there was a line in the
GPU L1 that matched, it is invalidated; L2 is reread.
◦ NOTE: L2 is not re-read for every work-item in the same wavefront for a single load
instruction. For example, assume b = uav[N+tid] is a byte read with glc==1 and N is
aligned to 64 B. In this operation, the first thread of the wavefront brings in the line
from L2 or beyond, and the other 63 threads read from the same 64 B cache line in
the L1.
• For GLC==0: The store operation write-combines across work-items of the wavefront;
dirtied lines are written to the L2 automatically.
◦ If the store operation dirtied all bytes of the 64 B line, it is left clean and valid in the L1;
subsequent accesses to the cache are allowed to hit on this cache line.
◦ Else do not leave write-combined lines in L1.
• For GLC==1: Same as GLC==0, except the write-combined lines are not left in the L1,
even if all bytes are dirtied.
Atomics
Image objects are accessed using one- to four-dimensional addresses; they are composed
of homogeneous data of one to four elements. These image objects are read from, or written to,
using IMAGE_* or SAMPLE_* instructions, all of which use the MIMG instruction format.
IMAGE_LOAD instructions read an element from the image buffer directly into VGPRS, and
SAMPLE instructions use sampler constants (S#) and apply filtering to the data after it is read.
IMAGE_ATOMIC instructions combine data from VGPRs with data already in memory, and
optionally return the value that was in memory before the operation.
All VM operations use an image resource constant (T#) that is a 256-bit value in SGPRs. This
constant is sent to the texture cache when the instruction is executed. This constant defines the
address, data format, and characteristics of the surface in memory. Some image instructions
also use a sampler constant that is a 128-bit constant in SGPRs. Typically, these constants are
fetched from memory using scalar memory reads prior to executing VM instructions, but these
constants can also be generated within the shader.
Texture fetch instructions have a data mask (DMASK) field. DMASK specifies how many data
components it receives. If DMASK is less than the number of components in the texture, the
texture unit only sends DMASK components, starting with R, then G, B, and A. If DMASK
specifies more components than the texture format provides, the shader receives zero for the
missing components.
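The DMASK behavior described above can be sketched as follows. This is an illustrative model only, not hardware-exact; the function name `dmask_load` and its arguments are hypothetical.

```python
# Illustrative model (assumed, not hardware-exact) of MIMG DMASK reads:
# for each requested component (R,G,B,A order), the texture supplies the
# value if the format stores it, otherwise the shader receives zero.
def dmask_load(texel, dmask, fmt_components):
    """texel: up to 4 stored component values in R,G,B,A order.
    fmt_components: how many components the texture format stores.
    Returns the values delivered to consecutive VGPRs."""
    out = []
    for comp in range(4):                # 0=red, 1=green, 2=blue, 3=alpha
        if dmask & (1 << comp):          # component requested by DMASK?
            if comp < fmt_components:
                out.append(texel[comp])  # texture supplies this component
            else:
                out.append(0)            # missing components read as zero
    return out

# DMASK=0b1001 returns only R and A, packed into two consecutive VGPRs.
print(dmask_load((10, 20, 30, 40), 0b1001, 4))   # [10, 40]
# Requesting alpha from a two-component (RG) format returns zero for it.
print(dmask_load((10, 20, 0, 0), 0b1001, 2))     # [10, 0]
```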
IMAGE_LOAD_<op> Read data from an image object using one of the following: image_load,
image_load_mip, image_load_{pck, pck_sgn, mip_pck, mip_pck_sgn}.
IMAGE_STORE Store data to an image object.
IMAGE_STORE_MIP Store data to a specific mipmap level.
IMAGE_ATOMIC_<op> Image atomic operation, which is one of the following: swap, cmpswap, add, sub,
rsub, {u,s}{min,max}, and, or, xor, inc, dec, fcmpswap, fmin, fmax.
OP 7 Opcode.
VDATA 8 Address of VGPR to supply first component of write data or receive first component of read-data.
SSAMP 5 SGPR to supply S# (sampler constant) in four consecutive SGPRs. The two LSBs of the SGPR
address are omitted, since it must be aligned to a multiple of four SGPRs.
SRSRC 5 SGPR to supply T# (resource constant) in four or eight consecutive SGPRs. The two LSBs of
the SGPR address are omitted, since it must be aligned to a multiple of four SGPRs.
UNRM 1 Force address to be un-normalized regardless of T#. Must be set to 1 for image stores and
atomics.
DMASK 4 Data VGPR enable mask: one to four consecutive VGPRs. Reads: defines which components
are returned.
0 = red, 1 = green, 2 = blue, 3 = alpha
Writes: defines which components are written with data from VGPRs (missing components get
0). Enabled components come from consecutive VGPRs.
For example, DMASK=1001: Red is in VGPRn and alpha in VGPRn+1. For D16 writes, DMASK
is used only as a word count: each bit represents 16 bits of data to be written, starting at the
LSBs of VADDR, then the MSBs, then VADDR+1, etc. Bit position is ignored.
GLC 1 Globally Coherent. Controls how reads and writes are handled by the L1 texture cache.
READ:
GLC = 0 Reads can hit on the L1 and persist across waves.
GLC = 1 Reads miss the L1 and force fetch to L2. No L1 persistence across waves.
WRITE:
GLC = 0 Writes miss the L1, write through to L2, and persist in L1 across wavefronts.
GLC = 1 Writes miss the L1, write through to L2. No persistence across wavefronts.
ATOMIC:
GLC = 0 Previous data value is not returned. No L1 persistence across wavefronts.
GLC = 1 Previous data value is returned. No L1 persistence across wavefronts.
SLC 1 System Level Coherent. When set, accesses are forced to miss in level 2 texture cache and are
coherent with system memory.
TFE 1 Texel Fail Enable for PRT (partially resident textures). When set, a fetch can return a NACK,
which causes a VGPR write into DST+1 (first GPR after all fetch-dest GPRs).
LWE 1 LOD Warning Enable. When set to 1, a texture fetch may return "LOD_CLAMPED = 1".
A16 1 Address components are 16-bits (instead of the usual 32 bits). When set, all address
components are 16 bits (packed into two per Dword), except:
Texel offsets (three 6-bit uint packed into one Dword).
PCF reference (for _C instructions).
Address components are 16-bit uint for image ops without sampler; 16-bit float with sampler.
D16 1 VGPR-Data-16bit. On loads, convert data in memory to 16-bit format before storing it in VGPRs.
For stores, convert 16-bit data in VGPRs to 32 bits before going to memory. Whether the data is
treated as float or int is decided by NFMT. Allowed only with these opcodes:
IMAGE_SAMPLE*
IMAGE_GATHER4*, but not GATHER4H_PCK
IMAGE_LOAD
IMAGE_LOAD_MIP
IMAGE_STORE
IMAGE_STORE_MIP
The table below shows the contents of address VGPRs for the various image opcodes.
ACNT  Dim       Address components
1     1D Array  x slice
1     2D        x y
2     2D MSAA   x y fragid
2     2D Array  x y slice
2     3D        x y z
2     Cube      x y face_id
2     2D        x y mipid
3     3D        x y z mipid
Certain sample and gather opcodes require additional values from VGPRs beyond what is
shown. These values are: offset, bias, z-compare, and gradients.
Opcode      ACNT  Dim            Address components
sample      0     1D             x
sample      1     1D Array       x slice
sample      1     2D             x y
sample      2     2D interlaced  x y field
sample      2     2D Array       x y slice
sample      2     3D             x y z
sample      2     Cube           x y face_id
sample_l    1     1D             x lod
sample_l    2     2D             x y lod
sample_l    3     3D             x y z lod
sample_cl   1     1D             x clamp
sample_cl   2     2D             x y clamp
sample_cl   3     3D             x y z clamp
gather4     1     2D             x y
gather4     2     2D interlaced  x y field
gather4     2     2D Array       x y slice
gather4     2     Cube           x y face_id
gather4_l   2     2D             x y lod
gather4_cl  2     2D             x y clamp
The table below lists and briefly describes the legal suffixes for image instructions:
_B LOD BIAS 1: lod bias Add this BIAS to the LOD TA computes.
_CL LOD CLAMP - Clamp the LOD to be no larger than this value.
_D Derivative 2, 4 or 6: slopes Send dx/dv, dx/dy, etc. slopes to TA for it to use in LOD computation.
_CD Coarse Derivative Send dx/dv, dx/dy, etc. slopes to TA for it to use in LOD computation.
_O Offset 1: offsets Send X, Y, Z integer offsets (packed into 1 Dword) to offset XYZ
address.
• Body: One to four Dwords, as defined by the table: [Image Opcodes with Sampler] Address
components are X,Y,Z,W with X in VGPR_M, Y in VGPR_M+1, etc. The number of
components in "body" is the value of the ACNT field in the table, plus one.
• Data: Written from, or returned to, one to four consecutive VGPRs. The amount of data read
or written is determined by the DMASK field.
51:40 12 min lod 4.8 (four uint bits, eight fraction bits) format.
62 1 NV Non-volatile (0=volatile)
98:96 3 dst_sel_x 0 = 0, 1 = 1, 4 = R, 5 = G, 6 = B, 7 = A.
101:99 3 dst_sel_y
104:102 3 dst_sel_z
107:105 3 dst_sel_w
111:108 4 base level largest mip level in the resource view. For msaa, set to zero.
159:157 3 border color swizzle Specifies the channel ordering for border color independent of the T#
dst_sel fields. 0=xyzw, 1=xwyz, 2=wzyx, 3=wxyz, 4=zyxw, 5=yxwz
176:173 4 Array Pitch array pitch for quilts, encoded as: trunc(log2(array_pitch))+1
203:192 12 min LOD warn Feedback trigger for LOD, in U4.8 format.
214 1 Alpha is on MSB Set to 1 if the surface’s component swap is not reversed (DCC)
255:216 40 Meta Data Address Upper bits of meta-data address (DCC) [47:8]
All image resource view descriptors (T#'s) are written by the driver as 256 bits.
The MIMG-format instructions have a DeclareArray (DA) bit that reflects whether the shader
was expecting an array-texture or simple texture to be bound. When DA is zero, the hardware
does not send an array index to the texture cache. If the texture map was indexed, the hardware
supplies an index value of zero. Indices sent for non-indexed texture maps are ignored.
5:3 3 clamp y
8:6 3 clamp z
19 1 mc coord trunc
20 1 force degamma
27 1 trunc coord
59:56 4 perf_mip
63:60 4 perf z
89:88 2 z filter
94 1 Filter_Prec_Fix
125:108 18 unused
127:126 2 border color type Opaque-black, transparent-black, white, use border color ptr.
It is the shader developer’s responsibility to avoid data hazards associated with VMEM
instructions; this includes waiting for VMEM read-instruction completion before reading the data
fetched from the TC (VMCNT).
ADDR 8 VGPR which holds the address. For 64-bit addresses, ADDR has the LSBs, and ADDR+1 has
the MSBs.
DATA 8 VGPR which holds the first Dword of data. Instructions can use 0-4 Dwords.
VDST 8 VGPR destination for data returned to the kernel, either from LOADs or Atomics with GLC=1
(return pre-op value).
SLC 1 System Level Coherent. Used in conjunction with GLC to determine cache policies.
GLC 1 Global Level Coherent. For Atomics, GLC: 1 means return pre-op value, 0 means do not return
pre-op value.
LDS 1 When set, data is moved between LDS and memory instead of VGPRs and memory. For Global
and Scratch only; must be zero for Flat.
SADDR 7 Scalar SGPR that provides an offset address. To disable, set this field to 0x7F. Meaning of this
field is different for Scratch and Global:
Flat: Unused.
Scratch: Use an SGPR (instead of VGPR) for the address.
Global: Use the SGPR to provide a base address; the VGPR provides a 32-bit offset.
M0 16 Implied use of M0 for SCRATCH and GLOBAL only when LDS=1. Provides the LDS address-
offset.
The atomic instructions above are also available in "_X2" versions (64-bit).
9.2. Instructions
The FLAT instruction set is nearly identical to the Buffer instruction set, but without the FORMAT
reads and writes. Unlike Buffer instructions, FLAT instructions cannot return data directly to
LDS, but only to VGPRs.
FLAT instructions do not use a resource constant (V#) or sampler (S#); however, they do require
a SGPR-pair to hold scratch-space information in case any threads' address resolves to scratch
space. See the Scratch section for details.
Internally, FLAT instructions are executed as both an LDS and a Buffer instruction; so, they
increment both VM_CNT and LGKM_CNT and are not considered done until both have been
decremented. There is no way to determine beforehand whether a FLAT instruction uses only
LDS or TA memory space.
9.2.1. Ordering
Flat instructions can complete out of order with each other. If one flat instruction finds all of its
data in the texture cache, and the next finds all of its data in LDS, the second instruction might
complete first. If the two fetches return data to the same VGPR, the results are unknown.
FLAT instructions increment both the VM_CNT and LGKM_CNT counters; because of this, the
only sensible S_WAITCNT value to use after FLAT instructions is zero.
9.3. Addressing
FLAT instructions support both 64- and 32-bit addressing. The address size is set using a mode
register (PTR32), and a local copy of the value is stored per wave.
The addresses for the aperture check differ in 32- and 64-bit mode; however, this is not covered
here.
64-bit addresses are stored with the LSBs in the VGPR at ADDR, and the MSBs in the VGPR at
ADDR+1.
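The LSB/MSB split above can be made concrete with a small sketch. This is illustrative only; the helper names `split_addr64` and `join_addr64` are hypothetical.

```python
# Minimal sketch of how a 64-bit FLAT address occupies a VGPR pair:
# LSBs in VGPR[ADDR], MSBs in VGPR[ADDR+1].
def split_addr64(addr):
    return addr & 0xFFFFFFFF, (addr >> 32) & 0xFFFFFFFF

def join_addr64(lo, hi):
    return (hi << 32) | lo

lo, hi = split_addr64(0x0000_1234_8000_0010)
print(hex(lo), hex(hi))      # 0x80000010 0x1234
assert join_addr64(lo, hi) == 0x0000_1234_8000_0010
```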
For scratch space, the texture unit takes the address from the VGPR and does the following.
9.4. Global
Global instructions are similar to Flat instructions, but the programmer must ensure that no
threads access LDS space; thus, no LDS bandwidth is used by global instructions.
These instructions also allow direct data movement between LDS and memory without going
through VGPRs.
Since these instructions do not access LDS, only VM_CNT is used, not LGKM_CNT. If a global
instruction does attempt to access LDS, the instruction returns MEM_VIOL.
9.5. Scratch
Scratch instructions are similar to Flat, but the programmer must ensure that no threads access
LDS space, and the memory space is swizzled. Thus, no LDS bandwidth is used by scratch
instructions.
Scratch instructions also support multi-Dword access and mis-aligned access (although mis-
aligned is slower).
The size of the address component is dependent on the ADDRESS_MODE: 32-bits or 64-bit
pointers. The VGPR-offset is 32 bits.
These instructions also allow direct data movement between LDS and memory without going
through VGPRs.
Since these instructions do not access LDS, only VM_CNT is used, not LGKM_CNT. It is not
possible for a Scratch instruction to access LDS; thus, no error or aperture checking is done.
The policy for threads with bad addresses is: writes outside this range do not write a value, and
reads return zero.
Addressing errors from either LDS or TA are returned on their respective "instruction done"
busses as MEM_VIOL. This sets the wave’s MEM_VIOL TrapStatus bit and causes an
exception (trap) if the corresponding EXCPEN bit is set.
9.7. Data
FLAT instructions can use zero to four consecutive Dwords of data in VGPRs and/or memory.
The DATA field determines which VGPR(s) supply source data (if any), and the VDST VGPRs
hold return data (if any). No data-format conversion is done.
FLAT_SCRATCH is a 64-bit, byte address. The shader composes the value by adding together
two separate values: the base address, which can be passed in via an initialized SGPR, or
perhaps through a constant buffer, and the per-wave allocation offset (also initialized in an
SGPR).
10.1. Overview
The figure below shows the conceptual framework of how the LDS is integrated into the
memory of AMD GPUs using OpenCL.
Physically located on-chip, directly next to the ALUs, the LDS is approximately one order of
magnitude faster than global memory (assuming no bank conflicts).
There is 64 kB of memory per compute unit, segmented into 32 banks of 512 Dwords. Each
bank is a 256x32 two-port RAM (1R/1W per clock cycle). Dwords are placed in the banks
serially, but all banks can execute a store or load simultaneously. One work-group can request
up to 64 kB of memory. Reads across a wavefront are dispatched over four cycles in waterfall.
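The banked layout above can be illustrated with a simple model. This is an assumed sketch (Dwords striped across 32 banks), not a hardware-exact description; `lds_bank` is a hypothetical helper.

```python
# Illustrative model of LDS banking: Dwords are striped across 32 banks,
# so the bank index is the Dword address modulo 32. Two lanes touching
# the same bank at different addresses in one cycle would conflict.
def lds_bank(byte_addr):
    return (byte_addr // 4) % 32

# Stride-1 Dword accesses across 32 lanes hit 32 distinct banks: no conflict.
banks = [lds_bank(4 * lane) for lane in range(32)]
assert len(set(banks)) == 32

# Stride-32 Dword (128-byte) accesses all map to bank 0: fully serialized.
banks = [lds_bank(128 * lane) for lane in range(32)]
assert set(banks) == {0}
```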
The high bandwidth of the LDS memory is achieved not only through its proximity to the ALUs,
but also through simultaneous access to its memory banks; thus, it is possible to concurrently
service an independent access in each bank.
To load data into LDS from global memory, it is read from global memory and placed into the
work-item’s registers; then, a store is performed to LDS. Similarly, to store data into global
memory, data is read from LDS and placed into the work-item’s registers, then placed into global
memory. To make effective use of the LDS, an algorithm must perform many operations on what
is transferred between global memory and LDS. It also is possible to load data from a memory
buffer directly into LDS, bypassing VGPRs.
LDS atomics are performed in the LDS hardware. (Thus, although ALUs are not directly used for
these operations, latency is incurred by the LDS executing this function.)
• Direct Read
• Parameter Read
• Indexed or Atomic
LDS Direct reads occur in vector ALU (VALU) instructions and allow the LDS to supply a single
DWORD value which is broadcast to all threads in the wavefront and is used as the SRC0 input
to the ALU operations. A VALU instruction indicates that its input is to be supplied by LDS by
using LDS_DIRECT as the SRC0 field.
The LDS address and data-type of the data to be read from LDS comes from the M0 register:
Pixel shaders use LDS to read vertex parameter values; the pixel shader then interpolates them
to find the per-pixel parameter values. LDS parameter reads occur when the following opcodes
are used.
The typical parameter interpolation operation involves reading three parameters: P0, P10, and
P20, and using the two barycentric coordinates, I and J, to determine the final per-pixel value:
Parameter interpolation instructions indicate the parameter attribute number (0 to 32) and the
component number (0=x, 1=y, 2=z and 3=w).
OP 2 Opcode:
0: v_interp_p1_f32 VDST = P10 * VSRC + P0
1: v_interp_p2_f32 VDST = P20 * VSRC + VDST
2: v_interp_mov_f32 VDST = (P0, P10 or P20 selected by VSRC[1:0])
P0, P10 and P20 are parameter values read from LDS
VSRC 8 Source VGPR supplies interpolation "I" or "J" value. For OP==v_interp_mov_f32: 0=P10,
1=P20, 2=P0. VSRC must not be the same register as VDST because 16-bank LDS chips
implement v_interp_p1 as a macro of two instructions.
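The two-instruction interpolation sequence above can be sketched numerically. This is an illustrative Python model of the math only (per-pixel value = P0 + I*P10 + J*P20); the function name `v_interp` is hypothetical.

```python
# Sketch of the v_interp_p1/v_interp_p2 pair: the per-pixel value is
# P0 + I*P10 + J*P20, built up in VDST across two fused multiply-adds.
def v_interp(p0, p10, p20, i, j):
    vdst = p10 * i + p0      # v_interp_p1_f32: VDST = P10 * VSRC(I) + P0
    vdst = p20 * j + vdst    # v_interp_p2_f32: VDST = P20 * VSRC(J) + VDST
    return vdst

# At barycentric (I, J) = (0, 0) the result is the P0 corner value itself.
assert v_interp(1.0, 2.0, 3.0, 0.0, 0.0) == 1.0
print(v_interp(1.0, 2.0, 3.0, 0.25, 0.5))   # 1.0 + 0.5 + 1.5 = 3.0
```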
Parameter interpolation and parameter move instructions must initialize the M0 register before
using it. The lds_param_offset[15:0] is an address offset from the beginning of LDS storage
allocated to this wavefront to where parameters begin in LDS memory for this wavefront. The
new_prim_mask is a 15-bit mask with one bit per quad; a one in this mask indicates that this
quad begins a new primitive, a zero indicates it uses the same primitive as the previous quad.
The mask is 15 bits, not 16, since the first quad in a wavefront begins a new primitive and so it
is not included in the mask.
Indexed and atomic operations supply a unique address per work-item from the VGPRs to the
LDS, and supply or return unique data per work-item back to VGPRs. Due to the internal
banked structure of LDS, operations can complete in as little as two cycles, or take as many as 64
cycles, depending upon the number of bank conflicts (addresses that map to the same memory
bank).
Indexed operations are simple LDS load and store operations that read data from, and return
data to, VGPRs.
Atomic operations are arithmetic operations that combine data from VGPRs and data in LDS,
and write the result back to LDS. Atomic operations have the option of returning the LDS "pre-
op" value to VGPRs.
The table below lists and briefly describes the LDS instruction fields.
OP 7 LDS opcode.
OFFSET0 8 Immediate offset, in bytes. Instructions with one address combine the offset fields into a single
OFFSET1 8 16-bit unsigned offset: {offset1, offset0}. Instructions with two addresses (for example: READ2)
use the offsets separately as two 8-bit unsigned offsets. DS_*_SRC2_* ops treat the offset as a
16-bit signed Dword offset.
VDST 8 VGPR to which result is written: either from LDS-load or atomic return value.
All LDS operations require that M0 be initialized prior to use. M0 contains a size value that can
be used to restrict access to a subset of the allocated LDS range. If no clamping is wanted, set
M0 to 0xFFFFFFFF.
DS_READ_{B32,B64,B96,B128,U8,I8,U16,I16} Read one value per thread; sign extend to Dword, if signed.
DS_BPERMUTE_B32 Backward permute. Does not actually write any LDS memory.
LDS[thread_id] = src0
where thread_id is 0..63, and returnVal = LDS[dst].
Note that LDS_ADDR1 is used only for READ2*, WRITE2*, and WREXCHG2*.
M0[15:0] provides the size in bytes for this access. The size sent to LDS is MIN(M0,
LDS_SIZE), where LDS_SIZE is the amount of LDS space allocated by the shader processor
interpolator, SPI, at the time the wavefront was created.
The address comes from VGPR, and both ADDR and InstrOffset are byte addresses.
At the time of wavefront creation, LDS_BASE is assigned to the physical LDS region owned by
this wavefront or work-group.
Specify only one address by setting both offsets to the same value. This causes only one read
or write to occur and uses only the first DATA0.
SRC2 Ops The ds_<op>_src2_<type> opcodes are different. These opcodes perform an
atomic operation on two operands from LDS memory: one is viewed as the data and the other
is the second source operand and the final destination. The addressing can operate in two
different modes depending on the MSB of offset1 (offset1[7]). If it is 0, the offset for the data
term is derived from the offset fields as a SIGNED Dword offset:
If the bit is 1, the offset for the data term becomes per-thread and is a SIGNED Dword offset
derived from the MSBs read from the VGPR for the index. The addressing becomes:
LDS Atomic Ops DS_<atomicOp> OP, GDS=0, OFFSET0, OFFSET1, VDST, ADDR, Data0,
Data1
ADDR is a Dword address. VGPRs 0,1 and dst are double-GPRs for doubles data.
VGPR data sources can only be VGPRs or constant values, not SGPRs.
• Vertex Position
• Vertex Parameter
• Pixel color
• Pixel depth (Z)
VM 1 Valid Mask. When set to 1, this indicates that the EXEC mask represents the valid-mask for this
wavefront. It can be sent multiple times per shader (the final value is used), but must be sent at
least once per pixel shader.
DONE 1 This is the final pixel shader or vertex-position export of the program. Used only for pixel and
position exports. Set to zero for parameters.
COMPR 1 Compressed data. When set, indicates that the data being exported is 16-bits per component
rather than the usual 32-bit.
VSRC0 8
11.2. Operations
Every pixel shader must have at least one export instruction. The last export instruction
executed must have the DONE bit set to one.
The EXEC mask is applied to all exports. Only pixels with the corresponding EXEC bit set to 1
export data to the output buffer. Results from multiple exports are accumulated in the output
buffer.
At least one export must have the VM bit set to 1. This export, in addition to copying data to the
color or depth output buffer, also informs the color buffer which pixels are valid and which have
been discarded. The value of the EXEC mask communicates the pixel valid mask. If multiple
exports are sent with VM set to 1, the mask from the final export is used. If the shader program
wants to only update the valid mask but not send any new data, the program can do an export
to the NULL target.
Every vertex shader must output at least one position vector (x, y, z; w is optional) to the POS0
target. The last position export must have the DONE bit set to 1. A vertex shader can export
zero or more parameters. For enhanced performance, output all position data as early as
possible in the vertex shader.
When access to the bus is granted, the EXEC mask is read and the VGPR data sent out. After
the last of the VGPR data is sent, the EXPCNT counter is decremented by 1.
Use S_WAITCNT on EXPCNT to prevent the shader program from overwriting EXEC or the
VGPRs holding the data to be exported before the export operation has completed.
Multiple export instructions can be outstanding at one time. Exports of the same type (for
example: position) are completed in order, but exports of different types can be completed out of
order.
If the STATUS register’s SKIP_EXPORT bit is set to one, the hardware treats all EXPORT
instructions as if they were NOPs.
If an instruction has two suffixes (for example, _I32_F32), the first suffix indicates the destination
type, the second the source type.
• D = destination
• U = unsigned integer
• S = source
• SCC = scalar condition code
• I = signed integer
• B = bitfield
Note: Rounding and Denormal modes apply to all floating-point operations unless otherwise
specified in the instruction description.
Instructions in this format may use a 32-bit literal constant which occurs immediately after the
instruction.
Conditional select.
Conditional select.
14 S_OR_B32 D = S0 | S1;
SCC = (D != 0).
15 S_OR_B64 D = S0 | S1;
SCC = (D != 0).
16 S_XOR_B32 D = S0 ^ S1;
SCC = (D != 0).
17 S_XOR_B64 D = S0 ^ S1;
SCC = (D != 0).
20 S_ORN2_B32 D = S0 | ~S1;
SCC = (D != 0).
21 S_ORN2_B64 D = S0 | ~S1;
SCC = (D != 0).
Bitfield mask.
Bitfield mask.
37 S_BFE_U32 D.u = (S0.u >> S1.u[4:0]) & ((1 << S1.u[22:16]) - 1);
SCC = (D.u != 0).
38 S_BFE_I32 D.i = signext((S0.i >> S1.u[4:0]) & ((1 << S1.u[22:16]) - 1));
SCC = (D.i != 0).
39 S_BFE_U64 D.u64 = (S0.u64 >> S1.u[5:0]) & ((1 << S1.u[22:16]) - 1);
SCC = (D.u64 != 0).
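The bitfield-extract semantics above (offset in S1[4:0], width in S1[22:16]) can be emulated directly. This is an illustrative sketch of the unsigned 32-bit form; `s_bfe_u32` is a hypothetical helper name.

```python
# Emulation of S_BFE_U32: extract `width` bits (S1[22:16]) starting at
# bit `offset` (S1[4:0]) of S0; SCC is set when the result is non-zero.
def s_bfe_u32(s0, s1):
    offset = s1 & 0x1F               # S1[4:0]
    width = (s1 >> 16) & 0x7F        # S1[22:16]
    d = (s0 >> offset) & ((1 << width) - 1)
    scc = int(d != 0)
    return d & 0xFFFFFFFF, scc

# Extract 8 bits starting at bit 4: pack S1 as (width << 16) | offset.
d, scc = s_bfe_u32(0x00012340, (8 << 16) | 4)
print(hex(d), scc)   # 0x34 1
```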
Examples:
S_ABSDIFF_I32(0x00000002, 0x00000005) => 0x00000003
S_ABSDIFF_I32(0xffffffff, 0x00000000) => 0x00000001
S_ABSDIFF_I32(0x80000000, 0x00000000) => 0x80000000 //
Note: result is negative!
S_ABSDIFF_I32(0x80000000, 0x00000001) => 0x7fffffff
S_ABSDIFF_I32(0x80000000, 0xffffffff) => 0x7fffffff
S_ABSDIFF_I32(0x80000000, 0xfffffffe) => 0x7ffffffe
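The S_ABSDIFF_I32 examples above, including the wraparound case where |INT_MIN - 0| is unrepresentable, can be reproduced with a small emulation. The helper name `s_absdiff_i32` is hypothetical.

```python
# Emulation of S_ABSDIFF_I32: absolute difference of two signed 32-bit
# values, truncated to 32 bits. |INT_MIN - 0| = 2^31 cannot be
# represented as a positive i32, so the result wraps to 0x80000000.
def s_absdiff_i32(a, b):
    def to_signed(x):
        return x - (1 << 32) if x & 0x80000000 else x
    return abs(to_signed(a) - to_signed(b)) & 0xFFFFFFFF

assert s_absdiff_i32(0x00000002, 0x00000005) == 0x00000003
assert s_absdiff_i32(0x80000000, 0x00000000) == 0x80000000  # negative!
assert s_absdiff_i32(0x80000000, 0xffffffff) == 0x7fffffff
```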
43 S_RFE_RESTORE_B64 PRIV = 0;
PC = S0.u64.
Instructions in this format may use a 32-bit literal constant which occurs immediately after the
instruction.
14 S_ADDK_I32 tmp = D.i; // save value so we can check sign bits for
overflow later.
D.i = D.i + signext(SIMM16);
SCC = (tmp[31] == SIMM16[15] && tmp[31] != D.i[31]). // signed
overflow.
20 S_SETREG_IMM32_B32 Write some or all of the LSBs of IMM32 into a hardware register;
this instruction requires a 32-bit literal constant.
21 S_CALL_B64 D.u64 = PC + 4;
PC = PC + signext(SIMM16 * 4) + 4.
Instructions in this format may use a 32-bit literal constant which occurs immediately after the
instruction.
Conditional move.
Conditional move.
4 S_NOT_B32 D = ~S0;
SCC = (D != 0).
Bitwise negation.
5 S_NOT_B64 D = ~S0;
SCC = (D != 0).
Bitwise negation.
Reverse bits.
Reverse bits.
10 S_BCNT0_I32_B32 D = 0;
for i in 0 ... opcode_size_in_bits - 1 do
D += (S0[i] == 0 ? 1 : 0)
endfor;
SCC = (D != 0).
Examples:
S_BCNT0_I32_B32(0x00000000) => 32
S_BCNT0_I32_B32(0xcccccccc) => 16
S_BCNT0_I32_B32(0xffffffff) => 0
11 S_BCNT0_I32_B64 D = 0;
for i in 0 ... opcode_size_in_bits - 1 do
D += (S0[i] == 0 ? 1 : 0)
endfor;
SCC = (D != 0).
Examples:
S_BCNT0_I32_B64(0x0000000000000000) => 64
S_BCNT0_I32_B64(0xcccccccccccccccc) => 32
S_BCNT0_I32_B64(0xffffffffffffffff) => 0
12 S_BCNT1_I32_B32 D = 0;
for i in 0 ... opcode_size_in_bits - 1 do
D += (S0[i] == 1 ? 1 : 0)
endfor;
SCC = (D != 0).
Examples:
S_BCNT1_I32_B32(0x00000000) => 0
S_BCNT1_I32_B32(0xcccccccc) => 16
S_BCNT1_I32_B32(0xffffffff) => 32
13 S_BCNT1_I32_B64 D = 0;
for i in 0 ... opcode_size_in_bits - 1 do
D += (S0[i] == 1 ? 1 : 0)
endfor;
SCC = (D != 0).
Examples:
S_BCNT1_I32_B64(0x0000000000000000) => 0
S_BCNT1_I32_B64(0xcccccccccccccccc) => 32
S_BCNT1_I32_B64(0xffffffffffffffff) => 64
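The S_BCNT0/S_BCNT1 loops above can be emulated in one generic routine. This is an illustrative sketch; `s_bcnt` and its parameters are hypothetical names.

```python
# Emulation of the S_BCNT0/S_BCNT1 loops: count zero (or one) bits over
# the operand width; SCC is set when the resulting count is non-zero.
def s_bcnt(s0, bits, count_ones):
    d = sum(((s0 >> i) & 1) == count_ones for i in range(bits))
    return d, int(d != 0)

assert s_bcnt(0x00000000, 32, False) == (32, 1)   # S_BCNT0_I32_B32
assert s_bcnt(0xcccccccc, 32, True) == (16, 1)    # S_BCNT1_I32_B32
assert s_bcnt(0xffffffff, 32, True) == (32, 1)
```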
Returns the bit position of the first zero from the LSB, or -1 if
there are no zeros.
Examples:
S_FF0_I32_B32(0xaaaaaaaa) => 0
S_FF0_I32_B32(0x55555555) => 1
S_FF0_I32_B32(0x00000000) => 0
S_FF0_I32_B32(0xffffffff) => 0xffffffff
S_FF0_I32_B32(0xfffeffff) => 16
Returns the bit position of the first zero from the LSB, or -1 if
there are no zeros.
Examples:
S_FF0_I32_B32(0xaaaaaaaa) => 0
S_FF0_I32_B32(0x55555555) => 1
S_FF0_I32_B32(0x00000000) => 0
S_FF0_I32_B32(0xffffffff) => 0xffffffff
S_FF0_I32_B32(0xfffeffff) => 16
Returns the bit position of the first one from the LSB, or -1 if
there are no ones.
Examples:
S_FF1_I32_B32(0xaaaaaaaa) => 1
S_FF1_I32_B32(0x55555555) => 0
S_FF1_I32_B32(0x00000000) => 0xffffffff
S_FF1_I32_B32(0xffffffff) => 0
S_FF1_I32_B32(0x00010000) => 16
Returns the bit position of the first one from the LSB, or -1 if
there are no ones.
Examples:
S_FF1_I32_B32(0xaaaaaaaa) => 1
S_FF1_I32_B32(0x55555555) => 0
S_FF1_I32_B32(0x00000000) => 0xffffffff
S_FF1_I32_B32(0xffffffff) => 0
S_FF1_I32_B32(0x00010000) => 16
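The find-first-bit scans above can likewise be emulated generically. This is an illustrative sketch; `s_ff` and its parameters are hypothetical names.

```python
# Emulation of the find-first-bit scans: return the bit position of the
# first zero (S_FF0) or first one (S_FF1) from the LSB, or 0xffffffff
# (-1) if no such bit exists within the operand width.
def s_ff(s0, bits, find_one):
    for i in range(bits):
        if ((s0 >> i) & 1) == find_one:
            return i
    return 0xFFFFFFFF

assert s_ff(0xaaaaaaaa, 32, True) == 1             # S_FF1_I32_B32
assert s_ff(0x00010000, 32, True) == 16
assert s_ff(0xffffffff, 32, False) == 0xFFFFFFFF   # no zeros: -1
```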
Counts how many zeros before the first one starting from the MSB.
Returns -1 if there are no ones.
Examples:
S_FLBIT_I32_B32(0x00000000) => 0xffffffff
S_FLBIT_I32_B32(0x0000cccc) => 16
S_FLBIT_I32_B32(0xffff3333) => 0
S_FLBIT_I32_B32(0x7fffffff) => 1
S_FLBIT_I32_B32(0x80000000) => 0
S_FLBIT_I32_B32(0xffffffff) => 0
Counts how many zeros before the first one starting from the MSB.
Returns -1 if there are no ones.
Examples:
S_FLBIT_I32_B32(0x00000000) => 0xffffffff
S_FLBIT_I32_B32(0x0000cccc) => 16
S_FLBIT_I32_B32(0xffff3333) => 0
S_FLBIT_I32_B32(0x7fffffff) => 1
S_FLBIT_I32_B32(0x80000000) => 0
S_FLBIT_I32_B32(0xffffffff) => 0
Counts how many bits in a row (from MSB to LSB) are the same as
the sign bit. Returns -1 if all bits are the same.
Examples:
S_FLBIT_I32(0x00000000) => 0xffffffff
S_FLBIT_I32(0x0000cccc) => 16
S_FLBIT_I32(0xffff3333) => 16
S_FLBIT_I32(0x7fffffff) => 1
S_FLBIT_I32(0x80000000) => 1
S_FLBIT_I32(0xffffffff) => 0xffffffff
Counts how many bits in a row (from MSB to LSB) are the same as
the sign bit. Returns -1 if all bits are the same.
Examples:
S_FLBIT_I32(0x00000000) => 0xffffffff
S_FLBIT_I32(0x0000cccc) => 16
S_FLBIT_I32(0xffff3333) => 16
S_FLBIT_I32(0x7fffffff) => 1
S_FLBIT_I32(0x80000000) => 1
S_FLBIT_I32(0xffffffff) => 0xffffffff
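The two FLBIT flavors above can be emulated as follows. This is an illustrative sketch; `s_flbit_b32` and `s_flbit_i32` are hypothetical helper names.

```python
# Emulation of the two "find last bit" flavors: S_FLBIT_I32_B32 counts
# leading zeros before the first one from the MSB; S_FLBIT_I32 counts
# how many bits below the MSB match the sign bit, stopping at the first
# bit that differs. Both return 0xffffffff (-1) when no such bit exists.
def s_flbit_b32(s0):
    for i in range(32):
        if (s0 >> (31 - i)) & 1:
            return i
    return 0xFFFFFFFF

def s_flbit_i32(s0):
    sign = (s0 >> 31) & 1
    for i in range(1, 32):
        if ((s0 >> (31 - i)) & 1) != sign:
            return i
    return 0xFFFFFFFF

assert s_flbit_b32(0x0000cccc) == 16
assert s_flbit_b32(0x00000000) == 0xFFFFFFFF
assert s_flbit_i32(0xffff3333) == 16   # leading ones match the sign bit
assert s_flbit_i32(0x80000000) == 1
```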
Sign extension.
Sign extension.
24 S_BITSET0_B32 D.u[S0.u[4:0]] = 0.
25 S_BITSET0_B64 D.u64[S0.u[5:0]] = 0.
26 S_BITSET1_B32 D.u[S0.u[4:0]] = 1.
27 S_BITSET1_B64 D.u64[S0.u[5:0]] = 1.
28 S_GETPC_B64 D.u64 = PC + 4.
29 S_SETPC_B64 PC = S0.u64.
30 S_SWAPPC_B64 D.u64 = PC + 4;
PC = S0.u64.
31 S_RFE_B64 PRIV = 0;
PC = S0.u64.
40 S_QUADMASK_B32 D = 0;
for i in 0 ... (opcode_size_in_bits / 4) - 1 do
D[i] = (S0[i * 4 + 3:i * 4] != 0);
endfor;
SCC = (D != 0).
41 S_QUADMASK_B64 D = 0;
for i in 0 ... (opcode_size_in_bits / 4) - 1 do
D[i] = (S0[i * 4 + 3:i * 4] != 0);
endfor;
SCC = (D != 0).
Examples:
S_ABS_I32(0x00000001) => 0x00000001
S_ABS_I32(0x7fffffff) => 0x7fffffff
S_ABS_I32(0x80000000) => 0x80000000 // Note this is
negative!
S_ABS_I32(0x80000001) => 0x7fffffff
S_ABS_I32(0x80000002) => 0x7ffffffe
S_ABS_I32(0xffffffff) => 0x00000001
This opcode can be used to convert a quad mask into a pixel mask;
given quad mask in s0, the following sequence will produce a pixel
mask in s1:
s_bitreplicate_b64 s1, s0
s_bitreplicate_b64 s1, s1
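The quad-to-pixel-mask sequence above can be modeled end to end. This is an illustrative sketch of the semantics (each S_BITREPLICATE pass doubles every bit, so two passes expand one quad bit into four pixel bits); `s_quadmask` and `s_bitreplicate_b64` are hypothetical helper names mirroring the opcodes.

```python
# Sketch of the quad/pixel mask conversions: S_QUADMASK collapses each
# group of 4 pixel bits into one quad bit; two S_BITREPLICATE passes
# expand a quad mask back into a pixel mask (4x bit replication).
def s_quadmask(s0, bits=64):
    d = 0
    for i in range(bits // 4):
        if (s0 >> (4 * i)) & 0xF:     # any pixel in the quad active?
            d |= 1 << i
    return d

def s_bitreplicate_b64(s0):
    d = 0
    for i in range(32):               # each input bit becomes two bits
        if (s0 >> i) & 1:
            d |= 0b11 << (2 * i)
    return d & 0xFFFFFFFFFFFFFFFF

quad = 0b1010                          # quads 1 and 3 active
pixel = s_bitreplicate_b64(s_bitreplicate_b64(quad))
print(hex(pixel))                      # 0xf0f0: 4 pixel bits per quad
assert s_quadmask(pixel) == quad       # round-trips back to the quad mask
```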
Instructions in this format may use a 32-bit literal constant which occurs immediately after the
instruction.
Examples:
s_setvskip 1, 0 // Enable vskip mode.
s_setvskip 0, 0 // Disable vskip mode.
17 S_SET_GPR_IDX_ON MODE.gpr_idx_en = 1;
M0[7:0] = S0.u[7:0];
M0[15:12] = SIMM4; // this is the direct content of S1 field
// Remaining bits of M0 are unmodified.
3 S_WAKEUP Allow a wave to 'ping' all the other waves in its threadgroup
to force them to wake up immediately from an S_SLEEP
instruction. The ping is ignored if the waves are not sleeping.
This allows for efficient polling on a memory location. The
waves which are polling can sit in a long S_SLEEP between memory
reads, but the wave which writes the value can tell them all to
wake up early now that the data is available. This is useful for
fBarrier implementations (speedup). This method is also safe
from races because if any wave misses the ping, everything still
works fine (waves which missed it just complete their normal
S_SLEEP).
17 S_SENDMSGHALT Send a message and then HALT the wavefront; see S_SENDMSG for
details.
27 S_ENDPGM_SAVED End of program; signal that a wave has been saved by the
context-switch trap handler and terminate wavefront. The
hardware implicitly executes S_WAITCNT 0 before executing this
instruction. See S_ENDPGM for additional variants.
28 S_SET_GPR_IDX_OFF MODE.gpr_idx_en = 0.
Clear GPR indexing mode. Vector operations after this will not
perform relative GPR addressing regardless of the contents of
M0. This instruction does not modify M0.
30 S_ENDPGM_ORDERED End of program; signal that a wave has exited its POPS critical
_PS_DONE section and terminate wavefront. The hardware implicitly
executes S_WAITCNT 0 before executing this instruction. This
instruction is an optimization that combines
S_SENDMSG(MSG_ORDERED_PS_DONE) and S_ENDPGM; there may be cases
where you still need to send the message separately, in which
case you can end the shader with a normal S_ENDPGM instruction.
See S_ENDPGM for additional variants.
1 S_LOAD_DWORDX2 Read 2 dwords from scalar data cache. See S_LOAD_DWORD for
details on the offset input.
2 S_LOAD_DWORDX4 Read 4 dwords from scalar data cache. See S_LOAD_DWORD for
details on the offset input.
3 S_LOAD_DWORDX8 Read 8 dwords from scalar data cache. See S_LOAD_DWORD for
details on the offset input.
4 S_LOAD_DWORDX16 Read 16 dwords from scalar data cache. See S_LOAD_DWORD for
details on the offset input.
8 S_BUFFER_LOAD_DWORD Read 1 dword from scalar data cache. See S_LOAD_DWORD for
details on the offset input.
9 S_BUFFER_LOAD_DWORDX2 Read 2 dwords from scalar data cache. See S_LOAD_DWORD for
details on the offset input.
10 S_BUFFER_LOAD_DWORDX4 Read 4 dwords from scalar data cache. See S_LOAD_DWORD for
details on the offset input.
11 S_BUFFER_LOAD_DWORDX8 Read 8 dwords from scalar data cache. See S_LOAD_DWORD for
details on the offset input.
12 S_BUFFER_LOAD_DWORDX16 Read 16 dwords from scalar data cache. See S_LOAD_DWORD for
details on the offset input.
35 S_DCACHE_WB_VOL Write back dirty data in the scalar data cache volatile
lines.
40 S_DCACHE_DISCARD Discard one dirty scalar data cache line. A cache line is
64 bytes. Normally, dirty cachelines (one which have been
written by the shader) are written back to memory, but this
instruction allows the shader to invalidate and not write
back cachelines which it has previously written. This is a
performance optimization to be used when the shader knows it
no longer needs that data. Address is calculated the same as
S_STORE_DWORD, except the 6 LSBs are ignored to get the 64
byte aligned address. LGKM count is incremented by 1 for
this opcode.
64 S_BUFFER_ATOMIC_SWAP // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA;
RETURN_DATA = tmp.
65 S_BUFFER_ATOMIC_CMPSWAP // 32bit
tmp = MEM[ADDR];
src = DATA[0];
cmp = DATA[1];
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
66 S_BUFFER_ATOMIC_ADD // 32bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA;
RETURN_DATA = tmp.
67 S_BUFFER_ATOMIC_SUB // 32bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA;
RETURN_DATA = tmp.
68 S_BUFFER_ATOMIC_SMIN // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
69 S_BUFFER_ATOMIC_UMIN // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // unsigned compare
RETURN_DATA = tmp.
70 S_BUFFER_ATOMIC_SMAX // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
71 S_BUFFER_ATOMIC_UMAX // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // unsigned compare
RETURN_DATA = tmp.
72 S_BUFFER_ATOMIC_AND // 32bit
tmp = MEM[ADDR];
MEM[ADDR] &= DATA;
RETURN_DATA = tmp.
73 S_BUFFER_ATOMIC_OR // 32bit
tmp = MEM[ADDR];
MEM[ADDR] |= DATA;
RETURN_DATA = tmp.
74 S_BUFFER_ATOMIC_XOR // 32bit
tmp = MEM[ADDR];
MEM[ADDR] ^= DATA;
RETURN_DATA = tmp.
75 S_BUFFER_ATOMIC_INC // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp >= DATA) ? 0 : tmp + 1; // unsigned
compare
RETURN_DATA = tmp.
76 S_BUFFER_ATOMIC_DEC // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp == 0 || tmp > DATA) ? DATA : tmp - 1; //
unsigned compare
RETURN_DATA = tmp.
96 S_BUFFER_ATOMIC_SWAP_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA[0:1];
RETURN_DATA[0:1] = tmp.
97 S_BUFFER_ATOMIC_CMPSWAP_X2 // 64bit
tmp = MEM[ADDR];
src = DATA[0:1];
cmp = DATA[2:3];
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0:1] = tmp.
98 S_BUFFER_ATOMIC_ADD_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA[0:1];
RETURN_DATA[0:1] = tmp.
99 S_BUFFER_ATOMIC_SUB_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA[0:1];
RETURN_DATA[0:1] = tmp.
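The wrap-around semantics of S_BUFFER_ATOMIC_INC and S_BUFFER_ATOMIC_DEC above can be sketched as a minimal Python model of a single lane's update (function names are illustrative, not part of the ISA):

```python
def buffer_atomic_inc(tmp, data):
    # S_BUFFER_ATOMIC_INC: wrap to 0 once the counter reaches DATA (unsigned compare)
    return 0 if tmp >= data else tmp + 1

def buffer_atomic_dec(tmp, data):
    # S_BUFFER_ATOMIC_DEC: wrap to DATA when the counter is 0 or already above DATA
    return data if tmp == 0 or tmp > data else tmp - 1
```

This makes these opcodes usable as modular ring-buffer counters bounded by DATA.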
Instructions in this format may use a 32-bit literal constant, DPP or SDWA which occurs
immediately after the instruction.
This opcode cannot use the VOP3 encoding and cannot use
input/output modifiers.
This opcode cannot use the VOP3 encoding and cannot use
input/output modifiers.
This opcode cannot use the VOP3 encoding and cannot use
input/output modifiers. Supports round mode, exception flags,
saturation.
This opcode cannot use the VOP3 encoding and cannot use
input/output modifiers. Supports round mode, exception flags,
saturation.
VOP2 version of V_FMA_F32 in which the third source VGPR address is the same as vDst.
Instructions in this format may use a 32-bit literal constant, DPP or SDWA which occurs
immediately after the instruction.
0 V_NOP Do nothing.
0ULP accuracy.
0.5ULP accuracy.
0.5ULP accuracy.
0ULP accuracy.
Round-to-nearest-even semantics.
Round-to-nearest-even semantics.
Examples:
V_EXP_F32(0xff800000) => 0x00000000 // exp(-INF) = 0
V_EXP_F32(0x80000000) => 0x3f800000 // exp(-0.0) = 1
V_EXP_F32(0x7f800000) => 0x7f800000 // exp(+INF) = +INF
Examples:
V_LOG_F32(0xff800000) => 0xffc00000 // log(-INF) = NAN
V_LOG_F32(0xbf800000) => 0xffc00000 // log(-1.0) = NAN
V_LOG_F32(0x80000000) => 0xff800000 // log(-0.0) = -INF
V_LOG_F32(0x00000000) => 0xff800000 // log(+0.0) = -INF
V_LOG_F32(0x3f800000) => 0x00000000 // log(+1.0) = 0
V_LOG_F32(0x7f800000) => 0x7f800000 // log(+INF) = +INF
Examples:
V_RCP_F32(0xff800000) => 0x80000000 // rcp(-INF) = -0
V_RCP_F32(0xc0000000) => 0xbf000000 // rcp(-2.0) = -0.5
V_RCP_F32(0x80000000) => 0xff800000 // rcp(-0.0) = -INF
V_RCP_F32(0x00000000) => 0x7f800000 // rcp(+0.0) = +INF
V_RCP_F32(0x7f800000) => 0x00000000 // rcp(+INF) = +0
Unsigned:
CVT_F32_U32
RCP_IFLAG_F32
MUL_F32 (2**32 - 1)
CVT_U32_F32
Signed:
CVT_F32_I32
RCP_IFLAG_F32
MUL_F32 (2**31 - 1)
CVT_I32_F32
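The Unsigned/Signed sequences above compute a scaled reciprocal used in integer-division expansions. A rough Python model of the unsigned case (double precision stands in for the hardware's single-precision math, so rounding differs slightly from real V_RCP_IFLAG_F32 results):

```python
def approx_scaled_rcp_u32(x):
    # CVT_F32_U32 -> RCP_IFLAG_F32 -> MUL_F32 by (2**32 - 1) -> CVT_U32_F32
    f = float(x)                  # CVT_F32_U32
    r = 1.0 / f                   # RCP_IFLAG_F32 (approximate in hardware)
    return int(r * (2**32 - 1))   # MUL_F32 then CVT_U32_F32 (truncate)
```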
Examples:
V_RSQ_F32(0xff800000) => 0xffc00000 // rsq(-INF) = NAN
V_RSQ_F32(0x80000000) => 0xff800000 // rsq(-0.0) = -INF
V_RSQ_F32(0x00000000) => 0x7f800000 // rsq(+0.0) = +INF
V_RSQ_F32(0x40800000) => 0x3f000000 // rsq(+4.0) = +0.5
V_RSQ_F32(0x7f800000) => 0x00000000 // rsq(+INF) = +0
Reciprocal with IEEE rules and perhaps not the accuracy you were
hoping for -- (2**29)ULP accuracy. On the upside, denormals are
supported.
Reciprocal square root with IEEE rules and perhaps not the
accuracy you were hoping for -- (2**29)ULP accuracy. On the
upside, denormals are supported.
Examples:
V_SQRT_F32(0xff800000) => 0xffc00000 // sqrt(-INF) = NAN
V_SQRT_F32(0x80000000) => 0x80000000 // sqrt(-0.0) = -0
V_SQRT_F32(0x00000000) => 0x00000000 // sqrt(+0.0) = +0
V_SQRT_F32(0x40800000) => 0x40000000 // sqrt(+4.0) =
+2.0
V_SQRT_F32(0x7f800000) => 0x7f800000 // sqrt(+INF) =
+INF
Square root with perhaps not the accuracy you were hoping for --
(2**29)ULP accuracy. On the upside, denormals are supported.
Examples:
V_SIN_F32(0xff800000) => 0xffc00000 // sin(-INF) = NAN
V_SIN_F32(0xff7fffff) => 0x00000000 // -MaxFloat, finite
V_SIN_F32(0x80000000) => 0x80000000 // sin(-0.0) = -0
V_SIN_F32(0x3e800000) => 0x3f800000 // sin(0.25) = 1
V_SIN_F32(0x7f800000) => 0xffc00000 // sin(+INF) = NAN
Examples:
V_COS_F32(0xff800000) => 0xffc00000 // cos(-INF) = NAN
V_COS_F32(0xff7fffff) => 0x3f800000 // -MaxFloat, finite
V_COS_F32(0x80000000) => 0x3f800000 // cos(-0.0) = 1
V_COS_F32(0x3e800000) => 0x00000000 // cos(0.25) = 0
V_COS_F32(0x7f800000) => 0xffc00000 // cos(+INF) = NAN
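The examples above (sin(0.25) = 1, cos(0.25) = 0) show that V_SIN_F32 and V_COS_F32 take their input in units of revolutions (period 1.0), not radians. A sketch of the ideal result, ignoring the hardware's reduced-precision evaluation and range limits:

```python
import math

def v_sin_f32(x):
    # input is in revolutions: one full period per 1.0 of input
    return math.sin(2.0 * math.pi * x)

def v_cos_f32(x):
    return math.cos(2.0 * math.pi * x)
```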
Counts the number of zeros before the first one, starting from the MSB.
Returns -1 if there are no ones.
Examples:
V_FFBH_U32(0x00000000) => 0xffffffff
V_FFBH_U32(0x800000ff) => 0
V_FFBH_U32(0x100000ff) => 3
V_FFBH_U32(0x0000ffff) => 16
V_FFBH_U32(0x00000001) => 31
Returns the bit position of the first one from the LSB, or -1 if
there are no ones.
Examples:
V_FFBL_B32(0x00000000) => 0xffffffff
V_FFBL_B32(0xff000001) => 0
V_FFBL_B32(0xff000008) => 3
V_FFBL_B32(0xffff0000) => 16
V_FFBL_B32(0x80000000) => 31
Counts how many bits in a row (from MSB to LSB) are the same as
the sign bit. Returns -1 if all bits are the same.
Examples:
V_FFBH_I32(0x00000000) => 0xffffffff
V_FFBH_I32(0x40000000) => 1
V_FFBH_I32(0x80000000) => 1
V_FFBH_I32(0x0fffffff) => 4
V_FFBH_I32(0xffff0000) => 16
V_FFBH_I32(0xfffffffe) => 31
V_FFBH_I32(0xffffffff) => 0xffffffff
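The three find-first-bit behaviors above can be modeled directly in Python, returning 0xffffffff for the -1 case as the examples do:

```python
def v_ffbh_u32(x):
    # count zeros before the first 1, starting at the MSB; 0xffffffff if none
    for i in range(32):
        if (x >> (31 - i)) & 1:
            return i
    return 0xffffffff

def v_ffbl_b32(x):
    # bit position of the first 1, starting at the LSB; 0xffffffff if none
    for i in range(32):
        if (x >> i) & 1:
            return i
    return 0xffffffff

def v_ffbh_i32(x):
    # count bits (from the MSB down) equal to the sign bit, i.e. the position of
    # the first bit that differs from bit 31; 0xffffffff if all bits match
    sign = (x >> 31) & 1
    for i in range(1, 32):
        if ((x >> (31 - i)) & 1) != sign:
            return i
    return 0xffffffff
```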
Examples:
V_RCP_F16(0xfc00) => 0x8000 // rcp(-INF) = -0
V_RCP_F16(0xc000) => 0xb800 // rcp(-2.0) = -0.5
V_RCP_F16(0x8000) => 0xfc00 // rcp(-0.0) = -INF
V_RCP_F16(0x0000) => 0x7c00 // rcp(+0.0) = +INF
V_RCP_F16(0x7c00) => 0x0000 // rcp(+INF) = +0
Examples:
V_SQRT_F16(0xfc00) => 0xfe00 // sqrt(-INF) = NAN
V_SQRT_F16(0x8000) => 0x8000 // sqrt(-0.0) = -0
V_SQRT_F16(0x0000) => 0x0000 // sqrt(+0.0) = +0
V_SQRT_F16(0x4400) => 0x4000 // sqrt(+4.0) = +2.0
V_SQRT_F16(0x7c00) => 0x7c00 // sqrt(+INF) = +INF
Examples:
V_RSQ_F16(0xfc00) => 0xfe00 // rsq(-INF) = NAN
V_RSQ_F16(0x8000) => 0xfc00 // rsq(-0.0) = -INF
V_RSQ_F16(0x0000) => 0x7c00 // rsq(+0.0) = +INF
V_RSQ_F16(0x4400) => 0x3800 // rsq(+4.0) = +0.5
V_RSQ_F16(0x7c00) => 0x0000 // rsq(+INF) = +0
Examples:
V_LOG_F16(0xfc00) => 0xfe00 // log(-INF) = NAN
V_LOG_F16(0xbc00) => 0xfe00 // log(-1.0) = NAN
V_LOG_F16(0x8000) => 0xfc00 // log(-0.0) = -INF
V_LOG_F16(0x0000) => 0xfc00 // log(+0.0) = -INF
V_LOG_F16(0x3c00) => 0x0000 // log(+1.0) = 0
V_LOG_F16(0x7c00) => 0x7c00 // log(+INF) = +INF
Examples:
V_EXP_F16(0xfc00) => 0x0000 // exp(-INF) = 0
V_EXP_F16(0x8000) => 0x3c00 // exp(-0.0) = 1
V_EXP_F16(0x7c00) => 0x7c00 // exp(+INF) = +INF
Round-to-nearest-even semantics.
Examples:
V_SIN_F16(0xfc00) => 0xfe00 // sin(-INF) = NAN
V_SIN_F16(0xfbff) => 0x0000 // Most negative finite FP16
V_SIN_F16(0x8000) => 0x8000 // sin(-0.0) = -0
V_SIN_F16(0x3400) => 0x3c00 // sin(0.25) = 1
V_SIN_F16(0x7bff) => 0x0000 // Most positive finite FP16
V_SIN_F16(0x7c00) => 0xfe00 // sin(+INF) = NAN
Examples:
V_COS_F16(0xfc00) => 0xfe00 // cos(-INF) = NAN
V_COS_F16(0xfbff) => 0x3c00 // Most negative finite FP16
V_COS_F16(0x8000) => 0x3c00 // cos(-0.0) = 1
V_COS_F16(0x3400) => 0x0000 // cos(0.25) = 0
V_COS_F16(0x7bff) => 0x3c00 // Most positive finite FP16
V_COS_F16(0x7c00) => 0xfe00 // cos(+INF) = NAN
where:
Compare instructions perform the same compare operation on each lane (workItem or thread)
using that lane’s private data, and producing a 1 bit result per lane into VCC or EXEC.
Instructions in this format may use a 32-bit literal constant which occurs immediately after the
instruction.
• Those which can use one of 16 compare operations (floating point types). "{COMPF}"
• Those which can use one of 8 compare operations (integer types). "{COMPI}"
The opcode number is such that for these the opcode number can be calculated from a base
opcode number for the data type, plus an offset for the specific compare operation.
F 0 D.u = 0
TRU 15 D.u = 1
F 0 D.u = 0
TRU 7 D.u = 1
V_CMP_{COMPI}_U16 16-bit unsigned integer compare. 0xA8 - 0xAF
V_CMPX_{COMPI}_U16 16-bit unsigned integer compare. Also writes EXEC. 0xB8 - 0xBF
V_CMP_{COMPI}_U32 32-bit unsigned integer compare. 0xC8 - 0xCF
V_CMPX_{COMPI}_U32 32-bit unsigned integer compare. Also writes EXEC. 0xD8 - 0xDF
V_CMP_{COMPI}_U64 64-bit unsigned integer compare. 0xE8 - 0xEF
V_CMPX_{COMPI}_U64 64-bit unsigned integer compare. Also writes EXEC. 0xF8 - 0xFF
32 V_CMP_F_F16 D.u64[threadId] = 0.
41 V_CMP_NGE_F16 D.u64[threadId] = !(S0 >= S1) // With NAN inputs this is not
the same operation as <.
42 V_CMP_NLG_F16 D.u64[threadId] = !(S0 <> S1) // With NAN inputs this is not
the same operation as ==.
43 V_CMP_NGT_F16 D.u64[threadId] = !(S0 > S1) // With NAN inputs this is not the
same operation as <=.
44 V_CMP_NLE_F16 D.u64[threadId] = !(S0 <= S1) // With NAN inputs this is not
the same operation as >.
46 V_CMP_NLT_F16 D.u64[threadId] = !(S0 < S1) // With NAN inputs this is not the
same operation as >=.
47 V_CMP_TRU_F16 D.u64[threadId] = 1.
64 V_CMP_F_F32 D.u64[threadId] = 0.
73 V_CMP_NGE_F32 D.u64[threadId] = !(S0 >= S1) // With NAN inputs this is not
the same operation as <.
74 V_CMP_NLG_F32 D.u64[threadId] = !(S0 <> S1) // With NAN inputs this is not
the same operation as ==.
75 V_CMP_NGT_F32 D.u64[threadId] = !(S0 > S1) // With NAN inputs this is not the
same operation as <=.
76 V_CMP_NLE_F32 D.u64[threadId] = !(S0 <= S1) // With NAN inputs this is not
the same operation as >.
78 V_CMP_NLT_F32 D.u64[threadId] = !(S0 < S1) // With NAN inputs this is not the
same operation as >=.
79 V_CMP_TRU_F32 D.u64[threadId] = 1.
96 V_CMP_F_F64 D.u64[threadId] = 0.
105 V_CMP_NGE_F64 D.u64[threadId] = !(S0 >= S1) // With NAN inputs this is not
the same operation as <.
106 V_CMP_NLG_F64 D.u64[threadId] = !(S0 <> S1) // With NAN inputs this is not
the same operation as ==.
107 V_CMP_NGT_F64 D.u64[threadId] = !(S0 > S1) // With NAN inputs this is not the
same operation as <=.
108 V_CMP_NLE_F64 D.u64[threadId] = !(S0 <= S1) // With NAN inputs this is not
the same operation as >.
109 V_CMP_NEQ_F64 D.u64[threadId] = !(S0 == S1) // With NAN inputs this is not
the same operation as !=.
110 V_CMP_NLT_F64 D.u64[threadId] = !(S0 < S1) // With NAN inputs this is not the
same operation as >=.
When the CLAMP microcode bit is set to 1, these compare instructions signal an exception
when either of the inputs is NaN. When CLAMP is set to zero, NaN does not signal an
exception. The second eight VOPC instructions have {OP8} embedded in them. This refers to
each of the compare operations listed below.
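The "not the same operation" notes in the compare table above matter only for NaN inputs: !(a >= b) is true when either operand is NaN, while (a < b) is false. A quick Python illustration:

```python
nan = float("nan")

def cmp_nge(a, b):
    # V_CMP_NGE-style compare: NOT(a >= b)
    return not (a >= b)

# For ordinary inputs cmp_nge agrees with (a < b); with NaN they differ:
# cmp_nge(nan, 1.0) is True, while (nan < 1.0) is False.
```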
where:
32 V_MAD_MIX_F32 D.f[31:0] = S0.f * S1.f + S2.f. Size and location of S0, S1 and
S2 controlled by OPSEL: 0=src[31:0], 1=src[31:0], 2=src[15:0],
3=src[31:16]. Also, for MAD_MIX, the NEG_HI field acts instead as
an absolute-value modifier.
33 V_MAD_MIXLO_F16 D.f[15:0] = S0.f * S1.f + S2.f. Size and location of S0, S1 and
S2 controlled by OPSEL: 0=src[31:0], 1=src[31:0], 2=src[15:0],
3=src[31:16]. Also, for MAD_MIX, the NEG_HI field acts instead as
an absolute-value modifier.
Parameter interpolation.
Parameter interpolation.
VOP3B: this encoding allows specifying a unique scalar destination, and is used only for:
V_ADD_CO_U32
V_SUB_CO_U32
V_SUBREV_CO_U32
V_ADDC_CO_U32
V_SUBB_CO_U32
V_SUBBREV_CO_U32
V_DIV_SCALE_F32
V_DIV_SCALE_F64
V_MAD_U64_U32
V_MAD_I64_I32
448 V_MAD_LEGACY_F32 D.f = S0.f * S1.f + S2.f. // DX9 rules, 0.0 * x = 0.0
452 V_CUBEID_F32 D.f = cubemap face ID ({0.0, 1.0, ..., 5.0}). XYZ coordinate is
given in (S0.f, S1.f, S2.f).
Cubemap Face ID determination. Result is a floating point face
ID.
S0.f = x
S1.f = y
S2.f = z
If (Abs(S2.f) >= Abs(S0.f) && Abs(S2.f) >= Abs(S1.f))
If (S2.f < 0) D.f = 5.0
Else D.f = 4.0
Else if (Abs(S1.f) >= Abs(S0.f))
If (S1.f < 0) D.f = 3.0
Else D.f = 2.0
Else
If (S0.f < 0) D.f = 1.0
Else D.f = 0.0
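The face-ID selection above transcribes directly into Python (a model of the selection logic only, with no claim about denormal or NaN handling):

```python
def v_cubeid_f32(x, y, z):
    # pick the cubemap face from the dominant axis and its sign
    if abs(z) >= abs(x) and abs(z) >= abs(y):
        return 5.0 if z < 0 else 4.0
    elif abs(y) >= abs(x):
        return 3.0 if y < 0 else 2.0
    else:
        return 1.0 if x < 0 else 0.0
```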
455 V_CUBEMA_F32 D.f = 2.0 * cubemap major axis. XYZ coordinate is given in
(S0.f, S1.f, S2.f).
S0.f = x
S1.f = y
S2.f = z
If (Abs(S2.f) >= Abs(S0.f) && Abs(S2.f) >= Abs(S1.f))
D.f = 2.0*S2.f
Else if (Abs(S1.f) >= Abs(S0.f))
D.f = 2.0 * S1.f
Else
D.f = 2.0 * S0.f
456 V_BFE_U32 D.u = (S0.u >> S1.u[4:0]) & ((1 << S2.u[4:0]) - 1).
457 V_BFE_I32 D.i = (S0.i >> S1.u[4:0]) & ((1 << S2.u[4:0]) - 1).
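V_BFE_U32 extracts S2.u[4:0] bits of S0 starting at bit S1.u[4:0]; in Python:

```python
def v_bfe_u32(s0, s1, s2):
    # unsigned bitfield extract: take 'size' bits of s0 starting at 'offset'
    offset = s1 & 0x1F
    size = s2 & 0x1F
    return (s0 >> offset) & ((1 << size) - 1)
```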
Bitfield insert.
Byte permute.
`LL' stands for `two LDS arguments'. attr_word selects the high or
low half 16 bits of each LDS dword accessed. This opcode is
available for 32-bank LDS only.
`LV' stands for `One LDS and one VGPR argument'. S2 holds two
parameters, attr_word selects the high or low word of the VGPR for
this calculation, as well as the high or low half of the LDS data.
Meant for use with 16-bank LDS.
649 V_READLANE_B32 Copy one VGPR value to one SGPR. D = SGPR-dest, S0 = Source Data
(VGPR# or M0(lds-direct)), S1 = Lane Select (SGPR or M0). Ignores
exec mask.
650 V_WRITELANE_B32 Write value into one VGPR in one lane. D = VGPR-dest, S0 = Source
Data (sgpr, m0, exec or constants), S1 = Lane Select (SGPR or M0).
Ignores exec mask.
Bit count.
where:
OFFSET0 = Unsigned byte offset added to the address from the ADDR VGPR.
OFFSET1 = Unsigned byte offset added to the address from the ADDR VGPR.
GDS = Set if GDS, cleared if LDS.
OP = DS instructions.
ADDR = Source LDS address VGPR 0 - 255.
DATA0 = Source data0 VGPR 0 - 255.
DATA1 = Source data1 VGPR 0 - 255.
VDST = Destination VGPR 0 - 255.
All instructions with RTN in the name return the value that was in memory
before the operation was performed.
0 DS_ADD_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA;
RETURN_DATA = tmp.
1 DS_SUB_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA;
RETURN_DATA = tmp.
2 DS_RSUB_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA - MEM[ADDR];
RETURN_DATA = tmp.
3 DS_INC_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp >= DATA) ? 0 : tmp + 1; // unsigned compare
RETURN_DATA = tmp.
4 DS_DEC_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp == 0 || tmp > DATA) ? DATA : tmp - 1; //
unsigned compare
RETURN_DATA = tmp.
5 DS_MIN_I32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
6 DS_MAX_I32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
7 DS_MIN_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // unsigned compare
RETURN_DATA = tmp.
8 DS_MAX_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // unsigned compare
RETURN_DATA = tmp.
9 DS_AND_B32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] &= DATA;
RETURN_DATA = tmp.
10 DS_OR_B32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] |= DATA;
RETURN_DATA = tmp.
11 DS_XOR_B32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] ^= DATA;
RETURN_DATA = tmp.
12 DS_MSKOR_B32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (MEM[ADDR] & ~DATA) | DATA2;
RETURN_DATA = tmp.
Masked dword OR, D0 contains the mask and D1 contains the new
value.
13 DS_WRITE_B32 // 32bit
MEM[ADDR] = DATA.
Write dword.
14 DS_WRITE2_B32 // 32bit
MEM[ADDR_BASE + OFFSET0 * 4] = DATA;
MEM[ADDR_BASE + OFFSET1 * 4] = DATA2.
Write 2 dwords.
15 DS_WRITE2ST64_B32 // 32bit
MEM[ADDR_BASE + OFFSET0 * 4 * 64] = DATA;
MEM[ADDR_BASE + OFFSET1 * 4 * 64] = DATA2.
Write 2 dwords.
16 DS_CMPST_B32 // 32bit
tmp = MEM[ADDR];
src = DATA2;
cmp = DATA;
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
Compare and store. Caution: the order of src and cmp is the
*opposite* of the BUFFER_ATOMIC_CMPSWAP opcode.
17 DS_CMPST_F32 // 32bit
tmp = MEM[ADDR];
src = DATA2;
cmp = DATA;
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
18 DS_MIN_F32 // 32bit
tmp = MEM[ADDR];
src = DATA;
cmp = DATA2;
MEM[ADDR] = (cmp < tmp) ? src : tmp.
19 DS_MAX_F32 // 32bit
tmp = MEM[ADDR];
src = DATA;
cmp = DATA2;
MEM[ADDR] = (tmp > cmp) ? src : tmp.
20 DS_NOP Do nothing.
21 DS_ADD_F32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA;
RETURN_DATA = tmp.
29 DS_WRITE_ADDTID_B32 // 32bit
MEM[ADDR_BASE + OFFSET + M0.OFFSET + TID*4] = DATA.
Write dword.
Byte write.
Short write.
32 DS_ADD_RTN_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA;
RETURN_DATA = tmp.
33 DS_SUB_RTN_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA;
RETURN_DATA = tmp.
34 DS_RSUB_RTN_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA - MEM[ADDR];
RETURN_DATA = tmp.
35 DS_INC_RTN_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp >= DATA) ? 0 : tmp + 1; // unsigned compare
RETURN_DATA = tmp.
36 DS_DEC_RTN_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp == 0 || tmp > DATA) ? DATA : tmp - 1; //
unsigned compare
RETURN_DATA = tmp.
37 DS_MIN_RTN_I32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
38 DS_MAX_RTN_I32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
39 DS_MIN_RTN_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // unsigned compare
RETURN_DATA = tmp.
40 DS_MAX_RTN_U32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // unsigned compare
RETURN_DATA = tmp.
41 DS_AND_RTN_B32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] &= DATA;
RETURN_DATA = tmp.
42 DS_OR_RTN_B32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] |= DATA;
RETURN_DATA = tmp.
43 DS_XOR_RTN_B32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] ^= DATA;
RETURN_DATA = tmp.
44 DS_MSKOR_RTN_B32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (MEM[ADDR] & ~DATA) | DATA2;
RETURN_DATA = tmp.
Masked dword OR, D0 contains the mask and D1 contains the new
value.
Write-exchange operation.
48 DS_CMPST_RTN_B32 // 32bit
tmp = MEM[ADDR];
src = DATA2;
cmp = DATA;
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
Compare and store. Caution: the order of src and cmp is the
*opposite* of the BUFFER_ATOMIC_CMPSWAP opcode.
49 DS_CMPST_RTN_F32 // 32bit
tmp = MEM[ADDR];
src = DATA2;
cmp = DATA;
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
50 DS_MIN_RTN_F32 // 32bit
tmp = MEM[ADDR];
src = DATA;
cmp = DATA2;
MEM[ADDR] = (cmp < tmp) ? src : tmp.
51 DS_MAX_RTN_F32 // 32bit
tmp = MEM[ADDR];
src = DATA;
cmp = DATA2;
MEM[ADDR] = (tmp > cmp) ? src : tmp.
53 DS_ADD_RTN_F32 // 32bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA;
RETURN_DATA = tmp.
Dword read.
Read 2 dwords.
Read 2 dwords.
Forward permute. This does not access LDS memory and may be
called even if no LDS memory is allocated to the wave. It uses
LDS hardware to implement an arbitrary swizzle across threads
in a wavefront.
VGPR[SRC0] = { A, B, C, D }
VGPR[ADDR] = { 0, 0, 12, 4 }
EXEC = 0xF, OFFSET = 0
VGPR[VDST] := { B, D, 0, C }
VGPR[SRC0] = { A, B, C, D }
VGPR[ADDR] = { 0, 0, 12, 4 }
EXEC = 0xA, OFFSET = 0
VGPR[VDST] := { -, D, -, 0 }
Backward permute. This does not access LDS memory and may be
called even if no LDS memory is allocated to the wave. It uses
LDS hardware to implement an arbitrary swizzle across threads
in a wavefront.
Note that EXEC mask is applied to both VGPR read and write. If
src_lane selects a disabled thread, zero will be returned.
VGPR[SRC0] = { A, B, C, D }
VGPR[ADDR] = { 0, 0, 12, 4 }
EXEC = 0xF, OFFSET = 0
VGPR[VDST] := { A, A, D, B }
VGPR[SRC0] = { A, B, C, D }
VGPR[ADDR] = { 0, 0, 12, 4 }
EXEC = 0xA, OFFSET = 0
VGPR[VDST] := { -, 0, -, B }
64 DS_ADD_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA[0:1];
RETURN_DATA[0:1] = tmp.
65 DS_SUB_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA[0:1];
RETURN_DATA[0:1] = tmp.
66 DS_RSUB_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA[0:1] - MEM[ADDR];
RETURN_DATA[0:1] = tmp.
67 DS_INC_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp >= DATA[0:1]) ? 0 : tmp + 1; // unsigned
compare
RETURN_DATA[0:1] = tmp.
68 DS_DEC_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp == 0 || tmp > DATA[0:1]) ? DATA[0:1] : tmp -
1; // unsigned compare
RETURN_DATA[0:1] = tmp.
69 DS_MIN_I64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA[0:1] < tmp) ? DATA[0:1] : tmp; // signed
compare
RETURN_DATA[0:1] = tmp.
70 DS_MAX_I64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA[0:1] > tmp) ? DATA[0:1] : tmp; // signed
compare
RETURN_DATA[0:1] = tmp.
71 DS_MIN_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA[0:1] < tmp) ? DATA[0:1] : tmp; // unsigned
compare
RETURN_DATA[0:1] = tmp.
72 DS_MAX_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA[0:1] > tmp) ? DATA[0:1] : tmp; // unsigned
compare
RETURN_DATA[0:1] = tmp.
73 DS_AND_B64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] &= DATA[0:1];
RETURN_DATA[0:1] = tmp.
74 DS_OR_B64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] |= DATA[0:1];
RETURN_DATA[0:1] = tmp.
75 DS_XOR_B64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] ^= DATA[0:1];
RETURN_DATA[0:1] = tmp.
76 DS_MSKOR_B64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = (MEM[ADDR] & ~DATA) | DATA2;
RETURN_DATA = tmp.
Masked dword OR, D0 contains the mask and D1 contains the new
value.
77 DS_WRITE_B64 // 64bit
MEM[ADDR] = DATA.
Write qword.
78 DS_WRITE2_B64 // 64bit
MEM[ADDR_BASE + OFFSET0 * 8] = DATA;
MEM[ADDR_BASE + OFFSET1 * 8] = DATA2.
Write 2 qwords.
79 DS_WRITE2ST64_B64 // 64bit
MEM[ADDR_BASE + OFFSET0 * 8 * 64] = DATA;
MEM[ADDR_BASE + OFFSET1 * 8 * 64] = DATA2.
Write 2 qwords.
80 DS_CMPST_B64 // 64bit
tmp = MEM[ADDR];
src = DATA2;
cmp = DATA;
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
Compare and store. Caution: the order of src and cmp is the
*opposite* of the BUFFER_ATOMIC_CMPSWAP_X2 opcode.
81 DS_CMPST_F64 // 64bit
tmp = MEM[ADDR];
src = DATA2;
cmp = DATA;
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
82 DS_MIN_F64 // 64bit
tmp = MEM[ADDR];
src = DATA;
cmp = DATA2;
MEM[ADDR] = (cmp < tmp) ? src : tmp.
83 DS_MAX_F64 // 64bit
tmp = MEM[ADDR];
src = DATA;
cmp = DATA2;
MEM[ADDR] = (tmp > cmp) ? src : tmp.
96 DS_ADD_RTN_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA[0:1];
RETURN_DATA[0:1] = tmp.
97 DS_SUB_RTN_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA[0:1];
RETURN_DATA[0:1] = tmp.
98 DS_RSUB_RTN_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA[0:1] - MEM[ADDR];
RETURN_DATA[0:1] = tmp.
99 DS_INC_RTN_U64 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp >= DATA[0:1]) ? 0 : tmp + 1; // unsigned
compare
RETURN_DATA[0:1] = tmp.
Masked dword OR, D0 contains the mask and D1 contains the new
value.
Write-exchange operation.
Compare and store. Caution: the order of src and cmp is the
*opposite* of the BUFFER_ATOMIC_CMPSWAP_X2 opcode.
Read 1 qword.
Read 2 qwords.
Read 2 qwords.
Uint decrement.
Write dword.
152 DS_GWS_SEMA_RELEASE_ALL GDS Only: The GWS resource (rid) indicated will process this
opcode by updating the counter and labeling the specified
resource as a semaphore.
154 DS_GWS_SEMA_V GDS Only: The GWS resource indicated will process this opcode
by updating the counter and labeling the resource as a
semaphore.
This action will release one wave if any are queued in this
resource.
155 DS_GWS_SEMA_BR GDS Only: The GWS resource indicated will process this opcode
by updating the counter by the bulk release delivered count and
labeling the resource as a semaphore.
156 DS_GWS_SEMA_P GDS Only: The GWS resource indicated will process this opcode
by queueing it until the counter enables a release, and then
decrementing the counter of the resource as a semaphore.
157 DS_GWS_BARRIER GDS Only: The GWS resource indicated will process this opcode
by queueing it until the barrier is satisfied. The number of waves
needed is passed in as the DATA of the first valid thread.
Since the waves deliver the count for the next barrier, this
function can have a different-size barrier for each occurrence.
// Release Machine
if(state.type == BARRIER) then
if(state.flag != thread.flag) then
return rd_done;
endif;
endif.
Dword read.
189 DS_CONSUME LDS & GDS. Subtract (count_bits(exec_mask)) from the value
stored in DS memory at (M0.base + instr_offset). Return the
pre-operation value to VGPRs.
190 DS_APPEND LDS & GDS. Add (count_bits(exec_mask)) to the value stored in
DS memory at (M0.base + instr_offset). Return the pre-operation
value to VGPRs.
Uint decrement.
Write qword.
Tri-dword write.
Quad-dword write.
The offset field contains a 5-bit xor-mask, a 5-bit or-mask, and a 5-bit and-mask used to
generate a thread mapping. Note that the offset bits apply to each group of 32 within a
wavefront. The details of the thread mapping are listed below. Some example usages:
SWAPX16 : xor_mask = 0x10, or_mask = 0x00, and_mask = 0x1f
SWAPX8 : xor_mask = 0x08, or_mask = 0x00, and_mask = 0x1f
SWAPX4 : xor_mask = 0x04, or_mask = 0x00, and_mask = 0x1f
SWAPX2 : xor_mask = 0x02, or_mask = 0x00, and_mask = 0x1f
SWAPX1 : xor_mask = 0x01, or_mask = 0x00, and_mask = 0x1f
REVERSEX32 : xor_mask = 0x1f, or_mask = 0x00, and_mask = 0x1f
REVERSEX16 : xor_mask = 0x0f, or_mask = 0x00, and_mask = 0x1f
REVERSEX8 : xor_mask = 0x07, or_mask = 0x00, and_mask = 0x1f
REVERSEX4 : xor_mask = 0x03, or_mask = 0x00, and_mask = 0x1f
REVERSEX2 : xor_mask = 0x01, or_mask = 0x00, and_mask = 0x1f
BCASTX32: xor_mask = 0x00, or_mask = thread, and_mask = 0x00
BCASTX16: xor_mask = 0x00, or_mask = thread, and_mask = 0x10
BCASTX8: xor_mask = 0x00, or_mask = thread, and_mask = 0x18
BCASTX4: xor_mask = 0x00, or_mask = thread, and_mask = 0x1c
BCASTX2: xor_mask = 0x00, or_mask = thread, and_mask = 0x1e
Pseudocode follows:
offset = offset1:offset0;
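Under the usual formulation of this swizzle (an assumption, but one consistent with the mask examples above), each lane reads from source lane ((lane & and_mask) | or_mask) ^ xor_mask within its group of 32:

```python
def swizzle_src_lane(lane, and_mask, or_mask, xor_mask):
    # 'lane' is the index within a group of 32 threads of the wavefront
    return (((lane & and_mask) | or_mask) ^ xor_mask) & 0x1F

# e.g. SWAPX1 pairs neighboring lanes; REVERSEX32 mirrors the whole group
```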
• DS_GWS_SEMA_RELEASE_ALL
• DS_GWS_INIT
• DS_GWS_SEMA_V
• DS_GWS_SEMA_BR
• DS_GWS_SEMA_P
• DS_GWS_BARRIER
• DS_ORDERED_COUNT
where:
62 BUFFER_WBINVL1 Write back and invalidate the shader L1. Returns ACK
to shader.
63 BUFFER_WBINVL1_VOL Write back and invalidate the shader L1 only for lines
that are marked volatile. Returns ACK to shader.
64 BUFFER_ATOMIC_SWAP // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA;
RETURN_DATA = tmp.
65 BUFFER_ATOMIC_CMPSWAP // 32bit
tmp = MEM[ADDR];
src = DATA[0];
cmp = DATA[1];
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
66 BUFFER_ATOMIC_ADD // 32bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA;
RETURN_DATA = tmp.
67 BUFFER_ATOMIC_SUB // 32bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA;
RETURN_DATA = tmp.
68 BUFFER_ATOMIC_SMIN // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // signed
compare
RETURN_DATA = tmp.
69 BUFFER_ATOMIC_UMIN // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // unsigned
compare
RETURN_DATA = tmp.
70 BUFFER_ATOMIC_SMAX // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // signed
compare
RETURN_DATA = tmp.
71 BUFFER_ATOMIC_UMAX // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // unsigned
compare
RETURN_DATA = tmp.
72 BUFFER_ATOMIC_AND // 32bit
tmp = MEM[ADDR];
MEM[ADDR] &= DATA;
RETURN_DATA = tmp.
73 BUFFER_ATOMIC_OR // 32bit
tmp = MEM[ADDR];
MEM[ADDR] |= DATA;
RETURN_DATA = tmp.
74 BUFFER_ATOMIC_XOR // 32bit
tmp = MEM[ADDR];
MEM[ADDR] ^= DATA;
RETURN_DATA = tmp.
75 BUFFER_ATOMIC_INC // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp >= DATA) ? 0 : tmp + 1; // unsigned
compare
RETURN_DATA = tmp.
76 BUFFER_ATOMIC_DEC // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp == 0 || tmp > DATA) ? DATA : tmp - 1;
// unsigned compare
RETURN_DATA = tmp.
96 BUFFER_ATOMIC_SWAP_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA[0:1];
RETURN_DATA[0:1] = tmp.
97 BUFFER_ATOMIC_CMPSWAP_X2 // 64bit
tmp = MEM[ADDR];
src = DATA[0:1];
cmp = DATA[2:3];
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0:1] = tmp.
98 BUFFER_ATOMIC_ADD_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA[0:1];
RETURN_DATA[0:1] = tmp.
99 BUFFER_ATOMIC_SUB_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA[0:1];
RETURN_DATA[0:1] = tmp.
where:
where:
3 IMAGE_LOAD_PCK_SGN Image memory load with no format conversion and sign
extension. No sampler.
16 IMAGE_ATOMIC_SWAP // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA;
RETURN_DATA = tmp.
17 IMAGE_ATOMIC_CMPSWAP // 32bit
tmp = MEM[ADDR];
src = DATA[0];
cmp = DATA[1];
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
18 IMAGE_ATOMIC_ADD // 32bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA;
RETURN_DATA = tmp.
19 IMAGE_ATOMIC_SUB // 32bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA;
RETURN_DATA = tmp.
20 IMAGE_ATOMIC_SMIN // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
21 IMAGE_ATOMIC_UMIN // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // unsigned
compare
RETURN_DATA = tmp.
22 IMAGE_ATOMIC_SMAX // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
23 IMAGE_ATOMIC_UMAX // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // unsigned
compare
RETURN_DATA = tmp.
24 IMAGE_ATOMIC_AND // 32bit
tmp = MEM[ADDR];
MEM[ADDR] &= DATA;
RETURN_DATA = tmp.
25 IMAGE_ATOMIC_OR // 32bit
tmp = MEM[ADDR];
MEM[ADDR] |= DATA;
RETURN_DATA = tmp.
26 IMAGE_ATOMIC_XOR // 32bit
tmp = MEM[ADDR];
MEM[ADDR] ^= DATA;
RETURN_DATA = tmp.
27 IMAGE_ATOMIC_INC // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp >= DATA) ? 0 : tmp + 1; // unsigned
compare
RETURN_DATA = tmp.
28 IMAGE_ATOMIC_DEC // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp == 0 || tmp > DATA) ? DATA : tmp - 1; //
unsigned compare
RETURN_DATA = tmp.
104 IMAGE_SAMPLE_CD sample texture map, with user derivatives (LOD per quad)
105 IMAGE_SAMPLE_CD_CL sample texture map, with LOD clamp specified in shader,
with user derivatives (LOD per quad).
107 IMAGE_SAMPLE_C_CD_CL SAMPLE_C, with LOD clamp specified in shader, with user
derivatives (LOD per quad).
109 IMAGE_SAMPLE_CD_CL_O SAMPLE_O, with LOD clamp specified in shader, with user
derivatives (LOD per quad).
111 IMAGE_SAMPLE_C_CD_CL_O SAMPLE_C_O, with LOD clamp specified in shader, with user
derivatives (LOD per quad).
where:
64 FLAT_ATOMIC_SWAP // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA;
RETURN_DATA = tmp.
65 FLAT_ATOMIC_CMPSWAP // 32bit
tmp = MEM[ADDR];
src = DATA[0];
cmp = DATA[1];
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
66 FLAT_ATOMIC_ADD // 32bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA;
RETURN_DATA = tmp.
67 FLAT_ATOMIC_SUB // 32bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA;
RETURN_DATA = tmp.
68 FLAT_ATOMIC_SMIN // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
69 FLAT_ATOMIC_UMIN // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // unsigned
compare
RETURN_DATA = tmp.
70 FLAT_ATOMIC_SMAX // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
71 FLAT_ATOMIC_UMAX // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // unsigned
compare
RETURN_DATA = tmp.
72 FLAT_ATOMIC_AND // 32bit
tmp = MEM[ADDR];
MEM[ADDR] &= DATA;
RETURN_DATA = tmp.
73 FLAT_ATOMIC_OR // 32bit
tmp = MEM[ADDR];
MEM[ADDR] |= DATA;
RETURN_DATA = tmp.
74 FLAT_ATOMIC_XOR // 32bit
tmp = MEM[ADDR];
MEM[ADDR] ^= DATA;
RETURN_DATA = tmp.
75 FLAT_ATOMIC_INC // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp >= DATA) ? 0 : tmp + 1; // unsigned
compare
RETURN_DATA = tmp.
76 FLAT_ATOMIC_DEC // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp == 0 || tmp > DATA) ? DATA : tmp - 1; //
unsigned compare
RETURN_DATA = tmp.
96 FLAT_ATOMIC_SWAP_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA[0:1];
RETURN_DATA[0:1] = tmp.
97 FLAT_ATOMIC_CMPSWAP_X2 // 64bit
tmp = MEM[ADDR];
src = DATA[0:1];
cmp = DATA[2:3];
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0:1] = tmp.
98 FLAT_ATOMIC_ADD_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA[0:1];
RETURN_DATA[0:1] = tmp.
99 FLAT_ATOMIC_SUB_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA[0:1];
RETURN_DATA[0:1] = tmp.
64 GLOBAL_ATOMIC_SWAP // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA;
RETURN_DATA = tmp.
65 GLOBAL_ATOMIC_CMPSWAP // 32bit
tmp = MEM[ADDR];
src = DATA[0];
cmp = DATA[1];
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0] = tmp.
66 GLOBAL_ATOMIC_ADD // 32bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA;
RETURN_DATA = tmp.
67 GLOBAL_ATOMIC_SUB // 32bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA;
RETURN_DATA = tmp.
68 GLOBAL_ATOMIC_SMIN // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
69 GLOBAL_ATOMIC_UMIN // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA < tmp) ? DATA : tmp; // unsigned compare
RETURN_DATA = tmp.
70 GLOBAL_ATOMIC_SMAX // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // signed compare
RETURN_DATA = tmp.
71 GLOBAL_ATOMIC_UMAX // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (DATA > tmp) ? DATA : tmp; // unsigned compare
RETURN_DATA = tmp.
72 GLOBAL_ATOMIC_AND // 32bit
tmp = MEM[ADDR];
MEM[ADDR] &= DATA;
RETURN_DATA = tmp.
73 GLOBAL_ATOMIC_OR // 32bit
tmp = MEM[ADDR];
MEM[ADDR] |= DATA;
RETURN_DATA = tmp.
74 GLOBAL_ATOMIC_XOR // 32bit
tmp = MEM[ADDR];
MEM[ADDR] ^= DATA;
RETURN_DATA = tmp.
75 GLOBAL_ATOMIC_INC // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp >= DATA) ? 0 : tmp + 1; // unsigned compare
RETURN_DATA = tmp.
76 GLOBAL_ATOMIC_DEC // 32bit
tmp = MEM[ADDR];
MEM[ADDR] = (tmp == 0 || tmp > DATA) ? DATA : tmp - 1; // unsigned compare
RETURN_DATA = tmp.
96 GLOBAL_ATOMIC_SWAP_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] = DATA[0:1];
RETURN_DATA[0:1] = tmp.
97 GLOBAL_ATOMIC_CMPSWAP_X2 // 64bit
tmp = MEM[ADDR];
src = DATA[0:1];
cmp = DATA[2:3];
MEM[ADDR] = (tmp == cmp) ? src : tmp;
RETURN_DATA[0:1] = tmp.
98 GLOBAL_ATOMIC_ADD_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] += DATA[0:1];
RETURN_DATA[0:1] = tmp.
99 GLOBAL_ATOMIC_SUB_X2 // 64bit
tmp = MEM[ADDR];
MEM[ADDR] -= DATA[0:1];
RETURN_DATA[0:1] = tmp.
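The wrap-around INC and DEC atomics above have the least obvious semantics of the set. A minimal Python model of their behavior (illustrative only, not a hardware description; memory is modeled as a plain dict, and the returned value is the pre-op value, as delivered when GLC=1):

```python
def atomic_inc(mem, addr, data):
    """FLAT/GLOBAL_ATOMIC_INC: increment, wrapping to 0 once the old
    value reaches DATA (unsigned compare)."""
    tmp = mem[addr]
    mem[addr] = 0 if tmp >= data else tmp + 1
    return tmp  # pre-op value

def atomic_dec(mem, addr, data):
    """FLAT/GLOBAL_ATOMIC_DEC: decrement, reloading DATA when the old
    value is 0 or exceeds DATA (unsigned compare)."""
    tmp = mem[addr]
    mem[addr] = data if (tmp == 0 or tmp > data) else tmp - 1
    return tmp  # pre-op value

mem = {0x100: 3}
atomic_inc(mem, 0x100, 3)  # old value 3 >= DATA 3, so the counter wraps to 0
```

Used as a pair with the same DATA, these implement a wrapping ring-buffer counter in the range [0, DATA].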
12.19.1. DPP
The following instructions cannot use DPP:
• V_MADMK_F32
• V_MADAK_F32
• V_MADMK_F16
• V_MADAK_F16
• V_READFIRSTLANE_B32
• V_CVT_I32_F64
• V_CVT_F64_I32
• V_CVT_F32_F64
• V_CVT_F64_F32
• V_CVT_U32_F64
• V_CVT_F64_U32
• V_TRUNC_F64
• V_CEIL_F64
• V_RNDNE_F64
• V_FLOOR_F64
• V_RCP_F64
• V_RSQ_F64
• V_SQRT_F64
• V_FREXP_EXP_I32_F64
• V_FREXP_MANT_F64
• V_FRACT_F64
• V_CLREXCP
• V_SWAP_B32
• V_CMP_CLASS_F64
• V_CMPX_CLASS_F64
• V_CMP_*_F64
• V_CMPX_*_F64
• V_CMP_*_I64
• V_CMP_*_U64
• V_CMPX_*_I64
• V_CMPX_*_U64
12.19.2. SDWA
The following instructions cannot use SDWA:
• V_MAC_F32
• V_MADMK_F32
• V_MADAK_F32
• V_MAC_F16
• V_MADMK_F16
• V_MADAK_F16
• V_FMAC_F32
• V_READFIRSTLANE_B32
• V_CLREXCP
• V_SWAP_B32
Endian Order - The GCN architecture addresses memory and registers using little-endian byte
ordering and bit ordering. Multi-byte values are stored with their least-significant (low-order) byte
(LSB) at the lowest byte address, and they are illustrated with their LSB at the right side. Byte
values are stored with their least-significant (low-order) bit (lsb) at the lowest bit address, and
they are illustrated with their lsb at the right side.
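The byte-ordering rule can be demonstrated with Python's struct module (an illustrative sketch, not part of the ISA):

```python
import struct

# A 32-bit value stored little-endian: the least-significant byte (LSB)
# sits at the lowest byte address.
word = 0x0A0B0C0D
memory_image = struct.pack("<I", word)          # bytes in address order
assert memory_image == b"\x0D\x0C\x0B\x0A"      # LSB 0x0D comes first

# Reading the same bytes back as a little-endian dword recovers the value.
assert struct.unpack("<I", memory_image)[0] == word
```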
The table below summarizes the microcode formats and their widths. The sections that follow
provide details.
SOP2 SOP2 32
SOP1 SOP1 32
SOPK SOPK 32
SOPP SOPP 32
SOPC SOPC 32
SMEM SMEM 64
VOP1 VOP1 32
VOP2 VOP2 32
VOPC VOPC 32
VOP3A VOP3A 64
VOP3B VOP3B 64
VOP3P VOP3P 64
DPP DPP 32
SDWA VOP2 32
VINTRP VINTRP 32
LDS/GDS Format
DS DS 64
MTBUF MTBUF 64
MUBUF MUBUF 64
MIMG MIMG 64
Export Format
EXP EXP 64
Flat Formats
FLAT FLAT 64
GLOBAL GLOBAL 64
SCRATCH SCRATCH 64
The field-definition tables that accompany the descriptions in the sections below use the
following notation.
The default value of all fields is zero. Any bitfield not identified is assumed to be reserved.
Instruction Suffixes
Most instructions include a suffix which indicates the data type the instruction handles. This
suffix may also include a number which indicates the size of the data.
For example: "F32" indicates "32-bit floating-point data", and "B16" indicates "16-bit binary data".
• B = binary
• F = floating point
• U = unsigned integer
• I = signed integer
When more than one data-type specifier occurs in an instruction, the first one is the result type
and size, and the later one(s) is/are the input data type and size. For example, V_CVT_F32_U32
converts a 32-bit unsigned integer into a 32-bit float.
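As a sketch of this naming convention, a small helper (hypothetical; not part of any toolchain) can split the type/size suffixes out of a mnemonic. Note the opcode tables in this chapter write signed integers with "I" (e.g. I32, I16):

```python
import re

# Suffix letters observed in the opcode tables; "S" is kept as a
# defensive alias for signed integers.
KINDS = {"B": "binary", "F": "float", "U": "unsigned", "I": "signed", "S": "signed"}

def decode_suffixes(mnemonic):
    """Return the (kind, size) specifiers of a mnemonic, in order.
    e.g. V_CVT_F32_U32 -> [('float', 32), ('unsigned', 32)]."""
    return [(KINDS[k], int(sz))
            for k, sz in re.findall(r"_([BFUIS])(\d+)", mnemonic)]
```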
13.1.1. SOP2
Scalar format with Two inputs, one output
Format SOP2
Description This is a scalar instruction with two inputs and one output. Can be followed
by a 32-bit literal constant.
Opcode # Name
0 S_ADD_U32
1 S_SUB_U32
2 S_ADD_I32
3 S_SUB_I32
4 S_ADDC_U32
5 S_SUBB_U32
6 S_MIN_I32
7 S_MIN_U32
8 S_MAX_I32
9 S_MAX_U32
10 S_CSELECT_B32
11 S_CSELECT_B64
12 S_AND_B32
13 S_AND_B64
14 S_OR_B32
15 S_OR_B64
16 S_XOR_B32
17 S_XOR_B64
18 S_ANDN2_B32
19 S_ANDN2_B64
20 S_ORN2_B32
21 S_ORN2_B64
22 S_NAND_B32
23 S_NAND_B64
24 S_NOR_B32
25 S_NOR_B64
26 S_XNOR_B32
27 S_XNOR_B64
28 S_LSHL_B32
29 S_LSHL_B64
30 S_LSHR_B32
31 S_LSHR_B64
32 S_ASHR_I32
33 S_ASHR_I64
34 S_BFM_B32
35 S_BFM_B64
36 S_MUL_I32
37 S_BFE_U32
38 S_BFE_I32
39 S_BFE_U64
40 S_BFE_I64
41 S_CBRANCH_G_FORK
42 S_ABSDIFF_I32
43 S_RFE_RESTORE_B64
44 S_MUL_HI_U32
45 S_MUL_HI_I32
46 S_LSHL1_ADD_U32
47 S_LSHL2_ADD_U32
48 S_LSHL3_ADD_U32
49 S_LSHL4_ADD_U32
50 S_PACK_LL_B32_B16
51 S_PACK_LH_B32_B16
52 S_PACK_HH_B32_B16
13.1.2. SOPK
Format SOPK
Description This is a scalar instruction with one 16-bit signed immediate (SIMM16)
input and a single destination. Instructions which take 2 inputs use the
destination as the second input.
SDST [22:16] Scalar destination; can also provide the second source operand.
0-101 SGPR0 to SGPR101: Scalar general-purpose registers.
102 FLAT_SCRATCH_LO.
103 FLAT_SCRATCH_HI.
104 XNACK_MASK_LO.
105 XNACK_MASK_HI.
106 VCC_LO: vcc[31:0].
107 VCC_HI: vcc[63:32].
108-123 TTMP0 - TTMP15: Trap handler temporary registers.
124 M0. Memory register 0.
125 Reserved
126 EXEC_LO: exec[31:0].
127 EXEC_HI: exec[63:32].
Opcode # Name
0 S_MOVK_I32
1 S_CMOVK_I32
2 S_CMPK_EQ_I32
3 S_CMPK_LG_I32
4 S_CMPK_GT_I32
5 S_CMPK_GE_I32
6 S_CMPK_LT_I32
7 S_CMPK_LE_I32
8 S_CMPK_EQ_U32
9 S_CMPK_LG_U32
10 S_CMPK_GT_U32
11 S_CMPK_GE_U32
12 S_CMPK_LT_U32
13 S_CMPK_LE_U32
14 S_ADDK_I32
15 S_MULK_I32
16 S_CBRANCH_I_FORK
17 S_GETREG_B32
18 S_SETREG_B32
20 S_SETREG_IMM32_B32
21 S_CALL_B64
13.1.3. SOP1
Format SOP1
Description This is a scalar instruction with one input and one output. Can be followed
by a 32-bit literal constant.
Opcode # Name
0 S_MOV_B32
1 S_MOV_B64
2 S_CMOV_B32
3 S_CMOV_B64
4 S_NOT_B32
5 S_NOT_B64
6 S_WQM_B32
7 S_WQM_B64
8 S_BREV_B32
9 S_BREV_B64
10 S_BCNT0_I32_B32
11 S_BCNT0_I32_B64
12 S_BCNT1_I32_B32
13 S_BCNT1_I32_B64
14 S_FF0_I32_B32
15 S_FF0_I32_B64
16 S_FF1_I32_B32
17 S_FF1_I32_B64
18 S_FLBIT_I32_B32
19 S_FLBIT_I32_B64
20 S_FLBIT_I32
21 S_FLBIT_I32_I64
22 S_SEXT_I32_I8
23 S_SEXT_I32_I16
24 S_BITSET0_B32
25 S_BITSET0_B64
26 S_BITSET1_B32
27 S_BITSET1_B64
28 S_GETPC_B64
29 S_SETPC_B64
30 S_SWAPPC_B64
31 S_RFE_B64
32 S_AND_SAVEEXEC_B64
33 S_OR_SAVEEXEC_B64
34 S_XOR_SAVEEXEC_B64
35 S_ANDN2_SAVEEXEC_B64
36 S_ORN2_SAVEEXEC_B64
37 S_NAND_SAVEEXEC_B64
38 S_NOR_SAVEEXEC_B64
39 S_XNOR_SAVEEXEC_B64
40 S_QUADMASK_B32
41 S_QUADMASK_B64
42 S_MOVRELS_B32
43 S_MOVRELS_B64
44 S_MOVRELD_B32
45 S_MOVRELD_B64
46 S_CBRANCH_JOIN
48 S_ABS_I32
50 S_SET_GPR_IDX_IDX
51 S_ANDN1_SAVEEXEC_B64
52 S_ORN1_SAVEEXEC_B64
53 S_ANDN1_WREXEC_B64
54 S_ANDN2_WREXEC_B64
55 S_BITREPLICATE_B64_B32
13.1.4. SOPC
Format SOPC
Description This is a scalar instruction with two inputs which are compared and
produce SCC as a result. Can be followed by a 32-bit literal constant.
Opcode # Name
0 S_CMP_EQ_I32
1 S_CMP_LG_I32
2 S_CMP_GT_I32
3 S_CMP_GE_I32
4 S_CMP_LT_I32
5 S_CMP_LE_I32
6 S_CMP_EQ_U32
7 S_CMP_LG_U32
8 S_CMP_GT_U32
9 S_CMP_GE_U32
10 S_CMP_LT_U32
11 S_CMP_LE_U32
12 S_BITCMP0_B32
13 S_BITCMP1_B32
14 S_BITCMP0_B64
15 S_BITCMP1_B64
16 S_SETVSKIP
17 S_SET_GPR_IDX_ON
18 S_CMP_EQ_U64
19 S_CMP_LG_U64
13.1.5. SOPP
Format SOPP
Description This is a scalar instruction with one 16-bit signed immediate (SIMM16)
input.
Opcode # Name
0 S_NOP
1 S_ENDPGM
2 S_BRANCH
3 S_WAKEUP
4 S_CBRANCH_SCC0
5 S_CBRANCH_SCC1
6 S_CBRANCH_VCCZ
7 S_CBRANCH_VCCNZ
8 S_CBRANCH_EXECZ
9 S_CBRANCH_EXECNZ
10 S_BARRIER
11 S_SETKILL
12 S_WAITCNT
13 S_SETHALT
14 S_SLEEP
15 S_SETPRIO
16 S_SENDMSG
17 S_SENDMSGHALT
18 S_TRAP
19 S_ICACHE_INV
20 S_INCPERFLEVEL
21 S_DECPERFLEVEL
22 S_TTRACEDATA
23 S_CBRANCH_CDBGSYS
24 S_CBRANCH_CDBGUSER
25 S_CBRANCH_CDBGSYS_OR_USER
26 S_CBRANCH_CDBGSYS_AND_USER
27 S_ENDPGM_SAVED
28 S_SET_GPR_IDX_OFF
29 S_SET_GPR_IDX_MODE
30 S_ENDPGM_ORDERED_PS_DONE
13.2.1. SMEM
Format SMEM
SBASE [5:0] SGPR-pair which provides base address or SGPR-quad which provides V#.
(LSB of SGPR address is omitted).
SDATA [12:6] SGPR which provides write data or accepts return data.
NV [15] Non-volatile
GLC [16] Global memory coherent. Forces a bypass of the L1 cache; for atomics, causes
the pre-op value to be returned.
OFFSET [52:32] An immediate signed byte offset, or the address of an SGPR holding the
unsigned byte offset. Signed offsets only work with S_LOAD/STORE.
SOFFSET [63:57] SGPR offset. Used only when SOFFSET_EN = 1. May only specify an SGPR
or M0.
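A sketch of how these bit positions compose into a field decoder, using only the fields tabulated above (the encoding-identifier bits and remaining fields are ignored; this is an illustrative model, not a full disassembler):

```python
def decode_smem_fields(word):
    """Extract a few SMEM fields from a 64-bit instruction word,
    using the bit positions from the field table above."""
    def bits(hi, lo):
        return (word >> lo) & ((1 << (hi - lo + 1)) - 1)
    return {
        "SBASE":   bits(5, 0),    # SGPR-pair/quad base (LSB of SGPR address omitted)
        "SDATA":   bits(12, 6),   # write-data / return-data SGPR
        "NV":      bits(15, 15),  # non-volatile
        "GLC":     bits(16, 16),
        "OFFSET":  bits(52, 32),  # 21-bit immediate, or SGPR address
        "SOFFSET": bits(63, 57),
    }
```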
Opcode # Name
0 S_LOAD_DWORD
1 S_LOAD_DWORDX2
2 S_LOAD_DWORDX4
3 S_LOAD_DWORDX8
4 S_LOAD_DWORDX16
5 S_SCRATCH_LOAD_DWORD
6 S_SCRATCH_LOAD_DWORDX2
7 S_SCRATCH_LOAD_DWORDX4
8 S_BUFFER_LOAD_DWORD
9 S_BUFFER_LOAD_DWORDX2
10 S_BUFFER_LOAD_DWORDX4
11 S_BUFFER_LOAD_DWORDX8
12 S_BUFFER_LOAD_DWORDX16
16 S_STORE_DWORD
17 S_STORE_DWORDX2
18 S_STORE_DWORDX4
21 S_SCRATCH_STORE_DWORD
22 S_SCRATCH_STORE_DWORDX2
23 S_SCRATCH_STORE_DWORDX4
24 S_BUFFER_STORE_DWORD
25 S_BUFFER_STORE_DWORDX2
26 S_BUFFER_STORE_DWORDX4
32 S_DCACHE_INV
33 S_DCACHE_WB
34 S_DCACHE_INV_VOL
35 S_DCACHE_WB_VOL
36 S_MEMTIME
37 S_MEMREALTIME
38 S_ATC_PROBE
39 S_ATC_PROBE_BUFFER
40 S_DCACHE_DISCARD
41 S_DCACHE_DISCARD_X2
64 S_BUFFER_ATOMIC_SWAP
65 S_BUFFER_ATOMIC_CMPSWAP
66 S_BUFFER_ATOMIC_ADD
67 S_BUFFER_ATOMIC_SUB
68 S_BUFFER_ATOMIC_SMIN
69 S_BUFFER_ATOMIC_UMIN
70 S_BUFFER_ATOMIC_SMAX
71 S_BUFFER_ATOMIC_UMAX
72 S_BUFFER_ATOMIC_AND
73 S_BUFFER_ATOMIC_OR
74 S_BUFFER_ATOMIC_XOR
75 S_BUFFER_ATOMIC_INC
76 S_BUFFER_ATOMIC_DEC
96 S_BUFFER_ATOMIC_SWAP_X2
97 S_BUFFER_ATOMIC_CMPSWAP_X2
98 S_BUFFER_ATOMIC_ADD_X2
99 S_BUFFER_ATOMIC_SUB_X2
100 S_BUFFER_ATOMIC_SMIN_X2
101 S_BUFFER_ATOMIC_UMIN_X2
102 S_BUFFER_ATOMIC_SMAX_X2
103 S_BUFFER_ATOMIC_UMAX_X2
104 S_BUFFER_ATOMIC_AND_X2
105 S_BUFFER_ATOMIC_OR_X2
106 S_BUFFER_ATOMIC_XOR_X2
107 S_BUFFER_ATOMIC_INC_X2
108 S_BUFFER_ATOMIC_DEC_X2
128 S_ATOMIC_SWAP
129 S_ATOMIC_CMPSWAP
130 S_ATOMIC_ADD
131 S_ATOMIC_SUB
132 S_ATOMIC_SMIN
133 S_ATOMIC_UMIN
134 S_ATOMIC_SMAX
135 S_ATOMIC_UMAX
136 S_ATOMIC_AND
137 S_ATOMIC_OR
138 S_ATOMIC_XOR
139 S_ATOMIC_INC
140 S_ATOMIC_DEC
160 S_ATOMIC_SWAP_X2
161 S_ATOMIC_CMPSWAP_X2
162 S_ATOMIC_ADD_X2
163 S_ATOMIC_SUB_X2
164 S_ATOMIC_SMIN_X2
165 S_ATOMIC_UMIN_X2
166 S_ATOMIC_SMAX_X2
167 S_ATOMIC_UMAX_X2
168 S_ATOMIC_AND_X2
169 S_ATOMIC_OR_X2
170 S_ATOMIC_XOR_X2
171 S_ATOMIC_INC_X2
172 S_ATOMIC_DEC_X2
13.3.1. VOP2
Format VOP2
Opcode # Name
0 V_CNDMASK_B32
1 V_ADD_F32
2 V_SUB_F32
3 V_SUBREV_F32
4 V_MUL_LEGACY_F32
5 V_MUL_F32
6 V_MUL_I32_I24
7 V_MUL_HI_I32_I24
8 V_MUL_U32_U24
9 V_MUL_HI_U32_U24
10 V_MIN_F32
11 V_MAX_F32
12 V_MIN_I32
13 V_MAX_I32
14 V_MIN_U32
15 V_MAX_U32
16 V_LSHRREV_B32
17 V_ASHRREV_I32
18 V_LSHLREV_B32
19 V_AND_B32
20 V_OR_B32
21 V_XOR_B32
22 V_MAC_F32
23 V_MADMK_F32
24 V_MADAK_F32
25 V_ADD_CO_U32
26 V_SUB_CO_U32
27 V_SUBREV_CO_U32
28 V_ADDC_CO_U32
29 V_SUBB_CO_U32
30 V_SUBBREV_CO_U32
31 V_ADD_F16
32 V_SUB_F16
33 V_SUBREV_F16
34 V_MUL_F16
35 V_MAC_F16
36 V_MADMK_F16
37 V_MADAK_F16
38 V_ADD_U16
39 V_SUB_U16
40 V_SUBREV_U16
41 V_MUL_LO_U16
42 V_LSHLREV_B16
43 V_LSHRREV_B16
44 V_ASHRREV_I16
45 V_MAX_F16
46 V_MIN_F16
47 V_MAX_U16
48 V_MAX_I16
49 V_MIN_U16
50 V_MIN_I16
51 V_LDEXP_F16
52 V_ADD_U32
53 V_SUB_U32
54 V_SUBREV_U32
59 V_FMAC_F32
61 V_XNOR_B32
13.3.2. VOP1
Format VOP1
Opcode # Name
0 V_NOP
1 V_MOV_B32
2 V_READFIRSTLANE_B32
3 V_CVT_I32_F64
4 V_CVT_F64_I32
5 V_CVT_F32_I32
6 V_CVT_F32_U32
7 V_CVT_U32_F32
8 V_CVT_I32_F32
10 V_CVT_F16_F32
11 V_CVT_F32_F16
12 V_CVT_RPI_I32_F32
13 V_CVT_FLR_I32_F32
14 V_CVT_OFF_F32_I4
15 V_CVT_F32_F64
16 V_CVT_F64_F32
17 V_CVT_F32_UBYTE0
18 V_CVT_F32_UBYTE1
19 V_CVT_F32_UBYTE2
20 V_CVT_F32_UBYTE3
21 V_CVT_U32_F64
22 V_CVT_F64_U32
23 V_TRUNC_F64
24 V_CEIL_F64
25 V_RNDNE_F64
26 V_FLOOR_F64
27 V_FRACT_F32
28 V_TRUNC_F32
29 V_CEIL_F32
30 V_RNDNE_F32
31 V_FLOOR_F32
32 V_EXP_F32
33 V_LOG_F32
34 V_RCP_F32
35 V_RCP_IFLAG_F32
36 V_RSQ_F32
37 V_RCP_F64
38 V_RSQ_F64
39 V_SQRT_F32
40 V_SQRT_F64
41 V_SIN_F32
42 V_COS_F32
43 V_NOT_B32
44 V_BFREV_B32
45 V_FFBH_U32
46 V_FFBL_B32
47 V_FFBH_I32
48 V_FREXP_EXP_I32_F64
49 V_FREXP_MANT_F64
50 V_FRACT_F64
51 V_FREXP_EXP_I32_F32
52 V_FREXP_MANT_F32
53 V_CLREXCP
55 V_SCREEN_PARTITION_4SE_B32
57 V_CVT_F16_U16
58 V_CVT_F16_I16
59 V_CVT_U16_F16
60 V_CVT_I16_F16
61 V_RCP_F16
62 V_SQRT_F16
63 V_RSQ_F16
64 V_LOG_F16
65 V_EXP_F16
66 V_FREXP_MANT_F16
67 V_FREXP_EXP_I16_F16
68 V_FLOOR_F16
69 V_CEIL_F16
70 V_TRUNC_F16
71 V_RNDNE_F16
72 V_FRACT_F16
73 V_SIN_F16
74 V_COS_F16
75 V_EXP_LEGACY_F32
76 V_LOG_LEGACY_F32
77 V_CVT_NORM_I16_F16
78 V_CVT_NORM_U16_F16
79 V_SAT_PK_U8_I16
81 V_SWAP_B32
13.3.3. VOPC
Format VOPC
Description Vector instruction taking two inputs and producing a comparison result. Can
be followed by a 32-bit literal constant. Vector comparison operations are
divided into three groups: those with sixteen compare operations (floating-point
types), those with eight (integer types), and those with a single operation (the
CLASS instructions).
The final opcode number is determined by adding the base for the opcode family plus the offset
from the compare op. Every compare instruction writes a result to VCC (for VOPC) or an SGPR
(for VOP3). Additionally, every compare instruction has a variant that also writes to the EXEC
mask. The destination of the compare result is VCC when encoded using the VOPC format, and
can be an arbitrary SGPR when encoded in the VOP3 format.
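The base-plus-offset rule can be checked against the opcode table below: the F32 compare family starts at 64, and each V_CMPX variant sits sixteen entries after its V_CMP counterpart. A small illustrative calculator (the operation ordering is taken from the float compare list, F through TRU; integer families follow the same pattern with eight operations):

```python
# Family bases for the float compare groups, read from the VOPC table below.
FAMILY_BASE = {"F16": 32, "F32": 64, "F64": 96}
FLOAT_OPS = ["F", "LT", "EQ", "LE", "GT", "LG", "GE", "O",
             "U", "NGE", "NLG", "NGT", "NLE", "NEQ", "NLT", "TRU"]

def vopc_opcode(op, ty, exec_write=False):
    """Opcode number of V_CMP_<op>_<ty>; the V_CMPX (EXEC-writing)
    variant is sixteen entries later."""
    return FAMILY_BASE[ty] + FLOAT_OPS.index(op) + (16 if exec_write else 0)
```

For example, vopc_opcode("LT", "F32") gives 65 (V_CMP_LT_F32 in the table), and vopc_opcode("LT", "F32", exec_write=True) gives 81 (V_CMPX_LT_F32).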
Comparison Operations
Floating-point compares provide sixteen operations per type, from F (op 0: D.u = 0) through TRU (op 15: D.u = 1).
Integer compares provide eight operations per type, from F (op 0: D.u = 0) through T (op 7: D.u = 1).
Opcode # Name
16 V_CMP_CLASS_F32
17 V_CMPX_CLASS_F32
18 V_CMP_CLASS_F64
19 V_CMPX_CLASS_F64
20 V_CMP_CLASS_F16
21 V_CMPX_CLASS_F16
32 V_CMP_F_F16
33 V_CMP_LT_F16
34 V_CMP_EQ_F16
35 V_CMP_LE_F16
36 V_CMP_GT_F16
37 V_CMP_LG_F16
38 V_CMP_GE_F16
39 V_CMP_O_F16
40 V_CMP_U_F16
41 V_CMP_NGE_F16
42 V_CMP_NLG_F16
43 V_CMP_NGT_F16
44 V_CMP_NLE_F16
45 V_CMP_NEQ_F16
46 V_CMP_NLT_F16
47 V_CMP_TRU_F16
48 V_CMPX_F_F16
49 V_CMPX_LT_F16
50 V_CMPX_EQ_F16
51 V_CMPX_LE_F16
52 V_CMPX_GT_F16
53 V_CMPX_LG_F16
54 V_CMPX_GE_F16
55 V_CMPX_O_F16
56 V_CMPX_U_F16
57 V_CMPX_NGE_F16
58 V_CMPX_NLG_F16
59 V_CMPX_NGT_F16
60 V_CMPX_NLE_F16
61 V_CMPX_NEQ_F16
62 V_CMPX_NLT_F16
63 V_CMPX_TRU_F16
64 V_CMP_F_F32
65 V_CMP_LT_F32
66 V_CMP_EQ_F32
67 V_CMP_LE_F32
68 V_CMP_GT_F32
69 V_CMP_LG_F32
70 V_CMP_GE_F32
71 V_CMP_O_F32
72 V_CMP_U_F32
73 V_CMP_NGE_F32
74 V_CMP_NLG_F32
75 V_CMP_NGT_F32
76 V_CMP_NLE_F32
77 V_CMP_NEQ_F32
78 V_CMP_NLT_F32
79 V_CMP_TRU_F32
80 V_CMPX_F_F32
81 V_CMPX_LT_F32
82 V_CMPX_EQ_F32
83 V_CMPX_LE_F32
84 V_CMPX_GT_F32
85 V_CMPX_LG_F32
86 V_CMPX_GE_F32
87 V_CMPX_O_F32
88 V_CMPX_U_F32
89 V_CMPX_NGE_F32
90 V_CMPX_NLG_F32
91 V_CMPX_NGT_F32
92 V_CMPX_NLE_F32
93 V_CMPX_NEQ_F32
94 V_CMPX_NLT_F32
95 V_CMPX_TRU_F32
96 V_CMP_F_F64
97 V_CMP_LT_F64
98 V_CMP_EQ_F64
99 V_CMP_LE_F64
100 V_CMP_GT_F64
101 V_CMP_LG_F64
102 V_CMP_GE_F64
103 V_CMP_O_F64
104 V_CMP_U_F64
105 V_CMP_NGE_F64
106 V_CMP_NLG_F64
107 V_CMP_NGT_F64
108 V_CMP_NLE_F64
109 V_CMP_NEQ_F64
110 V_CMP_NLT_F64
111 V_CMP_TRU_F64
112 V_CMPX_F_F64
113 V_CMPX_LT_F64
114 V_CMPX_EQ_F64
115 V_CMPX_LE_F64
116 V_CMPX_GT_F64
117 V_CMPX_LG_F64
118 V_CMPX_GE_F64
119 V_CMPX_O_F64
120 V_CMPX_U_F64
121 V_CMPX_NGE_F64
122 V_CMPX_NLG_F64
123 V_CMPX_NGT_F64
124 V_CMPX_NLE_F64
125 V_CMPX_NEQ_F64
126 V_CMPX_NLT_F64
127 V_CMPX_TRU_F64
160 V_CMP_F_I16
161 V_CMP_LT_I16
162 V_CMP_EQ_I16
163 V_CMP_LE_I16
164 V_CMP_GT_I16
165 V_CMP_NE_I16
166 V_CMP_GE_I16
167 V_CMP_T_I16
168 V_CMP_F_U16
169 V_CMP_LT_U16
170 V_CMP_EQ_U16
171 V_CMP_LE_U16
172 V_CMP_GT_U16
173 V_CMP_NE_U16
174 V_CMP_GE_U16
175 V_CMP_T_U16
176 V_CMPX_F_I16
177 V_CMPX_LT_I16
178 V_CMPX_EQ_I16
179 V_CMPX_LE_I16
180 V_CMPX_GT_I16
181 V_CMPX_NE_I16
182 V_CMPX_GE_I16
183 V_CMPX_T_I16
184 V_CMPX_F_U16
185 V_CMPX_LT_U16
186 V_CMPX_EQ_U16
187 V_CMPX_LE_U16
188 V_CMPX_GT_U16
189 V_CMPX_NE_U16
190 V_CMPX_GE_U16
191 V_CMPX_T_U16
192 V_CMP_F_I32
193 V_CMP_LT_I32
194 V_CMP_EQ_I32
195 V_CMP_LE_I32
196 V_CMP_GT_I32
197 V_CMP_NE_I32
198 V_CMP_GE_I32
199 V_CMP_T_I32
200 V_CMP_F_U32
201 V_CMP_LT_U32
202 V_CMP_EQ_U32
203 V_CMP_LE_U32
204 V_CMP_GT_U32
205 V_CMP_NE_U32
206 V_CMP_GE_U32
207 V_CMP_T_U32
208 V_CMPX_F_I32
209 V_CMPX_LT_I32
210 V_CMPX_EQ_I32
211 V_CMPX_LE_I32
212 V_CMPX_GT_I32
213 V_CMPX_NE_I32
214 V_CMPX_GE_I32
215 V_CMPX_T_I32
216 V_CMPX_F_U32
217 V_CMPX_LT_U32
218 V_CMPX_EQ_U32
219 V_CMPX_LE_U32
220 V_CMPX_GT_U32
221 V_CMPX_NE_U32
222 V_CMPX_GE_U32
223 V_CMPX_T_U32
224 V_CMP_F_I64
225 V_CMP_LT_I64
226 V_CMP_EQ_I64
227 V_CMP_LE_I64
228 V_CMP_GT_I64
229 V_CMP_NE_I64
230 V_CMP_GE_I64
231 V_CMP_T_I64
232 V_CMP_F_U64
233 V_CMP_LT_U64
234 V_CMP_EQ_U64
235 V_CMP_LE_U64
236 V_CMP_GT_U64
237 V_CMP_NE_U64
238 V_CMP_GE_U64
239 V_CMP_T_U64
240 V_CMPX_F_I64
241 V_CMPX_LT_I64
242 V_CMPX_EQ_I64
243 V_CMPX_LE_I64
244 V_CMPX_GT_I64
245 V_CMPX_NE_I64
246 V_CMPX_GE_I64
247 V_CMPX_T_I64
248 V_CMPX_F_U64
249 V_CMPX_LT_U64
250 V_CMPX_EQ_U64
251 V_CMPX_LE_U64
252 V_CMPX_GT_U64
253 V_CMPX_NE_U64
254 V_CMPX_GE_U64
255 V_CMPX_T_U64
13.3.4. VOP3A
Format VOP3A
ABS [10:8] Absolute value of input. [8] = src0, [9] = src1, [10] = src2
OPSEL [14:11] Operand select for 16-bit data. 0 = select low half, 1 = select high half. [11] =
src0, [12] = src1, [13] = src2, [14] = dest.
NEG [63:61] Negate input. [61] = src0, [62] = src1, [63] = src2
Opcode # Name
448 V_MAD_LEGACY_F32
449 V_MAD_F32
450 V_MAD_I32_I24
451 V_MAD_U32_U24
452 V_CUBEID_F32
453 V_CUBESC_F32
454 V_CUBETC_F32
455 V_CUBEMA_F32
456 V_BFE_U32
457 V_BFE_I32
458 V_BFI_B32
459 V_FMA_F32
460 V_FMA_F64
461 V_LERP_U8
462 V_ALIGNBIT_B32
463 V_ALIGNBYTE_B32
464 V_MIN3_F32
465 V_MIN3_I32
466 V_MIN3_U32
467 V_MAX3_F32
468 V_MAX3_I32
469 V_MAX3_U32
470 V_MED3_F32
471 V_MED3_I32
472 V_MED3_U32
473 V_SAD_U8
474 V_SAD_HI_U8
475 V_SAD_U16
476 V_SAD_U32
477 V_CVT_PK_U8_F32
478 V_DIV_FIXUP_F32
479 V_DIV_FIXUP_F64
482 V_DIV_FMAS_F32
483 V_DIV_FMAS_F64
484 V_MSAD_U8
485 V_QSAD_PK_U16_U8
486 V_MQSAD_PK_U16_U8
487 V_MQSAD_U32_U8
490 V_MAD_LEGACY_F16
491 V_MAD_LEGACY_U16
492 V_MAD_LEGACY_I16
493 V_PERM_B32
494 V_FMA_LEGACY_F16
495 V_DIV_FIXUP_LEGACY_F16
496 V_CVT_PKACCUM_U8_F32
497 V_MAD_U32_U16
498 V_MAD_I32_I16
499 V_XAD_U32
500 V_MIN3_F16
501 V_MIN3_I16
502 V_MIN3_U16
503 V_MAX3_F16
504 V_MAX3_I16
505 V_MAX3_U16
506 V_MED3_F16
507 V_MED3_I16
508 V_MED3_U16
509 V_LSHL_ADD_U32
510 V_ADD_LSHL_U32
511 V_ADD3_U32
512 V_LSHL_OR_B32
513 V_AND_OR_B32
514 V_OR3_B32
515 V_MAD_F16
516 V_MAD_U16
517 V_MAD_I16
518 V_FMA_F16
519 V_DIV_FIXUP_F16
628 V_INTERP_P1LL_F16
629 V_INTERP_P1LV_F16
630 V_INTERP_P2_LEGACY_F16
631 V_INTERP_P2_F16
640 V_ADD_F64
641 V_MUL_F64
642 V_MIN_F64
643 V_MAX_F64
644 V_LDEXP_F64
645 V_MUL_LO_U32
646 V_MUL_HI_U32
647 V_MUL_HI_I32
648 V_LDEXP_F32
649 V_READLANE_B32
650 V_WRITELANE_B32
651 V_BCNT_U32_B32
652 V_MBCNT_LO_U32_B32
653 V_MBCNT_HI_U32_B32
655 V_LSHLREV_B64
656 V_LSHRREV_B64
657 V_ASHRREV_I64
658 V_TRIG_PREOP_F64
659 V_BFM_B32
660 V_CVT_PKNORM_I16_F32
661 V_CVT_PKNORM_U16_F32
662 V_CVT_PKRTZ_F16_F32
663 V_CVT_PK_U16_U32
664 V_CVT_PK_I16_I32
665 V_CVT_PKNORM_I16_F16
666 V_CVT_PKNORM_U16_F16
668 V_ADD_I32
669 V_SUB_I32
670 V_ADD_I16
671 V_SUB_I16
672 V_PACK_B32_F16
13.3.5. VOP3B
Format VOP3B
Description Vector ALU format with three operands and a scalar result. This encoding
is used only for a few opcodes.
This encoding allows specifying a unique scalar destination, and is used only for the opcodes
listed below. All other opcodes use VOP3A.
• V_ADD_CO_U32
• V_SUB_CO_U32
• V_SUBREV_CO_U32
• V_ADDC_CO_U32
• V_SUBB_CO_U32
• V_SUBBREV_CO_U32
• V_DIV_SCALE_F32
• V_DIV_SCALE_F64
• V_MAD_U64_U32
• V_MAD_I64_I32
NEG [63:61] Negate input. [61] = src0, [62] = src1, [63] = src2
Opcode # Name
480 V_DIV_SCALE_F32
481 V_DIV_SCALE_F64
488 V_MAD_U64_U32
489 V_MAD_I64_I32
13.3.6. VOP3P
Format VOP3P
Description Vector ALU format taking one, two, or three pairs of 16-bit inputs and
producing two 16-bit outputs (packed into one dword).
OPSEL [13:11] Select low or high for low sources 0=[11], 1=[12], 2=[13].
OPSEL_HI2 [14] Select low or high for high sources 0=[14], 1=[60], 2=[59].
NEG [63:61] Negate input for low 16-bits of sources. [61] = src0, [62] = src1, [63] = src2
Opcode # Name
0 V_PK_MAD_I16
1 V_PK_MUL_LO_U16
2 V_PK_ADD_I16
3 V_PK_SUB_I16
4 V_PK_LSHLREV_B16
5 V_PK_LSHRREV_B16
6 V_PK_ASHRREV_I16
7 V_PK_MAX_I16
8 V_PK_MIN_I16
9 V_PK_MAD_U16
10 V_PK_ADD_U16
11 V_PK_SUB_U16
12 V_PK_MAX_U16
13 V_PK_MIN_U16
14 V_PK_FMA_F16
15 V_PK_ADD_F16
16 V_PK_MUL_F16
17 V_PK_MIN_F16
18 V_PK_MAX_F16
32 V_MAD_MIX_F32
33 V_MAD_MIXLO_F16
34 V_MAD_MIXHI_F16
35 V_DOT2_F32_F16
38 V_DOT2_I32_I16
39 V_DOT2_U32_U16
40 V_DOT4_I32_I8
41 V_DOT4_U32_U8
42 V_DOT8_I32_I4
43 V_DOT8_U32_U4
13.3.7. SDWA
Format SDWA
Description Sub-Dword Addressing. This is a second dword which can follow VOP1 or
VOP2 instructions (in place of a literal constant) to control selection of sub-
dword (16-bit) operands. Use of SDWA is indicated by assigning the SRC0
field to SDWA; the actual VGPR used as source zero is then determined by
the SDWA instruction word.
DST_U [44:43] Destination format: what to do with the bits in the VGPR that are not selected
by DST_SEL:
0 = pad with zeros
1 = sign extend upper / zero lower
2 = preserve (don’t modify)
3 = reserved
OMOD [47:46] Output modifiers (see VOP3). [46] = low half, [47] = high half
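As an illustration of the DST_U modes, a minimal Python model of merging a 16-bit result into a 32-bit VGPR (this sketch assumes DST_SEL selects the low word; it is not the hardware definition):

```python
def merge_sdwa_result(old_vgpr, result16, dst_u):
    """Resolve the upper 16 bits of a 32-bit VGPR after a 16-bit SDWA
    result is written to the low word, per the DST_U field."""
    r = result16 & 0xFFFF
    if dst_u == 0:                              # pad with zeros
        return r
    if dst_u == 1:                              # sign extend upper / zero lower
        return (0xFFFF0000 | r) if (r & 0x8000) else r
    if dst_u == 2:                              # preserve untouched bits
        return (old_vgpr & 0xFFFF0000) | r
    raise ValueError("DST_U = 3 is reserved")
```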
13.3.8. SDWAB
Format SDWAB
Description Sub-Dword Addressing. This is a second dword which can follow VOPC
instructions (in place of a literal constant) to control selection of sub-dword
(16-bit) operands. Use of SDWA is indicated by assigning the SRC0 field to
SDWA; the actual VGPR used as source zero is then determined by the
SDWA instruction word. This version has a scalar destination.
13.3.9. DPP
Format DPP
Description Data Parallel Primitives. This is a second dword which can follow VOP1,
VOP2 or VOPC instructions (in place of a literal constant) to control
selection of data from other lanes.
BC [51] Bounds Control: 0 = do not write when source is out of range, 1 = write.
BANK_MASK [59:56] Bank Mask. Applies to the VGPR destination write only; it does not affect
the thread mask when fetching source VGPR data.
27==0: lanes[12:15, 28:31, 44:47, 60:63] are disabled
26==0: lanes[8:11, 24:27, 40:43, 56:59] are disabled
25==0: lanes[4:7, 20:23, 36:39, 52:55] are disabled
24==0: lanes[0:3, 16:19, 32:35, 48:51] are disabled
Notice: the term "bank" here is not the same as we used for the VGPR bank.
ROW_MASK [63:60] Row Mask. Applies to the VGPR destination write only; it does not affect
the thread mask when fetching source VGPR data.
31==0: lanes[63:48] are disabled (wave 64 only)
30==0: lanes[47:32] are disabled (wave 64 only)
29==0: lanes[31:16] are disabled
28==0: lanes[15:0] are disabled
DPP_ROW_SL* 101-10F if ((n&0xf) < (16 - cntl[3:0])) pix[n].srca = pix[n + cntl[3:0]].srca; else use bound_cntl. Row shift left by 1-15 threads.
DPP_ROW_SR* 111-11F if ((n&0xf) >= cntl[3:0]) pix[n].srca = pix[n - cntl[3:0]].srca; else use bound_cntl. Row shift right by 1-15 threads.
DPP_ROW_RR* 121-12F if ((n&0xf) >= cntl[3:0]) pix[n].srca = pix[n - cntl[3:0]].srca; else pix[n].srca = pix[n + 16 - cntl[3:0]].srca. Row rotate right by 1-15 threads.
DPP_WF_SL1* 130 if (n < 63) pix[n].srca = pix[n+1].srca; else use bound_cntl. Wavefront left shift by 1 thread.
DPP_WF_RL1* 134 if (n < 63) pix[n].srca = pix[n+1].srca; else pix[n].srca = pix[0].srca. Wavefront left rotate by 1 thread.
DPP_WF_SR1* 138 if (n > 0) pix[n].srca = pix[n-1].srca; else use bound_cntl. Wavefront right shift by 1 thread.
DPP_WF_RR1* 13C if (n > 0) pix[n].srca = pix[n-1].srca; else pix[n].srca = pix[63].srca. Wavefront right rotate by 1 thread.
DPP_ROW_BCAST15* 142 if (n > 15) pix[n].srca = pix[(n & 0x30) - 1].srca. Broadcast 15th thread of each row to next row.
DPP_ROW_BCAST31* 143 if (n > 31) pix[n].srca = pix[(n & 0x20) - 1].srca. Broadcast thread 31 to rows 2 and 3.
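The row-shift entries above can be modeled directly. A sketch of DPP_ROW_SL over a 64-lane wavefront (bound_cntl handling is simplified to substituting a fixed value for out-of-range lanes):

```python
def dpp_row_shl(src, shift, bound_cntl_value=0):
    """DPP_ROW_SL: shift source values left by 1-15 lanes within each
    row of 16. Lanes whose source would fall outside their row take
    bound_cntl_value (a stand-in for the BC-field behavior)."""
    out = []
    for n in range(len(src)):                  # 64 lanes in a full wavefront
        if (n & 0xF) < 16 - shift:
            out.append(src[n + shift])         # pix[n].srca = pix[n+cntl].srca
        else:
            out.append(bound_cntl_value)
    return out
```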
13.4.1. VINTRP
Format VINTRP
OP [17:16] Opcode:
0: v_interp_p1_f32 : VDST = P10 * VSRC + P0
1: v_interp_p2_f32: VDST = P20 * VSRC + VDST
2: v_interp_mov_f32: VDST = (P0, P10 or P20 selected by VSRC[1:0])
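The two interpolation passes combine as VDST = P0 + i·P10 + j·P20 for barycentric coordinates (i, j). A minimal sketch:

```python
def interp_attr(p0, p10, p20, i, j):
    """Two-pass parameter interpolation as performed by the
    v_interp_p1_f32 / v_interp_p2_f32 pair."""
    vdst = p10 * i + p0       # v_interp_p1_f32 (VSRC = i)
    vdst = p20 * j + vdst     # v_interp_p2_f32 (VSRC = j)
    return vdst
```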
13.5.1. DS
OFFSET1 [15:8] Second address offset. For some opcodes this is concatenated with OFFSET0.
Opcode # Name
0 DS_ADD_U32
1 DS_SUB_U32
2 DS_RSUB_U32
3 DS_INC_U32
4 DS_DEC_U32
5 DS_MIN_I32
6 DS_MAX_I32
7 DS_MIN_U32
8 DS_MAX_U32
9 DS_AND_B32
10 DS_OR_B32
11 DS_XOR_B32
12 DS_MSKOR_B32
13 DS_WRITE_B32
14 DS_WRITE2_B32
15 DS_WRITE2ST64_B32
16 DS_CMPST_B32
17 DS_CMPST_F32
18 DS_MIN_F32
19 DS_MAX_F32
20 DS_NOP
21 DS_ADD_F32
29 DS_WRITE_ADDTID_B32
30 DS_WRITE_B8
31 DS_WRITE_B16
32 DS_ADD_RTN_U32
33 DS_SUB_RTN_U32
34 DS_RSUB_RTN_U32
35 DS_INC_RTN_U32
36 DS_DEC_RTN_U32
37 DS_MIN_RTN_I32
38 DS_MAX_RTN_I32
39 DS_MIN_RTN_U32
40 DS_MAX_RTN_U32
41 DS_AND_RTN_B32
42 DS_OR_RTN_B32
43 DS_XOR_RTN_B32
44 DS_MSKOR_RTN_B32
45 DS_WRXCHG_RTN_B32
46 DS_WRXCHG2_RTN_B32
47 DS_WRXCHG2ST64_RTN_B32
48 DS_CMPST_RTN_B32
49 DS_CMPST_RTN_F32
50 DS_MIN_RTN_F32
51 DS_MAX_RTN_F32
52 DS_WRAP_RTN_B32
53 DS_ADD_RTN_F32
54 DS_READ_B32
55 DS_READ2_B32
56 DS_READ2ST64_B32
57 DS_READ_I8
58 DS_READ_U8
59 DS_READ_I16
60 DS_READ_U16
61 DS_SWIZZLE_B32
62 DS_PERMUTE_B32
63 DS_BPERMUTE_B32
64 DS_ADD_U64
65 DS_SUB_U64
66 DS_RSUB_U64
67 DS_INC_U64
68 DS_DEC_U64
69 DS_MIN_I64
70 DS_MAX_I64
71 DS_MIN_U64
72 DS_MAX_U64
73 DS_AND_B64
74 DS_OR_B64
75 DS_XOR_B64
76 DS_MSKOR_B64
77 DS_WRITE_B64
78 DS_WRITE2_B64
79 DS_WRITE2ST64_B64
80 DS_CMPST_B64
81 DS_CMPST_F64
82 DS_MIN_F64
83 DS_MAX_F64
84 DS_WRITE_B8_D16_HI
85 DS_WRITE_B16_D16_HI
86 DS_READ_U8_D16
87 DS_READ_U8_D16_HI
88 DS_READ_I8_D16
89 DS_READ_I8_D16_HI
90 DS_READ_U16_D16
91 DS_READ_U16_D16_HI
96 DS_ADD_RTN_U64
97 DS_SUB_RTN_U64
98 DS_RSUB_RTN_U64
99 DS_INC_RTN_U64
100 DS_DEC_RTN_U64
101 DS_MIN_RTN_I64
102 DS_MAX_RTN_I64
103 DS_MIN_RTN_U64
104 DS_MAX_RTN_U64
105 DS_AND_RTN_B64
106 DS_OR_RTN_B64
107 DS_XOR_RTN_B64
108 DS_MSKOR_RTN_B64
109 DS_WRXCHG_RTN_B64
110 DS_WRXCHG2_RTN_B64
111 DS_WRXCHG2ST64_RTN_B64
112 DS_CMPST_RTN_B64
113 DS_CMPST_RTN_F64
114 DS_MIN_RTN_F64
115 DS_MAX_RTN_F64
118 DS_READ_B64
119 DS_READ2_B64
120 DS_READ2ST64_B64
126 DS_CONDXCHG32_RTN_B64
128 DS_ADD_SRC2_U32
129 DS_SUB_SRC2_U32
130 DS_RSUB_SRC2_U32
131 DS_INC_SRC2_U32
132 DS_DEC_SRC2_U32
133 DS_MIN_SRC2_I32
134 DS_MAX_SRC2_I32
135 DS_MIN_SRC2_U32
136 DS_MAX_SRC2_U32
137 DS_AND_SRC2_B32
138 DS_OR_SRC2_B32
139 DS_XOR_SRC2_B32
141 DS_WRITE_SRC2_B32
146 DS_MIN_SRC2_F32
147 DS_MAX_SRC2_F32
149 DS_ADD_SRC2_F32
152 DS_GWS_SEMA_RELEASE_ALL
153 DS_GWS_INIT
154 DS_GWS_SEMA_V
155 DS_GWS_SEMA_BR
156 DS_GWS_SEMA_P
157 DS_GWS_BARRIER
182 DS_READ_ADDTID_B32
189 DS_CONSUME
190 DS_APPEND
191 DS_ORDERED_COUNT
192 DS_ADD_SRC2_U64
193 DS_SUB_SRC2_U64
194 DS_RSUB_SRC2_U64
195 DS_INC_SRC2_U64
196 DS_DEC_SRC2_U64
197 DS_MIN_SRC2_I64
198 DS_MAX_SRC2_I64
199 DS_MIN_SRC2_U64
200 DS_MAX_SRC2_U64
201 DS_AND_SRC2_B64
202 DS_OR_SRC2_B64
203 DS_XOR_SRC2_B64
205 DS_WRITE_SRC2_B64
210 DS_MIN_SRC2_F64
211 DS_MAX_SRC2_F64
222 DS_WRITE_B96
223 DS_WRITE_B128
254 DS_READ_B96
255 DS_READ_B128
MTBUF
typed buffer access (data type is defined by the instruction)
MUBUF
untyped buffer access (data type is defined by the buffer / resource-constant)
13.6.1. MTBUF
Format MTBUF
OFFEN [12] 1 = enable offset VGPR, 0 = use zero for address offset
IDXEN [13] 1 = enable index VGPR, 0 = use zero for address index
GLC [14] 0 = normal, 1 = globally coherent (bypass L0 cache) or for atomics, return pre-
op value to VGPR.
VADDR [39:32] Address of VGPR to supply first component of address (offset or index). When
both index and offset are used, index is in the first VGPR and offset in the
second.
VDATA [47:40] Address of VGPR to supply first component of write data or receive first
component of read-data.
Opcode # Name
0 TBUFFER_LOAD_FORMAT_X
1 TBUFFER_LOAD_FORMAT_XY
2 TBUFFER_LOAD_FORMAT_XYZ
3 TBUFFER_LOAD_FORMAT_XYZW
4 TBUFFER_STORE_FORMAT_X
5 TBUFFER_STORE_FORMAT_XY
6 TBUFFER_STORE_FORMAT_XYZ
7 TBUFFER_STORE_FORMAT_XYZW
8 TBUFFER_LOAD_FORMAT_D16_X
9 TBUFFER_LOAD_FORMAT_D16_XY
10 TBUFFER_LOAD_FORMAT_D16_XYZ
11 TBUFFER_LOAD_FORMAT_D16_XYZW
12 TBUFFER_STORE_FORMAT_D16_X
13 TBUFFER_STORE_FORMAT_D16_XY
14 TBUFFER_STORE_FORMAT_D16_XYZ
15 TBUFFER_STORE_FORMAT_D16_XYZW
13.6.2. MUBUF
Format MUBUF
OFFEN [12] 1 = enable offset VGPR, 0 = use zero for address offset
IDXEN [13] 1 = enable index VGPR, 0 = use zero for address index
GLC [14] 0 = normal, 1 = globally coherent (bypass L0 cache) or for atomics, return pre-
op value to VGPR.
LDS [16] 0 = normal, 1 = transfer data between LDS and memory instead of VGPRs and
memory.
VADDR [39:32] Address of VGPR to supply first component of address (offset or index). When
both index and offset are used, index is in the first VGPR and offset in the
second.
VDATA [47:40] Address of VGPR to supply first component of write data or receive first
component of read-data.
Opcode # Name
0 BUFFER_LOAD_FORMAT_X
1 BUFFER_LOAD_FORMAT_XY
2 BUFFER_LOAD_FORMAT_XYZ
3 BUFFER_LOAD_FORMAT_XYZW
4 BUFFER_STORE_FORMAT_X
5 BUFFER_STORE_FORMAT_XY
6 BUFFER_STORE_FORMAT_XYZ
7 BUFFER_STORE_FORMAT_XYZW
8 BUFFER_LOAD_FORMAT_D16_X
9 BUFFER_LOAD_FORMAT_D16_XY
10 BUFFER_LOAD_FORMAT_D16_XYZ
11 BUFFER_LOAD_FORMAT_D16_XYZW
12 BUFFER_STORE_FORMAT_D16_X
13 BUFFER_STORE_FORMAT_D16_XY
14 BUFFER_STORE_FORMAT_D16_XYZ
15 BUFFER_STORE_FORMAT_D16_XYZW
16 BUFFER_LOAD_UBYTE
17 BUFFER_LOAD_SBYTE
18 BUFFER_LOAD_USHORT
19 BUFFER_LOAD_SSHORT
20 BUFFER_LOAD_DWORD
21 BUFFER_LOAD_DWORDX2
22 BUFFER_LOAD_DWORDX3
23 BUFFER_LOAD_DWORDX4
24 BUFFER_STORE_BYTE
25 BUFFER_STORE_BYTE_D16_HI
26 BUFFER_STORE_SHORT
27 BUFFER_STORE_SHORT_D16_HI
28 BUFFER_STORE_DWORD
29 BUFFER_STORE_DWORDX2
30 BUFFER_STORE_DWORDX3
31 BUFFER_STORE_DWORDX4
32 BUFFER_LOAD_UBYTE_D16
33 BUFFER_LOAD_UBYTE_D16_HI
34 BUFFER_LOAD_SBYTE_D16
35 BUFFER_LOAD_SBYTE_D16_HI
36 BUFFER_LOAD_SHORT_D16
37 BUFFER_LOAD_SHORT_D16_HI
38 BUFFER_LOAD_FORMAT_D16_HI_X
39 BUFFER_STORE_FORMAT_D16_HI_X
61 BUFFER_STORE_LDS_DWORD
62 BUFFER_WBINVL1
63 BUFFER_WBINVL1_VOL
64 BUFFER_ATOMIC_SWAP
65 BUFFER_ATOMIC_CMPSWAP
66 BUFFER_ATOMIC_ADD
67 BUFFER_ATOMIC_SUB
68 BUFFER_ATOMIC_SMIN
69 BUFFER_ATOMIC_UMIN
70 BUFFER_ATOMIC_SMAX
71 BUFFER_ATOMIC_UMAX
72 BUFFER_ATOMIC_AND
73 BUFFER_ATOMIC_OR
74 BUFFER_ATOMIC_XOR
75 BUFFER_ATOMIC_INC
76 BUFFER_ATOMIC_DEC
96 BUFFER_ATOMIC_SWAP_X2
97 BUFFER_ATOMIC_CMPSWAP_X2
98 BUFFER_ATOMIC_ADD_X2
99 BUFFER_ATOMIC_SUB_X2
100 BUFFER_ATOMIC_SMIN_X2
101 BUFFER_ATOMIC_UMIN_X2
102 BUFFER_ATOMIC_SMAX_X2
103 BUFFER_ATOMIC_UMAX_X2
104 BUFFER_ATOMIC_AND_X2
105 BUFFER_ATOMIC_OR_X2
106 BUFFER_ATOMIC_XOR_X2
107 BUFFER_ATOMIC_INC_X2
108 BUFFER_ATOMIC_DEC_X2
13.7.1. MIMG
Format MIMG
UNRM [12] Force address to be un-normalized. Must be set to 1 for Image stores &
atomics.
GLC [13] 0 = normal, 1 = globally coherent (bypass L0 cache) or for atomics, return pre-
op value to VGPR.
A16 [15] Address components are 16 bits (instead of the usual 32 bits).
When set, all address components are 16 bits (packed two per dword),
except:
Texel offsets (three 6-bit UINT values packed into 1 dword)
PCF reference (for "_C" instructions)
Address components are 16-bit UINT for image ops without a sampler, and
16-bit float with a sampler.
LWE [17] LOD Warning Enable. When set to 1, a texture fetch may return
"LOD_CLAMPED = 1".
OP [0],[24:18] Opcode. See table below. (combine bits zero and 18-24 to form opcode).
VADDR [39:32] Address of VGPR to supply first component of address (offset or index). When
both index and offset are used, index is in the first VGPR and offset in the
second.
VDATA [47:40] Address of VGPR to supply first component of write data or receive first
component of read-data.
Opcode # Name
0 IMAGE_LOAD
1 IMAGE_LOAD_MIP
2 IMAGE_LOAD_PCK
3 IMAGE_LOAD_PCK_SGN
4 IMAGE_LOAD_MIP_PCK
5 IMAGE_LOAD_MIP_PCK_SGN
8 IMAGE_STORE
9 IMAGE_STORE_MIP
10 IMAGE_STORE_PCK
11 IMAGE_STORE_MIP_PCK
14 IMAGE_GET_RESINFO
16 IMAGE_ATOMIC_SWAP
17 IMAGE_ATOMIC_CMPSWAP
18 IMAGE_ATOMIC_ADD
19 IMAGE_ATOMIC_SUB
20 IMAGE_ATOMIC_SMIN
21 IMAGE_ATOMIC_UMIN
22 IMAGE_ATOMIC_SMAX
23 IMAGE_ATOMIC_UMAX
24 IMAGE_ATOMIC_AND
25 IMAGE_ATOMIC_OR
26 IMAGE_ATOMIC_XOR
27 IMAGE_ATOMIC_INC
28 IMAGE_ATOMIC_DEC
32 IMAGE_SAMPLE
33 IMAGE_SAMPLE_CL
34 IMAGE_SAMPLE_D
35 IMAGE_SAMPLE_D_CL
36 IMAGE_SAMPLE_L
37 IMAGE_SAMPLE_B
38 IMAGE_SAMPLE_B_CL
39 IMAGE_SAMPLE_LZ
40 IMAGE_SAMPLE_C
41 IMAGE_SAMPLE_C_CL
42 IMAGE_SAMPLE_C_D
43 IMAGE_SAMPLE_C_D_CL
44 IMAGE_SAMPLE_C_L
45 IMAGE_SAMPLE_C_B
46 IMAGE_SAMPLE_C_B_CL
47 IMAGE_SAMPLE_C_LZ
48 IMAGE_SAMPLE_O
49 IMAGE_SAMPLE_CL_O
50 IMAGE_SAMPLE_D_O
51 IMAGE_SAMPLE_D_CL_O
52 IMAGE_SAMPLE_L_O
53 IMAGE_SAMPLE_B_O
54 IMAGE_SAMPLE_B_CL_O
55 IMAGE_SAMPLE_LZ_O
56 IMAGE_SAMPLE_C_O
57 IMAGE_SAMPLE_C_CL_O
58 IMAGE_SAMPLE_C_D_O
59 IMAGE_SAMPLE_C_D_CL_O
60 IMAGE_SAMPLE_C_L_O
61 IMAGE_SAMPLE_C_B_O
62 IMAGE_SAMPLE_C_B_CL_O
63 IMAGE_SAMPLE_C_LZ_O
64 IMAGE_GATHER4
65 IMAGE_GATHER4_CL
66 IMAGE_GATHER4H
68 IMAGE_GATHER4_L
69 IMAGE_GATHER4_B
70 IMAGE_GATHER4_B_CL
71 IMAGE_GATHER4_LZ
72 IMAGE_GATHER4_C
73 IMAGE_GATHER4_C_CL
74 IMAGE_GATHER4H_PCK
75 IMAGE_GATHER8H_PCK
76 IMAGE_GATHER4_C_L
77 IMAGE_GATHER4_C_B
78 IMAGE_GATHER4_C_B_CL
79 IMAGE_GATHER4_C_LZ
80 IMAGE_GATHER4_O
81 IMAGE_GATHER4_CL_O
84 IMAGE_GATHER4_L_O
85 IMAGE_GATHER4_B_O
86 IMAGE_GATHER4_B_CL_O
87 IMAGE_GATHER4_LZ_O
88 IMAGE_GATHER4_C_O
89 IMAGE_GATHER4_C_CL_O
92 IMAGE_GATHER4_C_L_O
93 IMAGE_GATHER4_C_B_O
94 IMAGE_GATHER4_C_B_CL_O
95 IMAGE_GATHER4_C_LZ_O
96 IMAGE_GET_LOD
104 IMAGE_SAMPLE_CD
105 IMAGE_SAMPLE_CD_CL
106 IMAGE_SAMPLE_C_CD
107 IMAGE_SAMPLE_C_CD_CL
108 IMAGE_SAMPLE_CD_O
109 IMAGE_SAMPLE_CD_CL_O
110 IMAGE_SAMPLE_C_CD_O
111 IMAGE_SAMPLE_C_CD_CL_O
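The A16 packing described for the MIMG format above (two 16-bit address components per dword) can be sketched as follows. This is an illustrative model, assuming the first component occupies the low half-word of each dword; the function name and layout choice are not part of the ISA.

```python
def pack_a16_components(components):
    """Pack 16-bit address components two per dword, as A16=1 implies.

    `components` are 16-bit encodings: UINT for image ops without a
    sampler, half-precision float bit patterns with a sampler.
    Assumption: the first component goes in the low half-word.
    """
    # Pad to an even count so every dword is fully populated.
    vals = list(components) + [0] * (len(components) % 2)
    dwords = []
    for lo, hi in zip(vals[0::2], vals[1::2]):
        dwords.append(((hi & 0xFFFF) << 16) | (lo & 0xFFFF))
    return dwords

# Two components (x, y) occupy a single address dword.
assert pack_a16_components([0x1234, 0xABCD]) == [0xABCD1234]
```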
The microcode format is identical for each, and only the value of the SEG (segment) field differs.
13.8.1. FLAT
Format FLAT
LDS [13] 0 = normal, 1 = transfer data between LDS and memory instead of VGPRs and
memory.
GLC [16] 0 = normal, 1 = globally coherent (bypass L0 cache) or, for atomics, return the
pre-op value to a VGPR.
OP [24:18] Opcode. See tables below for FLAT, SCRATCH and GLOBAL opcodes.
ADDR [39:32] VGPR which holds the address or offset. For 64-bit addresses, ADDR holds the
LSBs and ADDR+1 holds the MSBs. For an offset, a single VGPR holds a 32-bit
unsigned offset.
For FLAT_*: specifies an address.
For GLOBAL_* and SCRATCH_* when SADDR is 0x7f: specifies an address.
For GLOBAL_* and SCRATCH_* when SADDR is not 0x7f: specifies an offset.
SADDR [54:48] SGPR which provides an address or offset (unsigned). Set this field to
0x7F to disable its use.
The meaning of this field differs by instruction type:
FLAT: unused.
SCRATCH: use an SGPR for the address instead of a VGPR.
GLOBAL: the SGPR provides a base address and the VGPR provides a 32-bit
byte offset.
NV [55] Non-Volatile.
VDST [63:56] Destination VGPR for data returned from memory to VGPRs.
Opcode # Name
16 FLAT_LOAD_UBYTE
17 FLAT_LOAD_SBYTE
18 FLAT_LOAD_USHORT
19 FLAT_LOAD_SSHORT
20 FLAT_LOAD_DWORD
21 FLAT_LOAD_DWORDX2
22 FLAT_LOAD_DWORDX3
23 FLAT_LOAD_DWORDX4
24 FLAT_STORE_BYTE
25 FLAT_STORE_BYTE_D16_HI
26 FLAT_STORE_SHORT
27 FLAT_STORE_SHORT_D16_HI
28 FLAT_STORE_DWORD
29 FLAT_STORE_DWORDX2
30 FLAT_STORE_DWORDX3
31 FLAT_STORE_DWORDX4
32 FLAT_LOAD_UBYTE_D16
33 FLAT_LOAD_UBYTE_D16_HI
34 FLAT_LOAD_SBYTE_D16
35 FLAT_LOAD_SBYTE_D16_HI
36 FLAT_LOAD_SHORT_D16
37 FLAT_LOAD_SHORT_D16_HI
64 FLAT_ATOMIC_SWAP
65 FLAT_ATOMIC_CMPSWAP
66 FLAT_ATOMIC_ADD
67 FLAT_ATOMIC_SUB
68 FLAT_ATOMIC_SMIN
69 FLAT_ATOMIC_UMIN
70 FLAT_ATOMIC_SMAX
71 FLAT_ATOMIC_UMAX
72 FLAT_ATOMIC_AND
73 FLAT_ATOMIC_OR
74 FLAT_ATOMIC_XOR
75 FLAT_ATOMIC_INC
76 FLAT_ATOMIC_DEC
96 FLAT_ATOMIC_SWAP_X2
97 FLAT_ATOMIC_CMPSWAP_X2
98 FLAT_ATOMIC_ADD_X2
99 FLAT_ATOMIC_SUB_X2
100 FLAT_ATOMIC_SMIN_X2
101 FLAT_ATOMIC_UMIN_X2
102 FLAT_ATOMIC_SMAX_X2
103 FLAT_ATOMIC_UMAX_X2
104 FLAT_ATOMIC_AND_X2
105 FLAT_ATOMIC_OR_X2
106 FLAT_ATOMIC_XOR_X2
107 FLAT_ATOMIC_INC_X2
108 FLAT_ATOMIC_DEC_X2
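The ADDR/SADDR addressing rules above can be sketched as a simplified model of GLOBAL_* address formation. Function and variable names here are illustrative, not part of the ISA, and the model ignores alignment and range checks.

```python
SADDR_OFF = 0x7F  # SADDR value that disables the scalar operand

def global_effective_address(vgpr_addr, saddr, sgpr_base=None):
    """Simplified model of GLOBAL_* addressing.

    If SADDR is 0x7F, a VGPR pair supplies a full 64-bit address
    (ADDR = LSBs, ADDR+1 = MSBs). Otherwise the SGPR pair supplies
    a 64-bit base and a single VGPR supplies a 32-bit byte offset.
    """
    if saddr == SADDR_OFF:
        lo, hi = vgpr_addr               # (ADDR, ADDR+1) contents
        return (hi << 32) | lo
    return sgpr_base + (vgpr_addr & 0xFFFFFFFF)

# VGPR-only mode: 64-bit address split across two VGPRs.
assert global_effective_address((0x1000, 0x2), SADDR_OFF) == 0x2_0000_1000
# SGPR base plus VGPR byte offset.
assert global_effective_address(0x40, 0x10, sgpr_base=0x8000_0000) == 0x8000_0040
```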
13.8.2. GLOBAL
Table 93. GLOBAL Opcodes
Opcode # Name
16 GLOBAL_LOAD_UBYTE
17 GLOBAL_LOAD_SBYTE
18 GLOBAL_LOAD_USHORT
19 GLOBAL_LOAD_SSHORT
20 GLOBAL_LOAD_DWORD
21 GLOBAL_LOAD_DWORDX2
22 GLOBAL_LOAD_DWORDX3
23 GLOBAL_LOAD_DWORDX4
24 GLOBAL_STORE_BYTE
25 GLOBAL_STORE_BYTE_D16_HI
26 GLOBAL_STORE_SHORT
27 GLOBAL_STORE_SHORT_D16_HI
28 GLOBAL_STORE_DWORD
29 GLOBAL_STORE_DWORDX2
30 GLOBAL_STORE_DWORDX3
31 GLOBAL_STORE_DWORDX4
32 GLOBAL_LOAD_UBYTE_D16
33 GLOBAL_LOAD_UBYTE_D16_HI
34 GLOBAL_LOAD_SBYTE_D16
35 GLOBAL_LOAD_SBYTE_D16_HI
36 GLOBAL_LOAD_SHORT_D16
37 GLOBAL_LOAD_SHORT_D16_HI
64 GLOBAL_ATOMIC_SWAP
65 GLOBAL_ATOMIC_CMPSWAP
66 GLOBAL_ATOMIC_ADD
67 GLOBAL_ATOMIC_SUB
68 GLOBAL_ATOMIC_SMIN
69 GLOBAL_ATOMIC_UMIN
70 GLOBAL_ATOMIC_SMAX
71 GLOBAL_ATOMIC_UMAX
72 GLOBAL_ATOMIC_AND
73 GLOBAL_ATOMIC_OR
74 GLOBAL_ATOMIC_XOR
75 GLOBAL_ATOMIC_INC
76 GLOBAL_ATOMIC_DEC
96 GLOBAL_ATOMIC_SWAP_X2
97 GLOBAL_ATOMIC_CMPSWAP_X2
98 GLOBAL_ATOMIC_ADD_X2
99 GLOBAL_ATOMIC_SUB_X2
100 GLOBAL_ATOMIC_SMIN_X2
101 GLOBAL_ATOMIC_UMIN_X2
102 GLOBAL_ATOMIC_SMAX_X2
103 GLOBAL_ATOMIC_UMAX_X2
104 GLOBAL_ATOMIC_AND_X2
105 GLOBAL_ATOMIC_OR_X2
106 GLOBAL_ATOMIC_XOR_X2
107 GLOBAL_ATOMIC_INC_X2
108 GLOBAL_ATOMIC_DEC_X2
13.8.3. SCRATCH
Table 94. SCRATCH Opcodes
Opcode # Name
16 SCRATCH_LOAD_UBYTE
17 SCRATCH_LOAD_SBYTE
18 SCRATCH_LOAD_USHORT
19 SCRATCH_LOAD_SSHORT
20 SCRATCH_LOAD_DWORD
21 SCRATCH_LOAD_DWORDX2
22 SCRATCH_LOAD_DWORDX3
23 SCRATCH_LOAD_DWORDX4
24 SCRATCH_STORE_BYTE
25 SCRATCH_STORE_BYTE_D16_HI
26 SCRATCH_STORE_SHORT
27 SCRATCH_STORE_SHORT_D16_HI
28 SCRATCH_STORE_DWORD
29 SCRATCH_STORE_DWORDX2
30 SCRATCH_STORE_DWORDX3
31 SCRATCH_STORE_DWORDX4
32 SCRATCH_LOAD_UBYTE_D16
33 SCRATCH_LOAD_UBYTE_D16_HI
34 SCRATCH_LOAD_SBYTE_D16
35 SCRATCH_LOAD_SBYTE_D16_HI
36 SCRATCH_LOAD_SHORT_D16
37 SCRATCH_LOAD_SHORT_D16_HI
13.9.1. EXP
Format EXP
DONE [11] Indicates that this is the last export from the shader. Used only for Position and
Pixel/color data.
VM [12] 1 = the exec mask IS the valid mask for this export. Can be sent multiple times,
but must be sent at least once per pixel shader. This bit is used only by pixel
shaders.