CUDA C++ Programming Guide
Release 12.4
NVIDIA
4 Document Structure 9
5 Programming Model 11
5.1 Kernels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5.2 Thread Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
5.2.1 Thread Block Clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
5.3 Memory Hierarchy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.4 Heterogeneous Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.5 Asynchronous SIMT Programming Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.5.1 Asynchronous Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
5.6 Compute Capability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
6 Programming Interface 21
6.1 Compilation with NVCC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
6.1.1 Compilation Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.1.1.1 Offline Compilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.1.1.2 Just-in-Time Compilation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.1.2 Binary Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
6.1.3 PTX Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.1.4 Application Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
6.1.5 C++ Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.1.6 64-Bit Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.2 CUDA Runtime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
6.2.1 Initialization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
6.2.2 Device Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6.2.3 Device Memory L2 Access Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
6.2.3.1 L2 cache Set-Aside for Persisting Accesses . . . . . . . . . . . . . . . . . . . . . . . 29
6.2.3.2 L2 Policy for Persisting Accesses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
6.2.3.3 L2 Access Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.2.3.4 L2 Persistence Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
6.2.3.5 Reset L2 Access to Normal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
6.2.3.6 Manage Utilization of L2 set-aside cache . . . . . . . . . . . . . . . . . . . . . . . . 33
6.2.3.7 Query L2 cache Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
6.2.3.8 Control L2 Cache Set-Aside Size for Persisting Memory Access . . . . . . . . . . 33
6.2.4 Shared Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
6.2.5 Distributed Shared Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
6.2.6 Page-Locked Host Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
6.2.6.1 Portable Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.2.6.2 Write-Combining Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.2.6.3 Mapped Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.2.7 Memory Synchronization Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
6.2.7.1 Memory Fence Interference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
6.2.7.2 Isolating Traffic with Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
6.2.7.3 Using Domains in CUDA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
6.2.8 Asynchronous Concurrent Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.2.8.1 Concurrent Execution between Host and Device . . . . . . . . . . . . . . . . . . . . 47
6.2.8.2 Concurrent Kernel Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
6.2.8.3 Overlap of Data Transfer and Kernel Execution . . . . . . . . . . . . . . . . . . . . . 47
6.2.8.4 Concurrent Data Transfers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.2.8.5 Streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
6.2.8.6 Programmatic Dependent Launch and Synchronization . . . . . . . . . . . . . . . 52
6.2.8.7 CUDA Graphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.2.8.8 Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.2.8.9 Synchronous Calls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.2.9 Multi-Device System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6.2.9.1 Device Enumeration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6.2.9.2 Device Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6.2.9.3 Stream and Event Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
6.2.9.4 Peer-to-Peer Memory Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
6.2.9.5 Peer-to-Peer Memory Copy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.2.10 Unified Virtual Address Space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
6.2.11 Interprocess Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.2.12 Error Checking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
6.2.13 Call Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.2.14 Texture and Surface Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.2.14.1 Texture Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
6.2.14.2 Surface Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.2.14.3 CUDA Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
6.2.14.4 Read/Write Coherency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.2.15 Graphics Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.2.15.1 OpenGL Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
6.2.15.2 Direct3D Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.2.15.3 SLI Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.2.16 External Resource Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.2.16.1 Vulkan Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.2.16.2 OpenGL Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.2.16.3 Direct3D 12 Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
6.2.16.4 Direct3D 11 Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.2.16.5 NVIDIA Software Communication Interface Interoperability (NVSCI) . . . . . . . . 126
6.3 Versioning and Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
6.4 Compute Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
6.5 Mode Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
6.6 Tesla Compute Cluster Mode for Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
8.2.2 Device Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
8.2.3 Multiprocessor Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
8.2.3.1 Occupancy Calculator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.3 Maximize Memory Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.3.1 Data Transfer between Host and Device . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
8.3.2 Device Memory Accesses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
8.4 Maximize Instruction Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.4.1 Arithmetic Instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.4.2 Control Flow Instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
8.4.3 Synchronization Instruction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
8.5 Minimize Memory Thrashing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
10.8.1.12 tex2DLod() for sparse CUDA arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
10.8.1.13 tex3D() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
10.8.1.14 tex3D() for sparse CUDA arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10.8.1.15 tex3DLod() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10.8.1.16 tex3DLod() for sparse CUDA arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10.8.1.17 tex3DGrad() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10.8.1.18 tex3DGrad() for sparse CUDA arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10.8.1.19 tex1DLayered() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
10.8.1.20 tex1DLayeredLod() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
10.8.1.21 tex1DLayeredGrad() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
10.8.1.22 tex2DLayered() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
10.8.1.23 tex2DLayered() for sparse CUDA arrays . . . . . . . . . . . . . . . . . . . . . . . . . 178
10.8.1.24 tex2DLayeredLod() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
10.8.1.25 tex2DLayeredLod() for sparse CUDA arrays . . . . . . . . . . . . . . . . . . . . . . . 179
10.8.1.26 tex2DLayeredGrad() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
10.8.1.27 tex2DLayeredGrad() for sparse CUDA arrays . . . . . . . . . . . . . . . . . . . . . . 179
10.8.1.28 texCubemap() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
10.8.1.29 texCubemapGrad() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
10.8.1.30 texCubemapLod() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
10.8.1.31 texCubemapLayered() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
10.8.1.32 texCubemapLayeredGrad() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
10.8.1.33 texCubemapLayeredLod() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
10.9 Surface Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
10.9.1 Surface Object API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
10.9.1.1 surf1Dread() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
10.9.1.2 surf1Dwrite() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
10.9.1.3 surf2Dread() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
10.9.1.4 surf2Dwrite() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
10.9.1.5 surf3Dread() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
10.9.1.6 surf3Dwrite() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
10.9.1.7 surf1DLayeredread() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
10.9.1.8 surf1DLayeredwrite() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
10.9.1.9 surf2DLayeredread() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
10.9.1.10 surf2DLayeredwrite() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
10.9.1.11 surfCubemapread() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
10.9.1.12 surfCubemapwrite() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10.9.1.13 surfCubemapLayeredread() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10.9.1.14 surfCubemapLayeredwrite() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10.10 Read-Only Data Cache Load Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10.11 Load Functions Using Cache Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
10.12 Store Functions Using Cache Hints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
10.13 Time Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
10.14 Atomic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
10.14.1 Arithmetic Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
10.14.1.1 atomicAdd() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
10.14.1.2 atomicSub() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
10.14.1.3 atomicExch() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
10.14.1.4 atomicMin() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
10.14.1.5 atomicMax() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
10.14.1.6 atomicInc() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
10.14.1.7 atomicDec() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
10.14.1.8 atomicCAS() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
10.14.2 Bitwise Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
10.14.2.1 atomicAnd() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
10.14.2.2 atomicOr() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
10.14.2.3 atomicXor() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
10.15 Address Space Predicate Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
10.15.1 __isGlobal() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
10.15.2 __isShared() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.15.3 __isConstant() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.15.4 __isGridConstant() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.15.5 __isLocal() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.16 Address Space Conversion Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.16.1 __cvta_generic_to_global() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.16.2 __cvta_generic_to_shared() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
10.16.3 __cvta_generic_to_constant() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
10.16.4 __cvta_generic_to_local() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
10.16.5 __cvta_global_to_generic() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
10.16.6 __cvta_shared_to_generic() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
10.16.7 __cvta_constant_to_generic() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10.16.8 __cvta_local_to_generic() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10.17 Alloca Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10.17.1 Synopsis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10.17.2 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10.17.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
10.18 Compiler Optimization Hint Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
10.18.1 __builtin_assume_aligned() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
10.18.2 __builtin_assume() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
10.18.3 __assume() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
10.18.4 __builtin_expect() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
10.18.5 __builtin_unreachable() . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
10.18.6 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
10.19 Warp Vote Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
10.20 Warp Match Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
10.20.1 Synopsis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
10.20.2 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
10.21 Warp Reduce Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
10.21.1 Synopsis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
10.21.2 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
10.22 Warp Shuffle Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
10.22.1 Synopsis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
10.22.2 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
10.22.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
10.22.3.1 Broadcast of a single value across a warp . . . . . . . . . . . . . . . . . . . . . . . . 201
10.22.3.2 Inclusive plus-scan across sub-partitions of 8 threads . . . . . . . . . . . . . . . . 202
10.22.3.3 Reduction across a warp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
10.23 Nanosleep Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
10.23.1 Synopsis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
10.23.2 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
10.23.3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
10.24 Warp Matrix Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
10.24.1 Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
10.24.2 Alternate Floating Point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
10.24.3 Double Precision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
10.24.4 Sub-byte Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
10.24.5 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
10.24.6 Element Types and Matrix Sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
10.24.7 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
10.25 DPX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
10.25.1 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
10.26 Asynchronous Barrier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
10.26.1 Simple Synchronization Pattern . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
10.26.2 Temporal Splitting and Five Stages of Synchronization . . . . . . . . . . . . . . . . . . . 212
10.26.3 Bootstrap Initialization, Expected Arrival Count, and Participation . . . . . . . . . . . . 213
10.26.4 A Barrier’s Phase: Arrival, Countdown, Completion, and Reset . . . . . . . . . . . . . . 214
10.26.5 Spatial Partitioning (also known as Warp Specialization) . . . . . . . . . . . . . . . . . . 215
10.26.6 Early Exit (Dropping out of Participation) . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
10.26.7 Completion Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
10.26.8 Memory Barrier Primitives Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
10.26.8.1 Data Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
10.26.8.2 Memory Barrier Primitives API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
10.27 Asynchronous Data Copies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
10.27.1 memcpy_async API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
10.27.2 Copy and Compute Pattern - Staging Data Through Shared Memory . . . . . . . . . . 220
10.27.3 Without memcpy_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
10.27.4 With memcpy_async . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
10.27.5 Asynchronous Data Copies using cuda::barrier . . . . . . . . . . . . . . . . . . . . . 223
10.27.6 Performance Guidance for memcpy_async . . . . . . . . . . . . . . . . . . . . . . . . . . 224
10.27.6.1 Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
10.27.6.2 Trivially copyable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
10.27.6.3 Warp Entanglement - Commit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
10.27.6.4 Warp Entanglement - Wait . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
10.27.6.5 Warp Entanglement - Arrive-On . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
10.27.6.6 Keep Commit and Arrive-On Operations Converged . . . . . . . . . . . . . . . . . . 226
10.28 Asynchronous Data Copies using cuda::pipeline . . . . . . . . . . . . . . . . . . . . . . 226
10.28.1 Single-Stage Asynchronous Data Copies using cuda::pipeline . . . . . . . . . . . . 226
10.28.2 Multi-Stage Asynchronous Data Copies using cuda::pipeline . . . . . . . . . . . . 228
10.28.3 Pipeline Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
10.28.4 Pipeline Primitives Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
10.28.4.1 memcpy_async Primitive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
10.28.4.2 Commit Primitive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
10.28.4.3 Wait Primitive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
10.28.4.4 Arrive On Barrier Primitive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
10.29 Asynchronous Data Copies using Tensor Memory Access (TMA) . . . . . . . . . . . . . . . 235
10.29.1 Using TMA to transfer one-dimensional arrays . . . . . . . . . . . . . . . . . . . . . . . . 236
10.29.1.1 One-dimensional TMA PTX wrappers . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
10.29.2 Using TMA to transfer multi-dimensional arrays . . . . . . . . . . . . . . . . . . . . . . . 240
10.29.2.1 Multi-dimensional TMA PTX wrappers . . . . . . . . . . . . . . . . . . . . . . . . . . 244
10.30 Profiler Counter Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
10.31 Assertion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
10.32 Trap function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
10.33 Breakpoint Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
10.34 Formatted Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
10.34.1 Format Specifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
10.34.2 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
10.34.3 Associated Host-Side API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
10.34.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
10.35 Dynamic Global Memory Allocation and Operations . . . . . . . . . . . . . . . . . . . . . . . 250
10.35.1 Heap Memory Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
10.35.2 Interoperability with Host Memory API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
10.35.3 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
10.35.3.1 Per Thread Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
10.35.3.2 Per Thread Block Allocation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
10.35.3.3 Allocation Persisting Between Kernel Launches . . . . . . . . . . . . . . . . . . . . 253
10.36 Execution Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
10.37 Launch Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
10.38 Maximum Number of Registers per Thread . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
10.39 #pragma unroll . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
10.40 SIMD Video Instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
10.41 Diagnostic Pragmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
12.2.1.3 Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
12.2.1.4 Streams and Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
12.2.1.5 Ordering and Concurrency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
12.2.1.6 Device Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
12.2.2 Memory Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
12.2.2.1 Coherence and Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
12.3 Programming Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
12.3.1 CUDA C++ Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
12.3.1.1 Device-Side Kernel Launch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
12.3.1.2 Streams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 300
12.3.1.3 Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 302
12.3.1.4 Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
12.3.1.5 Device Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
12.3.1.6 Memory Declarations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
12.3.1.7 API Errors and Launch Failures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
12.3.1.8 API Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
12.3.2 Device-side Launch from PTX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
12.3.2.1 Kernel Launch APIs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
12.3.2.2 Parameter Buffer Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
12.3.3 Toolkit Support for Dynamic Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
12.3.3.1 Including Device Runtime API in CUDA Code . . . . . . . . . . . . . . . . . . . . . . 309
12.3.3.2 Compiling and Linking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
12.4 Programming Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
12.4.1 Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
12.4.2 Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
12.4.2.1 Dynamic-parallelism-enabled Kernel Overhead . . . . . . . . . . . . . . . . . . . . . 311
12.4.3 Implementation Restrictions and Limitations . . . . . . . . . . . . . . . . . . . . . . . . . 311
12.4.3.1 Runtime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
12.5 CDP2 vs CDP1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
12.5.1 Differences Between CDP1 and CDP2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
12.5.2 Compatibility and Interoperability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
12.6 Legacy CUDA Dynamic Parallelism (CDP1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
12.6.1 Execution Environment and Memory Model (CDP1) . . . . . . . . . . . . . . . . . . . . . 315
12.6.1.1 Execution Environment (CDP1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
12.6.1.2 Memory Model (CDP1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
12.6.2 Programming Interface (CDP1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
12.6.2.1 CUDA C++ Reference (CDP1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
12.6.2.2 Device-side Launch from PTX (CDP1) . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
12.6.2.3 Toolkit Support for Dynamic Parallelism (CDP1) . . . . . . . . . . . . . . . . . . . . 332
12.6.3 Programming Guidelines (CDP1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
12.6.3.1 Basics (CDP1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
12.6.3.2 Performance (CDP1) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
12.6.3.3 Implementation Restrictions and Limitations (CDP1) . . . . . . . . . . . . . . . . . 335
13.7 Controlling Access Rights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
17.3 C++17 Language Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
17.4 C++20 Language Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
17.5 Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
17.5.1 Host Compiler Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
17.5.2 Preprocessor Symbols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
17.5.2.1 __CUDA_ARCH__ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
17.5.3 Qualifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
17.5.3.1 Device Memory Space Specifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
17.5.3.2 __managed__ Memory Space Specifier . . . . . . . . . . . . . . . . . . . . . . . . . . 392
17.5.3.3 Volatile Qualifier . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
17.5.4 Pointers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
17.5.5 Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
17.5.5.1 Assignment Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
17.5.5.2 Address Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
17.5.6 Run Time Type Information (RTTI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
17.5.7 Exception Handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
17.5.8 Standard Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
17.5.9 Namespace Reservations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
17.5.10 Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
17.5.10.1 External Linkage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396
17.5.10.2 Implicitly-declared and explicitly-defaulted functions . . . . . . . . . . . . . . . . . 396
17.5.10.3 Function Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
17.5.10.4 Static Variables within Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
17.5.10.5 Function Pointers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
17.5.10.6 Function Recursion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
17.5.10.7 Friend Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
17.5.10.8 Operator Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
17.5.11 Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
17.5.11.1 Data Members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
17.5.11.2 Function Members . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
17.5.11.3 Virtual Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
17.5.11.4 Virtual Base Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
17.5.11.5 Anonymous Unions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
17.5.11.6 Windows-Specific . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
17.5.12 Templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403
17.5.13 Trigraphs and Digraphs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
17.5.14 Const-qualified variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
17.5.15 Long Double . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
17.5.16 Deprecation Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 405
17.5.17 Noreturn Annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
17.5.18 [[likely]] / [[unlikely]] Standard Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
17.5.19 const and pure GNU Attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
17.5.20 Intel Host Compiler Specific . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
17.5.21 C++11 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
17.5.21.1 Lambda Expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
17.5.21.2 std::initializer_list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 408
17.5.21.3 Rvalue references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
17.5.21.4 Constexpr functions and function templates . . . . . . . . . . . . . . . . . . . . . . 409
17.5.21.5 Constexpr variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
17.5.21.6 Inline namespaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
17.5.21.7 thread_local . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
17.5.21.8 __global__ functions and function templates . . . . . . . . . . . . . . . . . . . . . . 412
17.5.21.9 __managed__ and __shared__ variables . . . . . . . . . . . . . . . . . . . . . . . . . 413
17.5.21.10 Defaulted functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
17.5.22 C++14 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
17.5.22.1 Functions with deduced return type . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
17.5.22.2 Variable templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
17.5.23 C++17 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
17.5.23.1 Inline Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
17.5.23.2 Structured Binding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
17.5.24 C++20 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
17.5.24.1 Module support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
17.5.24.2 Coroutine support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
17.5.24.3 Three-way comparison operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
17.5.24.4 Consteval functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417
17.6 Polymorphic Function Wrappers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418
17.7 Extended Lambdas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 420
17.7.1 Extended Lambda Type Traits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
17.7.2 Extended Lambda Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
17.7.3 Notes on __host__ __device__ lambdas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
17.7.4 *this Capture By Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
17.7.5 Additional Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
17.8 Code Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
17.8.1 Data Aggregation Class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
17.8.2 Derived Class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
17.8.3 Class Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
17.8.4 Function Template . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
17.8.5 Functor Class . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
19.8.3 Shared Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 464
19.8.4 Features Accelerating Specialized Computations . . . . . . . . . . . . . . . . . . . . . . 464
22.2.8.7 Minimizing TLB cache misses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
22.2.8.8 Avoid frequent writes to GPU-resident memory from the CPU . . . . . . . . . . . 517
22.2.8.9 Exploiting asynchronous access to system memory . . . . . . . . . . . . . . . . . . 518
22.2.8.10 Granularity of memory transfers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 519
22.3 Unified memory on devices without full CUDA Unified Memory support . . . . . . . . . . 520
22.3.1 Unified memory on devices with only CUDA Managed Memory support . . . . . . . . 520
22.3.2 Unified memory on Windows or devices with compute capability 5.x . . . . . . . . . . 520
22.3.2.1 Data Migration and Coherency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
22.3.2.2 GPU Memory Oversubscription . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
22.3.2.3 Multi-GPU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
22.3.2.4 Coherency and Concurrency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 521
24 Notices 535
24.1 Notice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535
24.2 OpenCL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
24.3 Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 536
▶ Added Section Asynchronous Data Copies using Tensor Memory Access (TMA).
▶ Added Section Unified Memory Programming guide supporting Grace Hopper with Address Translation Service (ATS) and Heterogeneous Memory Management (HMM) on x86.
Chapter 1. The Benefits of Using GPUs
The Graphics Processing Unit (GPU)6 provides much higher instruction throughput and memory band-
width than the CPU within a similar price and power envelope. Many applications leverage these higher
capabilities to run faster on the GPU than on the CPU (see GPU Applications). Other computing de-
vices, like FPGAs, are also very energy efficient, but offer much less programming flexibility than GPUs.
This difference in capabilities between the GPU and the CPU exists because they are designed with
different goals in mind. While the CPU is designed to excel at executing a sequence of operations,
called a thread, as fast as possible and can execute a few tens of these threads in parallel, the GPU
is designed to excel at executing thousands of them in parallel (amortizing the slower single-thread
performance to achieve greater throughput).
The GPU is specialized for highly parallel computations and therefore designed such that more transis-
tors are devoted to data processing rather than data caching and flow control. The schematic Figure
1 shows an example distribution of chip resources for a CPU versus a GPU.
Devoting more transistors to data processing, for example, floating-point computations, is beneficial for highly parallel computations; the GPU can hide memory access latencies with computation, instead of relying on large data caches and complex flow control to avoid long memory access latencies, both of which are expensive in terms of transistors.
6 The graphics qualifier comes from the fact that when the GPU was originally created, two decades ago, it was designed as a specialized processor to accelerate graphics rendering. Driven by the insatiable market demand for real-time, high-definition, 3D graphics, it has evolved into a general processor used for many more workloads than just graphics rendering.
In general, an application has a mix of parallel parts and sequential parts, so systems are designed with
a mix of GPUs and CPUs in order to maximize overall performance. Applications with a high degree of
parallelism can exploit this massively parallel nature of the GPU to achieve higher performance than
on the CPU.
In November 2006, NVIDIA® introduced CUDA® , a general purpose parallel computing platform and
programming model that leverages the parallel compute engine in NVIDIA GPUs to solve many complex
computational problems in a more efficient way than on a CPU.
CUDA comes with a software environment that allows developers to use C++ as a high-level program-
ming language. As illustrated by Figure 2, other languages, application programming interfaces, or
directives-based approaches are supported, such as FORTRAN, DirectCompute, OpenACC.
Figure 2: GPU Computing Applications. CUDA is designed to support various languages and application
programming interfaces.
The advent of multicore CPUs and manycore GPUs means that mainstream processor chips are now
parallel systems. The challenge is to develop application software that transparently scales its paral-
lelism to leverage the increasing number of processor cores, much as 3D graphics applications trans-
parently scale their parallelism to manycore GPUs with widely varying numbers of cores.
The CUDA parallel programming model is designed to overcome this challenge while maintaining a low
learning curve for programmers familiar with standard programming languages such as C.
At its core are three key abstractions — a hierarchy of thread groups, shared memories, and barrier
synchronization — that are simply exposed to the programmer as a minimal set of language extensions.
These abstractions provide fine-grained data parallelism and thread parallelism, nested within coarse-
grained data parallelism and task parallelism. They guide the programmer to partition the problem
into coarse sub-problems that can be solved independently in parallel by blocks of threads, and each
sub-problem into finer pieces that can be solved cooperatively in parallel by all threads within the block.
This decomposition preserves language expressivity by allowing threads to cooperate when solving
each sub-problem, and at the same time enables automatic scalability. Indeed, each block of threads
can be scheduled on any of the available multiprocessors within a GPU, in any order, concurrently or
sequentially, so that a compiled CUDA program can execute on any number of multiprocessors as
illustrated by Figure 3, and only the runtime system needs to know the physical multiprocessor count.
This scalable programming model allows the GPU architecture to span a wide market range by simply
scaling the number of multiprocessors and memory partitions: from the high-performance enthusiast
GeForce GPUs and professional Quadro and Tesla computing products to a variety of inexpensive,
mainstream GeForce GPUs (see CUDA-Enabled GPUs for a list of all CUDA-enabled GPUs).
Note: A GPU is built around an array of Streaming Multiprocessors (SMs) (see Hardware Implementation for
more details). A multithreaded program is partitioned into blocks of threads that execute independently from
each other, so that a GPU with more multiprocessors will automatically execute the program in less time than a
GPU with fewer multiprocessors.
Chapter 5. Programming Model
This chapter introduces the main concepts behind the CUDA programming model by outlining how
they are exposed in C++.
An extensive description of CUDA C++ is given in Programming Interface.
Full code for the vector addition example used in this chapter and the next can be found in the vec-
torAdd CUDA sample.
5.1. Kernels
CUDA C++ extends C++ by allowing the programmer to define C++ functions, called kernels, that, when
called, are executed N times in parallel by N different CUDA threads, as opposed to only once like regular
C++ functions.
A kernel is defined using the __global__ declaration specifier and the number of CUDA threads that
execute that kernel for a given kernel call is specified using a new <<<...>>>execution configuration
syntax (see C++ Language Extensions). Each thread that executes the kernel is given a unique thread
ID that is accessible within the kernel through built-in variables.
As an illustration, the following sample code, using the built-in variable threadIdx, adds two vectors
A and B of size N and stores the result into vector C:
// Kernel definition
__global__ void VecAdd(float* A, float* B, float* C)
{
    int i = threadIdx.x;
    C[i] = A[i] + B[i];
}

int main()
{
    ...
    // Kernel invocation with N threads
    VecAdd<<<1, N>>>(A, B, C);
    ...
}
Here, each of the N threads that execute VecAdd() performs one pair-wise addition.
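For illustration, a minimal host-side sketch of what the elided parts of main() could look like is shown below. The fixed size N = 256, the initialization values, and the omission of error checking are choices made for this sketch, not prescribed by the example above; the allocation, copy, and free calls are the standard CUDA runtime API.
#include <cuda_runtime.h>
#include <cstdio>

#define N 256

// Kernel definition (repeated from above so the sketch is self-contained)
__global__ void VecAdd(float* A, float* B, float* C)
{
    int i = threadIdx.x;
    C[i] = A[i] + B[i];
}

int main()
{
    size_t size = N * sizeof(float);
    float hA[N], hB[N], hC[N];
    for (int i = 0; i < N; ++i) { hA[i] = float(i); hB[i] = 2.0f * i; }

    // Allocate device memory and copy the inputs to the device.
    float *A, *B, *C;
    cudaMalloc(&A, size);
    cudaMalloc(&B, size);
    cudaMalloc(&C, size);
    cudaMemcpy(A, hA, size, cudaMemcpyHostToDevice);
    cudaMemcpy(B, hB, size, cudaMemcpyHostToDevice);

    // Kernel invocation with N threads in a single block.
    VecAdd<<<1, N>>>(A, B, C);

    // Copy the result back and release device memory.
    cudaMemcpy(hC, C, size, cudaMemcpyDeviceToHost);
    cudaFree(A); cudaFree(B); cudaFree(C);

    printf("hC[1] = %f\n", hC[1]);  // expected: 3.0
    return 0;
}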
5.2. Thread Hierarchy
As an example, the following code adds two matrices A and B of size NxN and stores the result into matrix C, using a single block of N * N threads:
// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N],
                       float C[N][N])
{
    int i = threadIdx.x;
    int j = threadIdx.y;
    C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation with one block of N * N * 1 threads
    int numBlocks = 1;
    dim3 threadsPerBlock(N, N);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    ...
}
There is a limit to the number of threads per block, since all threads of a block are expected to reside
on the same streaming multiprocessor core and must share the limited memory resources of that
core. On current GPUs, a thread block may contain up to 1024 threads.
However, a kernel can be executed by multiple equally-shaped thread blocks, so that the total number
of threads is equal to the number of threads per block times the number of blocks.
Blocks are organized into a one-dimensional, two-dimensional, or three-dimensional grid of thread
blocks as illustrated by Figure 4. The number of thread blocks in a grid is usually dictated by the size
of the data being processed, which typically exceeds the number of processors in the system.
The number of threads per block and the number of blocks per grid specified in the <<<...>>> syntax
can be of type int or dim3. Two-dimensional blocks or grids can be specified as in the example above.
Each block within the grid can be identified by a one-dimensional, two-dimensional, or three-
dimensional unique index accessible within the kernel through the built-in blockIdx variable. The
dimension of the thread block is accessible within the kernel through the built-in blockDim variable.
Extending the previous MatAdd() example to handle multiple blocks, the code becomes as follows.
// Kernel definition
__global__ void MatAdd(float A[N][N], float B[N][N],
                       float C[N][N])
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < N && j < N)
        C[i][j] = A[i][j] + B[i][j];
}

int main()
{
    ...
    // Kernel invocation
    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);
    MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);
    ...
}
A thread block size of 16x16 (256 threads), although arbitrary in this case, is a common choice. The
grid is created with enough blocks to have one thread per matrix element as before. For simplicity,
this example assumes that the number of threads per grid in each dimension is evenly divisible by the
number of threads per block in that dimension, although that need not be the case.
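When N is not evenly divisible by the block size, a common pattern is to round the block count up with a ceiling division and rely on the bounds check inside the kernel. The snippet below is a sketch that would replace the numBlocks computation in main() above; it is not part of the original example.
// Round up so that a partially filled block covers the remaining elements;
// the if (i < N && j < N) check in MatAdd() discards the extra threads.
dim3 threadsPerBlock(16, 16);
dim3 numBlocks((N + threadsPerBlock.x - 1) / threadsPerBlock.x,
               (N + threadsPerBlock.y - 1) / threadsPerBlock.y);
MatAdd<<<numBlocks, threadsPerBlock>>>(A, B, C);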
Thread blocks are required to execute independently: It must be possible to execute them in any order,
in parallel or in series. This independence requirement allows thread blocks to be scheduled in any order
across any number of cores as illustrated by Figure 3, enabling programmers to write code that scales
with the number of cores.
Threads within a block can cooperate by sharing data through some shared memory and by synchroniz-
ing their execution to coordinate memory accesses. More precisely, one can specify synchronization
points in the kernel by calling the __syncthreads() intrinsic function; __syncthreads() acts as a
barrier at which all threads in the block must wait before any is allowed to proceed. Shared Memory
gives an example of using shared memory. In addition to __syncthreads(), the Cooperative Groups
API provides a rich set of thread-synchronization primitives.
For efficient cooperation, the shared memory is expected to be a low-latency memory near each pro-
cessor core (much like an L1 cache) and __syncthreads() is expected to be lightweight.
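As a small illustration of this cooperation (a sketch, not an example taken from this guide; it assumes blocks of exactly 256 threads and an array length that is a multiple of 256), the kernel below stages data in shared memory and uses __syncthreads() so that every element has been loaded before any thread reads a neighboring one:
__global__ void ReverseBlock(int* data)
{
    __shared__ int tile[256];            // one element per thread; blockDim.x == 256 assumed
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    tile[threadIdx.x] = data[i];         // each thread stages one element in shared memory
    __syncthreads();                     // wait until the whole tile is populated

    // Now it is safe to read elements written by other threads of the block:
    // reverse this block's segment in place.
    data[i] = tile[blockDim.x - 1 - threadIdx.x];
}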
Note: In a kernel launched using cluster support, the gridDim variable still denotes the size in terms
of number of thread blocks, for compatibility purposes. The rank of a block in a cluster can be found
using the Cluster Group API.
A thread block cluster can be enabled in a kernel either by using the compile-time kernel attribute __cluster_dims__(X,Y,Z) or by using the CUDA kernel launch API cudaLaunchKernelEx. The example below shows how to launch a cluster using the compile-time kernel attribute. A cluster size set via the kernel attribute is fixed at compile time, and the kernel can then be launched using the classical <<< , >>> syntax. If a kernel uses a compile-time cluster size, the cluster size cannot be modified when launching the kernel.
// Kernel definition
// Compile time cluster size 2 in X-dimension and 1 in Y and Z dimension
__global__ void __cluster_dims__(2, 1, 1) cluster_kernel(float *input, float* output)
{

}

int main()
{
    float *input, *output;
    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);

    // The grid dimension is not affected by cluster launch, and is still enumerated
    // using number of blocks.
    // The grid dimension must be a multiple of cluster size.
    cluster_kernel<<<numBlocks, threadsPerBlock>>>(input, output);
}
A thread block cluster size can also be set at runtime and the kernel can be launched using the CUDA
kernel launch API cudaLaunchKernelEx. The code example below shows how to launch a cluster
kernel using the extensible API.
// Kernel definition
// No compile time attribute attached to the kernel
__global__ void cluster_kernel(float *input, float* output)
{

}

int main()
{
    float *input, *output;
    dim3 threadsPerBlock(16, 16);
    dim3 numBlocks(N / threadsPerBlock.x, N / threadsPerBlock.y);

    // Kernel invocation with runtime cluster size
    {
        cudaLaunchConfig_t config = {0};
        // The grid dimension is not affected by cluster launch, and is still
        // enumerated using number of blocks.
        // The grid dimension should be a multiple of cluster size.
        config.gridDim = numBlocks;
        config.blockDim = threadsPerBlock;

        cudaLaunchAttribute attribute[1];
        attribute[0].id = cudaLaunchAttributeClusterDimension;
        attribute[0].val.clusterDim.x = 2; // Cluster size in X-dimension
        attribute[0].val.clusterDim.y = 1;
        attribute[0].val.clusterDim.z = 1;
        config.attrs = attribute;
        config.numAttrs = 1;

        cudaLaunchKernelEx(&config, cluster_kernel, input, output);
    }
}
On GPUs with compute capability 9.0, all the thread blocks in a cluster are guaranteed to be co-scheduled on a single GPU Processing Cluster (GPC), which allows the thread blocks in the cluster to perform hardware-supported synchronization using the Cluster Group API cluster.sync(). The cluster group also provides member functions to query the cluster size in terms of number of threads or number of blocks using the num_threads() and num_blocks() APIs, respectively, and to query its dimensions in units of threads or blocks using the dim_threads() and dim_blocks() APIs.
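As a minimal sketch of this interface (assuming the cooperative_groups cluster_group API and a device of compute capability 9.0; this example is not taken from the original text), a kernel can obtain its cluster group, query it, and synchronize across the whole cluster:
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

// Compile-time cluster of 2 blocks in the X-dimension, as in the example above.
__global__ void __cluster_dims__(2, 1, 1) cluster_query_kernel(unsigned int* counts)
{
    cg::cluster_group cluster = cg::this_cluster();

    // One thread per block records how many blocks the cluster contains.
    if (threadIdx.x == 0)
        counts[blockIdx.x] = cluster.num_blocks();

    // Hardware-supported barrier across every thread block in the cluster.
    cluster.sync();
}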
Thread blocks that belong to a cluster have access to the Distributed Shared Memory. Thread blocks
in a cluster have the ability to read, write, and perform atomics to any address in the distributed shared
memory. Distributed Shared Memory gives an example of performing histograms in distributed shared
memory.
Note: Serial code executes on the host while parallel code executes on the device.
These thread scopes are implemented as extensions to standard C++ in the CUDA Standard C++ li-
brary.
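For example (a sketch assuming the libcu++ cuda::atomic_ref interface; the kernel and variable names are illustrative), a thread scope is supplied as a template argument to the library's synchronization primitives:
#include <cuda/atomic>

__global__ void BlockScopedCount(int* counters)
{
    // Wrap a plain int in an atomic reference whose scope is limited to the
    // current thread block: only threads of the same block access this counter,
    // so the narrower scope is sufficient and can be cheaper than system scope.
    cuda::atomic_ref<int, cuda::thread_scope_block> count(counters[blockIdx.x]);
    count.fetch_add(1, cuda::std::memory_order_relaxed);
}
On the host, counters would simply be an array of int in device memory, allocated with cudaMalloc and zeroed before the launch.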
Note: The compute capability version of a particular GPU should not be confused with the CUDA
version (for example, CUDA 7.5, CUDA 8, CUDA 9), which is the version of the CUDA software platform.
The CUDA platform is used by application developers to create applications that run on many genera-
tions of GPU architectures, including future GPU architectures yet to be invented. While new versions
of the CUDA platform often add native support for a new GPU architecture by supporting the com-
pute capability version of that architecture, new versions of the CUDA platform typically also include
software features that are independent of hardware generation.
The Tesla and Fermi architectures are no longer supported starting with CUDA 7.0 and CUDA 9.0, re-
spectively.
CUDA C++ provides a simple path for users familiar with the C++ programming language to easily write
programs for execution by the device.
It consists of a minimal set of extensions to the C++ language and a runtime library.
The core language extensions have been introduced in Programming Model. They allow programmers
to define a kernel as a C++ function and use some new syntax to specify the grid and block dimension
each time the function is called. A complete description of all extensions can be found in C++ Language
Extensions. Any source file that contains some of these extensions must be compiled with nvcc as
outlined in Compilation with NVCC.
The runtime is introduced in CUDA Runtime. It provides C and C++ functions that execute on the host
to allocate and deallocate device memory, transfer data between host memory and device memory,
manage systems with multiple devices, etc. A complete description of the runtime can be found in
the CUDA reference manual.
The runtime is built on top of a lower-level C API, the CUDA driver API, which is also accessible by the
application. The driver API provides an additional level of control by exposing lower-level concepts such
as CUDA contexts - the analogue of host processes for the device - and CUDA modules - the analogue
of dynamically loaded libraries for the device. Most applications do not use the driver API as they do
not need this additional level of control and when using the runtime, context and module management
are implicit, resulting in more concise code. As the runtime is interoperable with the driver API, most
applications that need some driver API features can default to use the runtime API and only use the
driver API where needed. The driver API is introduced in Driver API and fully described in the reference
manual.
Source files compiled with nvcc can include a mix of host code (i.e., code that executes on the host)
and device code (i.e., code that executes on the device). nvcc’s basic workflow consists in separating
device code from host code and then:
▶ compiling the device code into an assembly form (PTX code) and/or binary form (cubin object),
▶ and modifying the host code by replacing the <<<...>>> syntax introduced in Kernels (and de-
scribed in more details in Execution Configuration) by the necessary CUDA runtime function calls
to load and launch each compiled kernel from the PTX code and/or cubin object.
The modified host code is output either as C++ code that is left to be compiled using another tool or
as object code directly by letting nvcc invoke the host compiler during the last compilation stage.
Applications can then:
▶ Either link to the compiled host code (this is the most common case),
▶ Or ignore the modified host code (if any) and use the CUDA driver API (see Driver API) to load and
execute the PTX code or cubin object.
Any PTX code loaded by an application at runtime is compiled further to binary code by the device
driver. This is called just-in-time compilation. Just-in-time compilation increases application load time,
but allows the application to benefit from any new compiler improvements coming with each new
device driver. It is also the only way for applications to run on devices that did not exist at the time the
application was compiled, as detailed in Application Compatibility.
When the device driver just-in-time compiles some PTX code for some application, it automatically
caches a copy of the generated binary code in order to avoid repeating the compilation in subsequent
invocations of the application. The cache - referred to as compute cache - is automatically invalidated
when the device driver is upgraded, so that applications can benefit from the improvements in the
new just-in-time compiler built into the device driver.
Environment variables are available to control just-in-time compilation as described in CUDA Environment Variables.
As an alternative to using nvcc to compile CUDA C++ device code, NVRTC can be used to compile
CUDA C++ device code to PTX at runtime. NVRTC is a runtime compilation library for CUDA C++; more
information can be found in the NVRTC User guide.
Note: Binary compatibility is supported only for the desktop. It is not supported for Tegra. Also, the
binary compatibility between desktop and Tegra is not supported.
For example, an nvcc invocation with three -gencode options (arch=compute_50,code=sm_50; arch=compute_60,code=sm_60; and arch=compute_70,code=\"compute_70,sm_70\") embeds binary code compatible with compute capability 5.0 and 6.0 (first and second -gencode options) and PTX and binary code compatible with compute capability 7.0 (third -gencode option).
Host code is generated to automatically select at runtime the most appropriate code to load and
execute, which, in the above example, will be:
▶ 5.0 binary code for devices with compute capability 5.0 and 5.2,
▶ 6.0 binary code for devices with compute capability 6.0 and 6.1,
▶ 7.0 binary code for devices with compute capability 7.0 and 7.5,
▶ PTX code which is compiled to binary code at runtime for devices with compute capability 8.0
and 8.6.
x.cu can have an optimized code path that uses warp reduction operations, for example, which are
only supported in devices of compute capability 8.0 and higher. The __CUDA_ARCH__ macro can be
used to differentiate various code paths based on compute capability. It is only defined for device
code. When compiling with -arch=compute_80 for example, __CUDA_ARCH__ is equal to 800.
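For illustration, a hedged sketch of such a differentiated code path (the sum_kernel name and the reduction itself are illustrative; __reduce_add_sync() requires compute capability 8.0 or higher):

__global__ void sum_kernel(const int *in, int *out)
{
#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ >= 800)
    // Path using the warp reduction intrinsic available on compute capability 8.0+
    int v = in[threadIdx.x];
    int warp_sum = __reduce_add_sync(0xffffffff, v);
    if ((threadIdx.x & 31) == 0)
        atomicAdd(out, warp_sum);
#else
    // Fallback path for older architectures
    atomicAdd(out, in[threadIdx.x]);
#endif
}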
If x.cu is compiled for architecture conditional features, for example with sm_90a or compute_90a, the code can only run on devices with compute capability 9.0.
Applications using the driver API must compile code to separate files and explicitly load and execute
the most appropriate file at runtime.
The Volta architecture introduces Independent Thread Scheduling which changes the way threads are
scheduled on the GPU. For code relying on specific behavior of SIMT scheduling in previous architec-
tures, Independent Thread Scheduling may alter the set of participating threads, leading to incorrect
results. To aid migration while implementing the corrective actions detailed in Independent Thread
Scheduling, Volta developers can opt-in to Pascal’s thread scheduling with the compiler option com-
bination -arch=compute_60 -code=sm_70.
The nvcc user manual lists various shorthands for the -arch, -code, and -gencode compiler op-
tions. For example, -arch=sm_70 is a shorthand for -arch=compute_70 -code=compute_70,
sm_70 (which is the same as -gencode arch=compute_70,code=\"compute_70,sm_70\").
Shared Memory illustrates the use of shared memory, introduced in Thread Hierarchy, to maximize
performance.
Page-Locked Host Memory introduces page-locked host memory that is required to overlap kernel
execution with data transfers between host and device memory.
Asynchronous Concurrent Execution describes the concepts and API used to enable asynchronous
concurrent execution at various levels in the system.
Multi-Device System shows how the programming model extends to a system with multiple devices
attached to the same host.
Error Checking describes how to properly check the errors generated by the runtime.
Call Stack mentions the runtime functions used to manage the CUDA C++ call stack.
Texture and Surface Memory presents the texture and surface memory spaces that provide another
way to access device memory; they also expose a subset of the GPU texturing hardware.
Graphics Interoperability introduces the various functions the runtime provides to interoperate with
the two main graphics APIs, OpenGL and Direct3D.
6.2.1. Initialization
As of CUDA 12.0, the cudaInitDevice() and cudaSetDevice() calls initialize the runtime and the
primary context associated with the specified device. Absent these calls, the runtime will implicitly
use device 0 and self-initialize as needed to process other runtime API requests. One needs to keep
this in mind when timing runtime function calls and when interpreting the error code from the first
call into the runtime. Before 12.0, cudaSetDevice() would not initialize the runtime and applications
would often use the no-op runtime call cudaFree(0) to isolate the runtime initialization from other
API activity (both for the sake of timing and error handling).
The runtime creates a CUDA context for each device in the system (see Context for more details on
CUDA contexts). This context is the primary context for this device and is initialized at the first runtime
function which requires an active context on this device. It is shared among all the host threads of the
application. As part of this context creation, the device code is just-in-time compiled if necessary (see
Just-in-Time Compilation) and loaded into device memory. This all happens transparently. If needed,
for example, for driver API interoperability, the primary context of a device can be accessed from the
driver API as described in Interoperability between Runtime and Driver APIs.
When a host thread calls cudaDeviceReset(), this destroys the primary context of the device the
host thread currently operates on (i.e., the current device as defined in Device Selection). The next
runtime function call made by any host thread that has this device as current will create a new primary
context for this device.
Note: The CUDA interfaces use global state that is initialized during host program initiation and destroyed during host program termination. The CUDA runtime and driver cannot detect if this state is invalid, so using any of these interfaces, implicitly or explicitly, during program initiation or during program termination (after main returns) will result in undefined behavior.
As of CUDA 12.0, cudaSetDevice() will now explicitly initialize the runtime after changing the current
device for the host thread. Previous versions of CUDA delayed runtime initialization on the new device
until the first runtime call was made after cudaSetDevice(). This change means that it is now very
important to check the return value of cudaSetDevice() for initialization errors.
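A minimal sketch of this pattern (device 0 is assumed):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // As of CUDA 12.0, cudaSetDevice() initializes the runtime for the selected
    // device, so its return value must be checked for initialization errors.
    cudaError_t err = cudaSetDevice(0);
    if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaSetDevice failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    // ... launch kernels, allocate memory, etc. ...
    return 0;
}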
The runtime functions from the error handling and version management sections of the reference
manual do not initialize the runtime.
Note: On devices of compute capability 5.3 (Maxwell) and earlier, the CUDA driver creates an uncommitted 40-bit virtual address reservation to ensure that memory allocations (pointers) fall into the supported range. This reservation appears as reserved virtual memory, but does not occupy any physical memory until the program actually allocates memory.
Linear memory is typically allocated using cudaMalloc() and freed using cudaFree(), and data transfers between host memory and device memory are typically done using cudaMemcpy(). In the vector
addition code sample of Kernels, the vectors need to be copied from host memory to device memory:
// Device code
__global__ void VecAdd(float* A, float* B, float* C, int N)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < N)
        C[i] = A[i] + B[i];
}

// Host code
int main()
{
    int N = ...;
    size_t size = N * sizeof(float);

    // Allocate vectors in device memory
    float *d_A, *d_B, *d_C;
    cudaMalloc(&d_A, size);
    cudaMalloc(&d_B, size);
    cudaMalloc(&d_C, size);

    // Copy input vectors from host memory to device memory
    // (h_A and h_B are allocated and initialized on the host, not shown)
    cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);

    // Invoke kernel
    int threadsPerBlock = 256;
    int blocksPerGrid =
        (N + threadsPerBlock - 1) / threadsPerBlock;
    VecAdd<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C, N);

    // Copy result from device memory back to host memory (h_C)
    cudaMemcpy(h_C, d_C, size, cudaMemcpyDeviceToHost);

    // Free device memory
    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
}
Linear memory can also be allocated through cudaMallocPitch() and cudaMalloc3D(). These functions are recommended for allocations of 2D or 3D arrays as they make sure that the allocation is appropriately padded to meet the alignment requirements described in Device Memory Accesses, therefore ensuring best performance when accessing the row addresses or performing copies between
2D arrays and other regions of device memory (using the cudaMemcpy2D() and cudaMemcpy3D()
functions). The returned pitch (or stride) must be used to access array elements. The following code
sample allocates a width x height 2D array of floating-point values and shows how to loop over the
array elements in device code:
// Host code
int width = 64, height = 64;
float* devPtr;
size_t pitch;
cudaMallocPitch(&devPtr, &pitch,
                width * sizeof(float), height);
MyKernel<<<100, 512>>>(devPtr, pitch, width, height);

// Device code
__global__ void MyKernel(float* devPtr,
                         size_t pitch, int width, int height)
{
    for (int r = 0; r < height; ++r) {
        float* row = (float*)((char*)devPtr + r * pitch);
        for (int c = 0; c < width; ++c) {
            float element = row[c];
        }
    }
}
The following code sample allocates a width x height x depth 3D array of floating-point values and
shows how to loop over the array elements in device code:
∕∕ Host code
int width = 64, height = 64, depth = 64;
cudaExtent extent = make_cudaExtent(width * sizeof(float),
height, depth);
cudaPitchedPtr devPitchedPtr;
cudaMalloc3D(&devPitchedPtr, extent);
MyKernel<<<100, 512>>>(devPitchedPtr, width, height, depth);
∕∕ Device code
__global__ void MyKernel(cudaPitchedPtr devPitchedPtr,
int width, int height, int depth)
{
char* devPtr = devPitchedPtr.ptr;
size_t pitch = devPitchedPtr.pitch;
size_t slicePitch = pitch * height;
for (int z = 0; z < depth; ++z) {
char* slice = devPtr + z * slicePitch;
for (int y = 0; y < height; ++y) {
float* row = (float*)(slice + y * pitch);
for (int x = 0; x < width; ++x) {
float element = row[x];
}
}
}
}
Note: To avoid allocating too much memory and thus impacting system-wide performance, request
the allocation parameters from the user based on the problem size. If the allocation fails, you can fall back to other slower memory types (cudaMallocHost(), cudaHostRegister(), etc.), or return
an error telling the user how much memory was needed that was denied. If your application cannot
request the allocation parameters for some reason, we recommend using cudaMallocManaged() for
platforms that support it.
The reference manual lists all the various functions used to copy memory between linear memory allo-
cated with cudaMalloc(), linear memory allocated with cudaMallocPitch() or cudaMalloc3D(),
CUDA arrays, and memory allocated for variables declared in global or constant memory space.
The following code sample illustrates various ways of accessing global variables via the runtime API:
__constant__ float constData[256];
float data[256];
cudaMemcpyToSymbol(constData, data, sizeof(data));
cudaMemcpyFromSymbol(data, constData, sizeof(data));

cudaGetSymbolAddress() is used to retrieve the address pointing to the memory allocated for a variable declared in global memory space. The size of the allocated memory is obtained through cudaGetSymbolSize().
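As a short sketch (the devData symbol and query_symbol helper are illustrative):

__device__ float devData[256];

void query_symbol()
{
    void *devPtr = nullptr;
    size_t symbolSize = 0;
    // Retrieve the device address and size of the __device__ variable
    cudaGetSymbolAddress(&devPtr, devData);
    cudaGetSymbolSize(&symbolSize, devData);
    // devPtr can now be passed to kernels or to cudaMemcpy()
}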
A portion of the L2 cache can be set aside to be used for persisting data accesses to global memory. Persisting accesses have prioritized use of this set-aside portion of L2 cache, whereas normal or streaming accesses to global memory can only utilize this portion of L2 when it is unused by persisting accesses.
The L2 cache set-aside size for persisting accesses may be adjusted, within limits:
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, device_id);
size_t size = min(int(prop.l2CacheSize * 0.75), prop.persistingL2CacheMaxSize);
cudaDeviceSetLimit(cudaLimitPersistingL2CacheSize, size); /* set-aside 3/4 of L2 cache for persisting accesses or the max allowed */
When the GPU is configured in Multi-Instance GPU (MIG) mode, the L2 cache set-aside functionality
is disabled.
When using the Multi-Process Service (MPS), the L2 cache set-aside size cannot be changed by cudaDeviceSetLimit. Instead, the set-aside size can only be specified at start up of the MPS server through the environment variable CUDA_DEVICE_DEFAULT_PERSISTING_L2_CACHE_PERCENTAGE_LIMIT.
An access policy window specifies a contiguous region of global memory and a persistence property
in the L2 cache for accesses within that region.
The code example below shows how to set an L2 persisting access window using a CUDA Stream.
CUDA Stream Example
cudaStreamAttrValue stream_attribute;                                         // Stream level attributes data structure
stream_attribute.accessPolicyWindow.base_ptr  = reinterpret_cast<void*>(ptr); // Global Memory data pointer
stream_attribute.accessPolicyWindow.num_bytes = num_bytes;                    // Number of bytes for persistence access
                                                                              // (Must be less than cudaDeviceProp::accessPolicyMaxWindowSize)
stream_attribute.accessPolicyWindow.hitRatio  = 0.6;                          // Hint for cache hit ratio
stream_attribute.accessPolicyWindow.hitProp   = cudaAccessPropertyPersisting; // Type of access property on cache hit
stream_attribute.accessPolicyWindow.missProp  = cudaAccessPropertyStreaming;  // Type of access property on cache miss

// Set the attributes to a CUDA stream of type cudaStream_t
cudaStreamSetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &stream_attribute);
When a kernel subsequently executes in the CUDA stream, memory accesses within the global memory
extent [ptr..ptr+num_bytes) are more likely to persist in the L2 cache than accesses to other
global memory locations.
L2 persistence can also be set for a CUDA Graph Kernel Node as shown in the example below:
CUDA GraphKernelNode Example
cudaKernelNodeAttrValue node_attribute;                                     // Kernel level attributes data structure
node_attribute.accessPolicyWindow.base_ptr  = reinterpret_cast<void*>(ptr); // Global Memory data pointer
node_attribute.accessPolicyWindow.num_bytes = num_bytes;                    // Number of bytes for persistence access
                                                                            // (Must be less than cudaDeviceProp::accessPolicyMaxWindowSize)
node_attribute.accessPolicyWindow.hitRatio  = 0.6;                          // Hint for cache hit ratio
node_attribute.accessPolicyWindow.hitProp   = cudaAccessPropertyPersisting; // Type of access property on cache hit
node_attribute.accessPolicyWindow.missProp  = cudaAccessPropertyStreaming;  // Type of access property on cache miss

// Set the attributes to a CUDA Graph Kernel node of type cudaGraphNode_t
cudaGraphKernelNodeSetAttribute(node, cudaKernelNodeAttributeAccessPolicyWindow, &node_attribute);
The hitRatio parameter can be used to specify the fraction of accesses that receive the hitProp
property. In both of the examples above, 60% of the memory accesses in the global memory region
[ptr..ptr+num_bytes) have the persisting property and 40% of the memory accesses have the
streaming property. Which specific memory accesses are classified as persisting (the hitProp) is
random with a probability of approximately hitRatio; the probability distribution depends upon the
hardware architecture and the memory extent.
For example, if the L2 set-aside cache size is 16KB and the num_bytes in the accessPolicyWindow
is 32KB:
▶ With a hitRatio of 0.5, the hardware will select, at random, 16KB of the 32KB window to be
designated as persisting and cached in the set-aside L2 cache area.
▶ With a hitRatio of 1.0, the hardware will attempt to cache the whole 32KB window in the set-
aside L2 cache area. Since the set-aside area is smaller than the window, cache lines will be
evicted to keep the most recently used 16KB of the 32KB data in the set-aside portion of the L2
cache.
The hitRatio can therefore be used to avoid thrashing of cache lines and overall reduce the amount
of data moved into and out of the L2 cache.
A hitRatio value below 1.0 can be used to manually control the amount of data that accessPolicyWindows from concurrent CUDA streams can cache in L2. For example, let the L2 set-aside cache size be 16KB; two concurrent kernels in two different CUDA streams, each with a 16KB accessPolicyWindow, and both with hitRatio value 1.0, might evict each others’ cache lines when competing for the shared L2 resource. However, if both accessPolicyWindows have a hitRatio value of 0.5, they will be less likely to evict their own or each others’ persisting cache lines.
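As a sketch of this two-stream scenario (the helper function, stream handles, and buffers are illustrative):

#include <cuda_runtime.h>

// Hypothetical helper: give each of two concurrent streams a 16KB persisting window
// with hitRatio 0.5, so together they roughly fit a 16KB L2 set-aside region.
void set_half_persisting_windows(cudaStream_t streamA, void* bufA,
                                 cudaStream_t streamB, void* bufB)
{
    cudaStreamAttrValue attr = {};
    attr.accessPolicyWindow.base_ptr  = bufA;
    attr.accessPolicyWindow.num_bytes = 16 * 1024;                       // 16KB window
    attr.accessPolicyWindow.hitRatio  = 0.5;                             // ~8KB treated as persisting
    attr.accessPolicyWindow.hitProp   = cudaAccessPropertyPersisting;
    attr.accessPolicyWindow.missProp  = cudaAccessPropertyStreaming;
    cudaStreamSetAttribute(streamA, cudaStreamAttributeAccessPolicyWindow, &attr);

    attr.accessPolicyWindow.base_ptr = bufB;                             // Same policy for the second stream's buffer
    cudaStreamSetAttribute(streamB, cudaStreamAttributeAccessPolicyWindow, &attr);
}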
Three types of access properties are defined for different global memory data accesses:
1. cudaAccessPropertyStreaming: Memory accesses that occur with the streaming property
are less likely to persist in the L2 cache because these accesses are preferentially evicted.
2. cudaAccessPropertyPersisting: Memory accesses that occur with the persisting property
are more likely to persist in the L2 cache because these accesses are preferentially retained in
the set-aside portion of L2 cache.
3. cudaAccessPropertyNormal: This access property forcibly resets previously applied persisting
access property to a normal status. Memory accesses with the persisting property from previ-
ous CUDA kernels may be retained in L2 cache long after their intended use. This persistence-
after-use reduces the amount of L2 cache available to subsequent kernels that do not use the
persisting property. Resetting an access property window with the cudaAccessPropertyNormal property removes the persisting (preferential retention) status of the prior access, as if the prior access had been without an access property.
The following example shows how to set-aside L2 cache for persistent accesses, use the set-aside L2
cache in CUDA kernels via CUDA Stream and then reset the L2 cache.
cudaStream_t stream;
cudaStreamCreate(&stream);                                                                 // Create CUDA stream

cudaDeviceProp prop;                                                                       // CUDA device properties variable
cudaGetDeviceProperties(&prop, device_id);                                                 // Query GPU properties
size_t size = min(int(prop.l2CacheSize * 0.75), prop.persistingL2CacheMaxSize);
cudaDeviceSetLimit(cudaLimitPersistingL2CacheSize, size);                                  // Set-aside 3/4 of L2 cache for persisting accesses or the max allowed

size_t window_size = min(prop.accessPolicyMaxWindowSize, num_bytes);                       // Select minimum of user defined num_bytes and max window size

cudaStreamAttrValue stream_attribute;                                                      // Stream level attributes data structure
stream_attribute.accessPolicyWindow.base_ptr  = reinterpret_cast<void*>(data1);            // Global Memory data pointer
stream_attribute.accessPolicyWindow.num_bytes = window_size;                               // Number of bytes for persistence access
stream_attribute.accessPolicyWindow.hitRatio  = 0.6;                                       // Hint for cache hit ratio
stream_attribute.accessPolicyWindow.hitProp   = cudaAccessPropertyPersisting;              // Persistence Property
stream_attribute.accessPolicyWindow.missProp  = cudaAccessPropertyStreaming;               // Type of access property on cache miss

cudaStreamSetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &stream_attribute);  // Set the attributes to a CUDA Stream

for (int i = 0; i < 10; i++) {
    cuda_kernelA<<<grid_size, block_size, 0, stream>>>(data1);                             // This data1 is used by a kernel multiple times
}                                                                                          // [data1, data1 + window_size) benefits from L2 persistence
cuda_kernelB<<<grid_size, block_size, 0, stream>>>(data1);                                 // A different kernel in the same stream can also benefit
                                                                                           // from the persistence of data1

stream_attribute.accessPolicyWindow.num_bytes = 0;                                         // Setting the window size to 0 disables it
cudaStreamSetAttribute(stream, cudaStreamAttributeAccessPolicyWindow, &stream_attribute);  // Overwrite the access policy attribute of the CUDA Stream
cudaCtxResetPersistingL2Cache();                                                           // Remove any persistent lines in L2

cuda_kernelC<<<grid_size, block_size, 0, stream>>>(data2);                                 // data2 can now benefit from full L2 in normal mode
A persisting L2 cache line from a previous CUDA kernel may persist in L2 long after it has been used.
Hence, a reset to normal for L2 cache is important for streaming or normal memory accesses to utilize
the L2 cache with normal priority. There are three ways a persisting access can be reset to normal
status.
1. Reset a previous persisting memory region with the access property cudaAccessPropertyNormal.
2. Reset all persisting L2 cache lines to normal by calling cudaCtxResetPersistingL2Cache().
3. Untouched lines are eventually reset to normal automatically. Reliance on automatic reset is strongly discouraged because of the undetermined length of time required for automatic reset to occur.
Multiple CUDA kernels executing concurrently in different CUDA streams may have a different access
policy window assigned to their streams. However, the L2 set-aside cache portion is shared among
all these concurrent CUDA kernels. As a result, the net utilization of this set-aside cache portion is
the sum of all the concurrent kernels’ individual use. The benefits of designating memory accesses as
persisting diminish as the volume of persisting accesses exceeds the set-aside L2 cache capacity.
To manage utilization of the set-aside L2 cache portion, an application must consider the following:
▶ Size of L2 set-aside cache.
▶ CUDA kernels that may concurrently execute.
▶ The access policy window for all the CUDA kernels that may concurrently execute.
▶ When and how L2 reset is required to allow normal or streaming accesses to utilize the previously
set-aside L2 cache with equal priority.
Properties related to the L2 cache are part of the cudaDeviceProp struct and can be queried using the CUDA runtime API cudaGetDeviceProperties().
CUDA Device Properties include:
▶ l2CacheSize: The amount of available L2 cache on the GPU.
▶ persistingL2CacheMaxSize: The maximum amount of L2 cache that can be set-aside for per-
sisting memory accesses.
▶ accessPolicyMaxWindowSize: The maximum size of the access policy window.
The L2 set-aside cache size for persisting memory accesses is queried using the CUDA runtime API cudaDeviceGetLimit and set using the CUDA runtime API cudaDeviceSetLimit as a cudaLimit. The maximum value for setting this limit is cudaDeviceProp::persistingL2CacheMaxSize.
enum cudaLimit {
    /* other fields not shown */
    cudaLimitPersistingL2CacheSize
};
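A minimal sketch combining the two calls (the helper name is illustrative):

#include <cuda_runtime.h>

void configure_persisting_l2(int device_id)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, device_id);

    size_t current = 0;
    cudaDeviceGetLimit(&current, cudaLimitPersistingL2CacheSize);   // Query current set-aside size

    cudaDeviceSetLimit(cudaLimitPersistingL2CacheSize,
                       prop.persistingL2CacheMaxSize);              // Request the maximum allowed set-aside
}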
// Invoke kernel
dim3 dimBlock(BLOCK_SIZE, BLOCK_SIZE);
dim3 dimGrid(B.width / dimBlock.x, A.height / dimBlock.y);
MatMulKernel<<<dimGrid, dimBlock>>>(d_A, d_B, d_C);
The following code sample is an implementation of matrix multiplication that does take advantage of
shared memory. In this implementation, each thread block is responsible for computing one square
sub-matrix Csub of C and each thread within the block is responsible for computing one element of
Csub. As illustrated in Figure 9, Csub is equal to the product of two rectangular matrices: the sub-matrix of A of dimension (A.width, block_size) that has the same row indices as Csub, and the sub-matrix of B of dimension (block_size, A.width) that has the same column indices as Csub. In order to fit
into the device’s resources, these two rectangular matrices are divided into as many square matrices of
dimension block_size as necessary and Csub is computed as the sum of the products of these square
matrices. Each of these products is performed by first loading the two corresponding square matrices
from global memory to shared memory with one thread loading one element of each matrix, and then
by having each thread compute one element of the product. Each thread accumulates the result of
each of these products into a register and once done writes the result to global memory.
By blocking the computation this way, we take advantage of fast shared memory and save a lot of
global memory bandwidth since A is only read (B.width / block_size) times from global memory and B
is read (A.height / block_size) times.
The Matrix type from the previous code sample is augmented with a stride field, so that sub-matrices can be efficiently represented with the same type. __device__ functions are used to get and set elements and build any sub-matrix from a matrix.
// Matrices are stored in row-major order:
// M(row, col) = *(M.elements + row * M.stride + col)
typedef struct {
    int width;
    int height;
    int stride;
    float* elements;
} Matrix;

// Get a matrix element
__device__ float GetElement(const Matrix A, int row, int col)
{
    return A.elements[row * A.stride + col];
}
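The other __device__ helpers mentioned above can be sketched as follows, consistent with the Matrix type just defined (BLOCK_SIZE is the tile width assumed by the kernel):

// Set a matrix element
__device__ void SetElement(Matrix A, int row, int col, float value)
{
    A.elements[row * A.stride + col] = value;
}

// Get the BLOCK_SIZE x BLOCK_SIZE sub-matrix Asub of A that is located
// col sub-matrices to the right and row sub-matrices down from the
// upper-left corner of A
__device__ Matrix GetSubMatrix(Matrix A, int row, int col)
{
    Matrix Asub;
    Asub.width    = BLOCK_SIZE;
    Asub.height   = BLOCK_SIZE;
    Asub.stride   = A.stride;
    Asub.elements = &A.elements[A.stride * BLOCK_SIZE * row + BLOCK_SIZE * col];
    return Asub;
}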
__global__ void clusterHist_kernel(int *bins, const int nbins, const int bins_per_block,
                                   const int *__restrict__ input, size_t array_size)
{
    extern __shared__ int smem[];
    namespace cg = cooperative_groups;
    int tid = cg::this_grid().thread_rank();

    // Cluster initialization, size and rank calculation
    cg::cluster_group cluster = cg::this_cluster();

    // ... (shared memory initialization and the histogram computation into the
    //      cluster's distributed shared memory are shown in the full example) ...

    // Cluster synchronization ensures all distributed shared memory operations
    // are completed before any thread block exits
    cluster.sync();

    // Perform global memory histogram, using the local distributed memory histogram
    int *lbins = bins + cluster.block_rank() * bins_per_block;
    for (int i = threadIdx.x; i < bins_per_block; i += blockDim.x)
    {
        atomicAdd(&lbins[i], smem[i]);
    }
}
The above kernel can be launched at runtime with a cluster size that depends on the amount of distributed shared memory required. If the histogram is small enough to fit in the shared memory of just one block, the user can launch the kernel with a cluster size of 1. The code snippet below shows how to launch a cluster kernel dynamically depending on shared memory requirements.
// Launch via extensible launch
{
    cudaLaunchConfig_t config = {0};
    config.gridDim = array_size / threads_per_block;
    config.blockDim = threads_per_block;

    // cluster_size and nbins_per_block are chosen from the histogram size (not shown).
    // Dynamic shared memory size is per block; distributed shared memory size is
    // cluster_size * nbins_per_block * sizeof(int).
    config.dynamicSmemBytes = nbins_per_block * sizeof(int);

    CUDA_CHECK(::cudaFuncSetAttribute((void *)clusterHist_kernel,
                                      cudaFuncAttributeMaxDynamicSharedMemorySize,
                                      config.dynamicSmemBytes));

    cudaLaunchAttribute attribute[1];
    attribute[0].id = cudaLaunchAttributeClusterDimension;
    attribute[0].val.clusterDim.x = cluster_size;
    attribute[0].val.clusterDim.y = 1;
    attribute[0].val.clusterDim.z = 1;
    config.numAttrs = 1;
    config.attrs = attribute;

    cudaLaunchKernelEx(&config, clusterHist_kernel, bins, nbins, nbins_per_block, input, array_size);
}
Note: Page-locked host memory is not cached on non I/O coherent Tegra devices. Also, cuda-
HostRegister() is not supported on non I/O coherent Tegra devices.
The simple zero-copy CUDA sample comes with a detailed document on the page-locked memory APIs.
A block of page-locked memory can be used in conjunction with any device in the system (see Multi-
Device System for more details on multi-device systems), but by default, the benefits of using page-
locked memory described above are only available in conjunction with the device that was current
when the block was allocated (and with all devices sharing the same unified address space, if any, as
described in Unified Virtual Address Space). To make these advantages available to all devices, the
block needs to be allocated by passing the flag cudaHostAllocPortable to cudaHostAlloc() or
page-locked by passing the flag cudaHostRegisterPortable to cudaHostRegister().
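As a minimal sketch of the portable variants (the helper function and buffer sizes are illustrative):

#include <cuda_runtime.h>

// Allocate page-locked memory whose benefits are visible to all devices,
// and register an existing host range the same way.
void allocate_portable_pinned(size_t bytes, void* existing, size_t existing_bytes)
{
    float* pinned = nullptr;
    cudaHostAlloc(&pinned, bytes, cudaHostAllocPortable);        // portable page-locked allocation

    cudaHostRegister(existing, existing_bytes,
                     cudaHostRegisterPortable);                  // page-lock an existing host range

    // ... use the buffers with any device ...

    cudaHostUnregister(existing);
    cudaFreeHost(pinned);
}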
By default page-locked host memory is allocated as cacheable. It can optionally be allocated as write-
combining instead by passing flag cudaHostAllocWriteCombined to cudaHostAlloc(). Write-
combining memory frees up the host’s L1 and L2 cache resources, making more cache available to the
rest of the application. In addition, write-combining memory is not snooped during transfers across
the PCI Express bus, which can improve transfer performance by up to 40%.
Reading from write-combining memory from the host is prohibitively slow, so write-combining memory
should in general be used for memory that the host only writes to.
Using CPU atomic instructions on WC memory should be avoided because not all CPU implementations
guarantee that functionality.
A block of page-locked host memory can also be mapped into the address space of the device by pass-
ing flag cudaHostAllocMapped to cudaHostAlloc() or by passing flag cudaHostRegisterMapped
to cudaHostRegister(). Such a block has therefore in general two addresses: one in host memory
that is returned by cudaHostAlloc() or malloc(), and one in device memory that can be retrieved
using cudaHostGetDevicePointer() and then used to access the block from within a kernel. The
only exception is for pointers allocated with cudaHostAlloc() and when a unified address space is
used for the host and the device as mentioned in Unified Virtual Address Space.
Accessing host memory directly from within a kernel does not provide the same bandwidth as device
memory, but does have some advantages:
▶ There is no need to allocate a block in device memory and copy data between this block and the
block in host memory; data transfers are implicitly performed as needed by the kernel;
▶ There is no need to use streams (see Concurrent Data Transfers) to overlap data transfers with
kernel execution; the kernel-originated data transfers automatically overlap with kernel execu-
tion.
Since mapped page-locked memory is shared between host and device however, the application must
synchronize memory accesses using streams or events (see Asynchronous Concurrent Execution) to
avoid any potential read-after-write, write-after-read, or write-after-write hazards.
To be able to retrieve the device pointer to any mapped page-locked memory, page-locked memory
mapping must be enabled by calling cudaSetDeviceFlags() with the cudaDeviceMapHost flag be-
fore any other CUDA call is performed. Otherwise, cudaHostGetDevicePointer() will return an
error.
cudaHostGetDevicePointer() also returns an error if the device does not support mapped page-
locked host memory. Applications may query this capability by checking the canMapHostMemory de-
vice property (see Device Enumeration), which is equal to 1 for devices that support mapped page-
locked host memory.
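Putting these pieces together, a hedged sketch (the ScaleKernel and helper names are illustrative, and device 0 is assumed):

#include <cuda_runtime.h>

__global__ void ScaleKernel(float* data, int n)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n)
        data[i] *= 2.0f;    // reads and writes go directly to the mapped host memory
}

int run_mapped_example(int n)
{
    // Must be called before any other CUDA call for mapping to be available
    cudaSetDeviceFlags(cudaDeviceMapHost);

    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    if (!prop.canMapHostMemory)
        return -1;                                    // device cannot map page-locked host memory

    float *h_data, *d_data;
    cudaHostAlloc(&h_data, n * sizeof(float), cudaHostAllocMapped);
    cudaHostGetDevicePointer(&d_data, h_data, 0);     // device address of the same block

    ScaleKernel<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaDeviceSynchronize();                          // synchronize before the host reads h_data

    cudaFreeHost(h_data);
    return 0;
}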
Note that atomic functions (see Atomic Functions) operating on mapped page-locked memory are not
atomic from the point of view of the host or other devices.
Also note that CUDA runtime requires that 1-byte, 2-byte, 4-byte, and 8-byte naturally aligned loads
and stores to host memory initiated from the device are preserved as single accesses from the point
of view of the host and other devices. On some platforms, atomics to memory may be broken by
the hardware into separate load and store operations. These component load and store operations
have the same requirements on preservation of naturally aligned accesses. As an example, the CUDA
runtime does not support a PCI Express bus topology where a PCI Express bridge splits 8-byte naturally
aligned writes into two 4-byte writes between the device and the host.
Some CUDA applications may see degraded performance due to memory fence/flush operations wait-
ing on more transactions than those necessitated by the CUDA memory consistency model.
__managed__ int x = 0;
__device__ cuda::atomic<int, cuda::thread_scope_device> a(0);
__managed__ cuda::atomic<int, cuda::thread_scope_system> b(0);

// (Three-thread example, summarized from the discussion below: thread 1 writes x
//  and releases a; thread 2 acquires a and releases b; thread 3 acquires b and
//  asserts that the write to x is visible.)
Consider the example above. The CUDA memory consistency model guarantees that the asserted
condition will be true, so the write to x from thread 1 must be visible to thread 3, before the write to
b from thread 2.
The memory ordering provided by the release and acquire of a is only sufficient to make x visible to
thread 2, not thread 3, as it is a device-scope operation. The system-scope ordering provided by release
and acquire of b, therefore, needs to ensure not only writes issued from thread 2 itself are visible to
thread 3, but also writes from other threads that are visible to thread 2. This is known as cumulativity.
As the GPU cannot know at the time of execution which writes have been guaranteed at the source
level to be visible and which are visible only by chance timing, it must cast a conservatively wide net
for in-flight memory operations.
This sometimes leads to interference: because the GPU is waiting on memory operations it is not
required to at the source level, the fence/flush may take longer than necessary.
Note that fences may occur explicitly as intrinsics or atomics in code, like in the example, or implicitly
to implement synchronizes-with relationships at task boundaries.
A common example is when a kernel is performing computation in local GPU memory, and a parallel
kernel (e.g. from NCCL) is performing communications with a peer. Upon completion, the local ker-
nel will implicitly flush its writes to satisfy any synchronizes-with relationships to downstream work.
This may unnecessarily wait, fully or partially, on slower NVLink or PCIe writes from the communication kernel.
Beginning with Hopper architecture GPUs and CUDA 12.0, the memory synchronization domains fea-
ture provides a way to alleviate such interference. In exchange for explicit assistance from code, the
GPU can reduce the net cast by a fence operation. Each kernel launch is given a domain ID. Writes
and fences are tagged with the ID, and a fence will only order writes matching the fence’s domain. In
the concurrent compute vs communication example, the communication kernels can be placed in a
different domain.
When using domains, code must abide by the rule that ordering or synchronization between distinct
domains on the same GPU requires system-scope fencing. Within a domain, device-scope fencing
remains sufficient. This is necessary for cumulativity as one kernel’s writes will not be encompassed
by a fence issued from a kernel in another domain. In essence, cumulativity is satisfied by ensuring
that cross-domain traffic is flushed to the system scope ahead of time.
Note that this modifies the definition of thread_scope_device. However, because kernels will de-
fault to domain 0 as described below, backward compatibility is maintained.
Domains are accessible via the new launch attributes cudaLaunchAttributeMemSyncDomain and cudaLaunchAttributeMemSyncDomainMap. The former selects between logical domains cudaLaunchMemSyncDomainDefault and cudaLaunchMemSyncDomainRemote, and the latter provides
a mapping from logical to physical domains. The remote domain is intended for kernels performing
remote memory access in order to isolate their memory traffic from local kernels. Note, however, the
selection of a particular domain does not affect what memory access a kernel may legally perform.
The domain count can be queried via device attribute cudaDevAttrMemSyncDomainCount. Hopper
has 4 domains. To facilitate portable code, domains functionality can be used on all devices and CUDA
will report a count of 1 prior to Hopper.
Having logical domains eases application composition. An individual kernel launch at a low level in the
stack, such as from NCCL, can select a semantic logical domain without concern for the surrounding
application architecture. Higher levels can steer logical domains using the mapping. The default value
for the logical domain if it is not set is the default domain, and the default mapping is to map the
default domain to 0 and the remote domain to 1 (on GPUs with more than 1 domain). Specific libraries
may tag launches with the remote domain in CUDA 12.0 and later; for example, NCCL 2.16 will do so.
Together, this provides a beneficial use pattern for common applications out of the box, with no code
changes needed in other components, frameworks, or at application level. An alternative use pattern,
for example in an application using nvshmem or with no clear separation of kernel types, could be to
partition parallel streams. Stream A may map both logical domains to physical domain 0, stream B to
1, and so on.
// Example of launching a kernel with the remote logical domain
cudaLaunchAttribute domainAttr;
domainAttr.id = cudaLaunchAttributeMemSyncDomain;
domainAttr.val.memSyncDomain = cudaLaunchMemSyncDomainRemote;

cudaLaunchConfig_t config;
// Fill out other config fields as usual (gridDim, blockDim, stream, ...)
config.attrs = &domainAttr;
config.numAttrs = 1;

cudaLaunchKernelEx(&config, my_kernel, my_kernel_args);   // my_kernel and its arguments are placeholders
As with other launch attributes, these are exposed uniformly on CUDA streams, individual launches us-
ing cudaLaunchKernelEx, and kernel nodes in CUDA graphs. A typical use would set the mapping at
stream level and the logical domain at launch level (or bracketing a section of stream use) as described
above.
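As a sketch of the stream-level mapping (assuming the cudaLaunchAttributeMemSyncDomainMap attribute and its memSyncDomainMap value fields, as named above; the helper is illustrative):

#include <cuda_runtime.h>

// Steer all launches on this stream to physical domain 1.
void map_stream_to_domain1(cudaStream_t stream)
{
    cudaStreamAttrValue mapAttr = {};
    mapAttr.memSyncDomainMap.default_ = 1;   // logical default domain -> physical domain 1
    mapAttr.memSyncDomainMap.remote   = 1;   // logical remote domain  -> physical domain 1
    cudaStreamSetAttribute(stream, cudaLaunchAttributeMemSyncDomainMap, &mapAttr);
}

Combined with the launch-level logical domain selection shown above, this lets higher levels of an application partition parallel streams across physical domains.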
Both attributes are copied to graph nodes during stream capture. Graphs take both attributes from
the node itself, essentially an indirect way of specifying a physical domain. Domain-related attributes
set on the stream a graph is launched into are not used in execution of the graph.
Concurrent host execution is facilitated through asynchronous library functions that return control
to the host thread before the device completes the requested task. Using asynchronous calls, many
device operations can be queued up together to be executed by the CUDA driver when appropriate de-
vice resources are available. This relieves the host thread of much of the responsibility to manage the
device, leaving it free for other tasks. The following device operations are asynchronous with respect
to the host:
▶ Kernel launches;
▶ Memory copies within a single device’s memory;
▶ Memory copies from host to device of a memory block of 64 KB or less;
▶ Memory copies performed by functions that are suffixed with Async;
▶ Memory set function calls.
Programmers can globally disable asynchronicity of kernel launches for all CUDA applications running
on a system by setting the CUDA_LAUNCH_BLOCKING environment variable to 1. This feature is pro-
vided for debugging purposes only and should not be used as a way to make production software run
reliably.
Kernel launches are synchronous if hardware counters are collected via a profiler (Nsight, Visual Pro-
filer) unless concurrent kernel profiling is enabled. Async memory copies might also be synchronous
if they involve host memory that is not page-locked.
Some devices of compute capability 2.x and higher can execute multiple kernels concurrently. Appli-
cations may query this capability by checking the concurrentKernels device property (see Device
Enumeration), which is equal to 1 for devices that support it.
The maximum number of kernel launches that a device can execute concurrently depends on its com-
pute capability and is listed in Table 21.
A kernel from one CUDA context cannot execute concurrently with a kernel from another CUDA con-
text. The GPU may time slice to provide forward progress to each context. If a user wants to run
kernels from multiple processes simultaneously on the SM, one must enable MPS.
Kernels that use many textures or a large amount of local memory are less likely to execute concur-
rently with other kernels.
Some devices can perform an asynchronous memory copy to or from the GPU concurrently with kernel
execution. Applications may query this capability by checking the asyncEngineCount device property
(see Device Enumeration), which is greater than zero for devices that support it. If host memory is
involved in the copy, it must be page-locked.
It is also possible to perform an intra-device copy simultaneously with kernel execution (on devices
that support the concurrentKernels device property) and/or with copies to or from the device (for
devices that support the asyncEngineCount property). Intra-device copies are initiated using the
standard memory copy functions with destination and source addresses residing on the same device.
Some devices of compute capability 2.x and higher can overlap copies to and from the device. Ap-
plications may query this capability by checking the asyncEngineCount device property (see Device
Enumeration), which is equal to 2 for devices that support it. In order to be overlapped, any host
memory involved in the transfers must be page-locked.
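These capabilities can be queried together, for example (the helper name is illustrative):

#include <cstdio>
#include <cuda_runtime.h>

void print_concurrency_support(int device)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, device);

    std::printf("concurrentKernels: %d\n", prop.concurrentKernels);  // 1 if concurrent kernels are supported
    std::printf("asyncEngineCount:  %d\n", prop.asyncEngineCount);   // >0: copy/kernel overlap, 2: bidirectional copy overlap
}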
6.2.8.5 Streams
Applications manage the concurrent operations described above through streams. A stream is a se-
quence of commands (possibly issued by different host threads) that execute in order. Different
streams, on the other hand, may execute their commands out of order with respect to one another
or concurrently; this behavior is not guaranteed and should therefore not be relied upon for correct-
ness (for example, inter-kernel communication is undefined). The commands issued on a stream may execute when all the dependencies of the command are met. The dependencies could be previously launched commands on the same stream or dependencies from other streams. The successful completion of a synchronize call guarantees that all the commands launched are completed.
A stream is defined by creating a stream object and specifying it as the stream parameter to a se-
quence of kernel launches and host <-> device memory copies. The following code sample creates
two streams and allocates an array hostPtr of float in page-locked memory.
cudaStream_t stream[2];
for (int i = 0; i < 2; ++i)
cudaStreamCreate(&stream[i]);
float* hostPtr;
cudaMallocHost(&hostPtr, 2 * size);
Each of these streams is defined by the following code sample as a sequence of one memory copy
from host to device, one kernel launch, and one memory copy from device to host:
for (int i = 0; i < 2; ++i) {
cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
size, cudaMemcpyHostToDevice, stream[i]);
MyKernel <<<100, 512, 0, stream[i]>>>
(outputDevPtr + i * size, inputDevPtr + i * size, size);
cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
size, cudaMemcpyDeviceToHost, stream[i]);
}
Each stream copies its portion of input array hostPtr to array inputDevPtr in device memory, pro-
cesses inputDevPtr on the device by calling MyKernel(), and copies the result outputDevPtr back
to the same portion of hostPtr. Overlapping Behavior describes how the streams overlap in this ex-
ample depending on the capability of the device. Note that hostPtr must point to page-locked host
memory for any overlap to occur.
Streams are released by calling cudaStreamDestroy().
for (int i = 0; i < 2; ++i)
cudaStreamDestroy(stream[i]);
In case the device is still doing work in the stream when cudaStreamDestroy() is called, the function
will return immediately and the resources associated with the stream will be released automatically
once the device has completed all work in the stream.
Kernel launches and host <-> device memory copies that do not specify any stream parameter, or
equivalently that set the stream parameter to zero, are issued to the default stream. They are therefore
executed in order.
For code that is compiled using the --default-stream per-thread compilation flag (or that defines
the CUDA_API_PER_THREAD_DEFAULT_STREAM macro before including CUDA headers (cuda.h and
cuda_runtime.h)), the default stream is a regular stream and each host thread has its own default
stream.
For code that is compiled using the --default-stream legacy compilation flag, the default stream
is a special stream called the NULL stream and each device has a single NULL stream used for all
host threads. The NULL stream is special as it causes implicit synchronization as described in Implicit
Synchronization.
For code that is compiled without specifying a --default-stream compilation flag,
--default-stream legacy is assumed as the default.
There are various ways to explicitly synchronize streams with each other.
cudaDeviceSynchronize() waits until all preceding commands in all streams of all host threads
have completed.
cudaStreamSynchronize() takes a stream as a parameter and waits until all preceding commands in the given stream have completed. It can be used to synchronize the host with a specific stream, allowing other streams to continue executing on the device.
cudaStreamWaitEvent() takes a stream and an event as parameters (see Events for a description of events) and makes all the commands added to the given stream after the call to cudaStreamWaitEvent() delay their execution until the given event has completed.
cudaStreamQuery() provides applications with a way to know if all preceding commands in a stream have completed.
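As a minimal sketch (reusing stream[], MyKernel, and the device buffers from the stream example above; the Consumer kernel is illustrative), an event recorded in one stream can gate work in the other:

cudaEvent_t event;
cudaEventCreate(&event);

MyKernel<<<100, 512, 0, stream[0]>>>(outputDevPtr, inputDevPtr, size); // producer work
cudaEventRecord(event, stream[0]);                                     // mark its completion

cudaStreamWaitEvent(stream[1], event, 0);                              // stream[1] waits for the event
Consumer<<<100, 512, 0, stream[1]>>>(outputDevPtr, size);              // runs only after the producer

cudaStreamSynchronize(stream[1]);                                      // host waits for stream[1] only
cudaEventDestroy(event);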
Two commands from different streams cannot run concurrently if any one of the following operations
is issued in-between them by the host thread:
▶ a page-locked host memory allocation,
▶ a device memory allocation,
▶ a device memory set,
▶ a memory copy between two addresses to the same device memory,
▶ any CUDA command to the NULL stream,
▶ a switch between the L1/shared memory configurations described in Compute Capability 7.x.
Operations that require a dependency check include any other commands within the same stream as
the launch being checked and any call to cudaStreamQuery() on that stream. Therefore, applications
should follow these guidelines to improve their potential for concurrent kernel execution:
▶ All independent operations should be issued before dependent operations,
▶ Synchronization of any kind should be delayed as long as possible.
The amount of execution overlap between two streams depends on the order in which the commands
are issued to each stream and whether or not the device supports overlap of data transfer and ker-
nel execution (see Overlap of Data Transfer and Kernel Execution), concurrent kernel execution (see
Concurrent Kernel Execution), and/or concurrent data transfers (see Concurrent Data Transfers).
For example, on devices that do not support concurrent data transfers, the two streams of the code
sample of Creation and Destruction do not overlap at all because the memory copy from host to device
is issued to stream[1] after the memory copy from device to host is issued to stream[0], so it can only
start once the memory copy from device to host issued to stream[0] has completed. If the code is
rewritten the following way (and assuming the device supports overlap of data transfer and kernel
execution)
for (int i = 0; i < 2; ++i)
cudaMemcpyAsync(inputDevPtr + i * size, hostPtr + i * size,
size, cudaMemcpyHostToDevice, stream[i]);
for (int i = 0; i < 2; ++i)
MyKernel<<<100, 512, 0, stream[i]>>>
(outputDevPtr + i * size, inputDevPtr + i * size, size);
for (int i = 0; i < 2; ++i)
cudaMemcpyAsync(hostPtr + i * size, outputDevPtr + i * size,
size, cudaMemcpyDeviceToHost, stream[i]);
then the memory copy from host to device issued to stream[1] overlaps with the kernel launch issued
to stream[0].
On devices that do support concurrent data transfers, the two streams of the code sample of Creation
and Destruction do overlap: The memory copy from host to device issued to stream[1] overlaps with
the memory copy from device to host issued to stream[0] and even with the kernel launch issued to
stream[0] (assuming the device supports overlap of data transfer and kernel execution).
The runtime provides a way to insert a CPU function call at any point into a stream via cudaLaunchHostFunc(). The provided function is executed on the host once all commands issued to the stream before the callback have completed.
The following code sample adds the host function MyCallback to each of two streams after issuing a
host-to-device memory copy, a kernel launch and a device-to-host memory copy into each stream. The
function will begin execution on the host after each of the device-to-host memory copies completes.
void CUDART_CB MyCallback(void *data){
    printf("Inside callback %zu\n", (size_t)data);
}
...
for (size_t i = 0; i < 2; ++i) {
    cudaMemcpyAsync(devPtrIn[i], hostPtr[i], size, cudaMemcpyHostToDevice, stream[i]);
    MyKernel<<<100, 512, 0, stream[i]>>>(devPtrOut[i], devPtrIn[i], size);
    cudaMemcpyAsync(hostPtr[i], devPtrOut[i], size, cudaMemcpyDeviceToHost, stream[i]);
    cudaLaunchHostFunc(stream[i], MyCallback, (void*)i);
}
The commands that are issued in a stream after a host function do not start executing before the
function has completed.
A host function enqueued into a stream must not make CUDA API calls (directly or indirectly), as it might end up waiting on itself if it makes such a call, leading to a deadlock.
The Programmatic Dependent Launch mechanism allows for a dependent secondary kernel to launch
before the primary kernel it depends on in the same CUDA stream has finished executing. Available
starting with devices of compute capability 9.0, this technique can provide performance benefits when
the secondary kernel can complete significant work that does not depend on the results of the primary
kernel.
6.2.8.6.1 Background
A CUDA application utilizes the GPU by launching and executing multiple kernels on it. A typical GPU
activity timeline is shown in Figure 10.
Here, secondary_kernel is launched after primary_kernel finishes its execution. Serialized execution is usually necessary because secondary_kernel depends on result data produced by primary_kernel. If secondary_kernel has no dependency on primary_kernel, both of them can be launched concurrently by using CUDA streams. Even if secondary_kernel is dependent on primary_kernel, there is some potential for concurrent execution. For example, almost all the kernels
have some sort of preamble section during which tasks such as zeroing buffers or loading constant
values are performed.
Figure 11 demonstrates the portion of secondary_kernel that could be executed concurrently with-
out impacting the application. Note that concurrent launch also allows us to hide the launch latency
of secondary_kernel behind the execution of primary_kernel.
The concurrent launch and execution of secondary_kernel shown in Figure 12 is achievable using
Programmatic Dependent Launch.
Programmatic Dependent Launch introduces changes to the CUDA kernel launch APIs as explained in
following section. These APIs require at least compute capability 9.0 to provide overlapping execution.
In Programmatic Dependent Launch, a primary and a secondary kernel are launched in the same CUDA
stream. The primary kernel should execute cudaTriggerProgrammaticLaunchCompletion with
all thread blocks when it’s ready for the secondary kernel to launch. The secondary kernel must be
launched using the extensible launch API as shown.
__global__ void primary_kernel() {
    // Initial work that should finish before starting the secondary kernel

    // Trigger the secondary kernel
    cudaTriggerProgrammaticLaunchCompletion();

    // Work that can coincide with the secondary kernel
}

__global__ void secondary_kernel() {
    // Independent work

    // Will block until all primary kernels the secondary kernel is dependent on have
    // completed and flushed results to global memory
    cudaGridDependencySynchronize();

    // Dependent work
}
cudaLaunchAttribute attribute[1];
attribute[0].id = cudaLaunchAttributeProgrammaticStreamSerialization;
attribute[0].val.programmaticStreamSerializationAllowed = 1;
configSecondary.attrs = attribute;
configSecondary.numAttrs = 1;

cudaLaunchKernelEx(&configSecondary, secondary_kernel);
In either case, the secondary thread blocks might launch before data written by the primary kernel
is visible. As such, when the secondary kernel is configured with Programmatic Dependent Launch, it
must always use cudaGridDependencySynchronize or other means to verify that the result data
from the primary is available.
Please note that these methods provide the opportunity for the primary and secondary kernels to
execute concurrently, however this behavior is opportunistic and not guaranteed to lead to concurrent
kernel execution. Reliance on concurrent execution in this manner is unsafe and can lead to deadlock.
Programmatic Dependent Launch can be used in CUDA Graphs via stream capture or directly via edge
data. To program this feature in a CUDA Graph with edge data, use a cudaGraphDependencyType
value of cudaGraphDependencyTypeProgrammatic on an edge connecting two kernel nodes. This
edge type makes the upstream kernel visible to a cudaGridDependencySynchronize() in the downstream kernel. This type must be used with an outgoing port of either cudaGraphKernelNodePortLaunchCompletion or cudaGraphKernelNodePortProgrammatic.
The resulting graph equivalents for stream capture are as follows:
▶ Capturing a launch with cudaLaunchAttributeProgrammaticStreamSerialization (programmaticStreamSerializationAllowed = 1) corresponds to edgeData.from_port = cudaGraphKernelNodePortProgrammatic.
▶ Capturing a launch with cudaLaunchAttributeProgrammaticEvent and triggerAtBlockStart = 0 corresponds to edgeData.from_port = cudaGraphKernelNodePortProgrammatic.
▶ Capturing a launch with cudaLaunchAttributeProgrammaticEvent and triggerAtBlockStart = 1 corresponds to edgeData.from_port = cudaGraphKernelNodePortLaunchCompletion.
In all cases, the resulting edge uses the cudaGraphDependencyTypeProgrammatic dependency type.
CUDA Graphs present a new model for work submission in CUDA. A graph is a series of operations,
such as kernel launches, connected by dependencies, which is defined separately from its execution.
This allows a graph to be defined once and then launched repeatedly. Separating out the definition
of a graph from its execution enables a number of optimizations: first, CPU launch costs are reduced
compared to streams, because much of the setup is done in advance; second, presenting the whole
workflow to CUDA enables optimizations which might not be possible with the piecewise work sub-
mission mechanism of streams.
To see the optimizations possible with graphs, consider what happens in a stream: when you place a
kernel into a stream, the host driver performs a sequence of operations in preparation for the execu-
tion of the kernel on the GPU. These operations, necessary for setting up and launching the kernel,
are an overhead cost which must be paid for each kernel that is issued. For a GPU kernel with a short
execution time, this overhead cost can be a significant fraction of the overall end-to-end execution
time.
Work submission using graphs is separated into three distinct stages: definition, instantiation, and
execution.
▶ During the definition phase, a program creates a description of the operations in the graph along
with the dependencies between them.
▶ Instantiation takes a snapshot of the graph template, validates it, and performs much of the
setup and initialization of work with the aim of minimizing what needs to be done at launch. The
resulting instance is known as an executable graph.
▶ An executable graph may be launched into a stream, similar to any other CUDA work. It may be
launched any number of times without repeating the instantiation.
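A minimal sketch of instantiation and repeated launch, assuming a graph obtained from the stream capture shown later in this section (graph and stream come from that example):

cudaGraphExec_t graphExec;

// Instantiation: validate the graph template and prepare an executable graph
cudaGraphInstantiate(&graphExec, graph, 0);

// Execution: launch the executable graph, repeatedly if desired
for (int i = 0; i < 10; ++i)
    cudaGraphLaunch(graphExec, stream);
cudaStreamSynchronize(stream);

cudaGraphExecDestroy(graphExec);
cudaGraphDestroy(graph);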
An operation forms a node in a graph. The dependencies between the operations are the edges. These
dependencies constrain the execution sequence of the operations.
An operation may be scheduled at any time once the nodes on which it depends are complete. Schedul-
ing is left up to the CUDA system.
A node can be one of several operation types, including:
▶ conditional node
▶ child graph: To execute a separate nested graph, as shown in the following figure.
CUDA 12.3 introduced edge data on CUDA Graphs. Edge data modifies a dependency specified by an
edge and consists of three parts: an outgoing port, an incoming port, and a type. An outgoing port
specifies when an associated edge is triggered. An incoming port specifies what portion of a node is
dependent on an associated edge. A type modifies the relation between the endpoints.
Port values are specific to node type and direction, and edge types may be restricted to specific node
types. In all cases, zero-initialized edge data represents default behavior. Outgoing port 0 waits on an
entire task, incoming port 0 blocks an entire task, and edge type 0 is associated with a full dependency
with memory synchronizing behavior.
Edge data is optionally specified in various graph APIs via a parallel array to the associated nodes. If
it is omitted as an input parameter, zero-initialized data is used. If it is omitted as an output (query)
parameter, the API accepts this if the edge data being ignored is all zero-initialized, and returns cudaErrorLossyQuery if the call would discard information.
Edge data is also available in some stream capture APIs: cudaStreamBeginCaptureToGraph(), cudaStreamGetCaptureInfo(), and cudaStreamUpdateCaptureDependencies(). In these cases,
there is not yet a downstream node. The data is associated with a dangling edge (half edge) which
will either be connected to a future captured node or discarded at termination of stream capture.
Note that some edge types do not wait on full completion of the upstream node. These edges are
ignored when considering if a stream capture has been fully rejoined to the origin stream, and cannot
be discarded at the end of capture. See Creating a Graph Using Stream Capture.
Currently, no node types define additional incoming ports, and only kernel nodes define additional out-
going ports. There is one non-default dependency type, cudaGraphDependencyTypeProgrammatic,
which enables Programmatic Dependent Launch between two kernel nodes.
Graphs can be created via two mechanisms: the explicit graph API and stream capture.
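For illustration, a minimal sketch of the explicit-API path might look like the following; it assumes an argument-free kernel myKernel and an existing stream, and is not a complete application. The stream-capture mechanism is described next.

cudaGraph_t graph;
cudaGraphExec_t graphExec;
cudaGraphNode_t kernelNode;
cudaGraphCreate(&graph, 0);

// Describe a kernel node: myKernel launched with 1 block of 256 threads, no arguments
cudaKernelNodeParams nodeParams = {};
nodeParams.func = (void *)myKernel;
nodeParams.gridDim = dim3(1, 1, 1);
nodeParams.blockDim = dim3(256, 1, 1);
nodeParams.kernelParams = nullptr;

// Add the node with no dependencies, then instantiate and launch the graph
cudaGraphAddKernelNode(&kernelNode, graph, nullptr, 0, &nodeParams);
cudaGraphInstantiate(&graphExec, graph, nullptr, nullptr, 0);
cudaGraphLaunch(graphExec, stream);
cudaStreamSynchronize(stream);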
Stream capture provides a mechanism to create a graph from existing stream-based APIs. A section
of code which launches work into streams, including existing code, can be bracketed with calls to
cudaStreamBeginCapture() and cudaStreamEndCapture(). See below.
cudaGraph_t graph;
cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
// Any work launched into the stream here (kernels, memcpys, library calls, ...)
// is captured into the graph instead of being executed
cudaStreamEndCapture(stream, &graph);
Stream capture can handle cross-stream dependencies expressed with cudaEventRecord() and cu-
daStreamWaitEvent(), provided the event being waited upon was recorded into the same capture
graph.
When an event is recorded in a stream that is in capture mode, it results in a captured event. A captured
event represents a set of nodes in a capture graph.
When a captured event is waited on by a stream, it places the stream in capture mode if it is not already,
and the next item in the stream will have additional dependencies on the nodes in the captured event.
The two streams are then being captured to the same capture graph.
When cross-stream dependencies are present in stream capture, cudaStreamEndCapture() must
still be called in the same stream where cudaStreamBeginCapture() was called; this is the origin
stream. Any other streams which are being captured to the same capture graph, due to event-based
dependencies, must also be joined back to the origin stream. This is illustrated below. All streams being
captured to the same capture graph are taken out of capture mode upon cudaStreamEndCapture().
Failure to rejoin to the origin stream will result in failure of the overall capture operation.
// stream1 is the origin stream
cudaStreamBeginCapture(stream1, cudaStreamCaptureModeGlobal);
// ... capture work in stream1 and in other streams forked from it via events ...
cudaStreamEndCapture(stream1, &graph);
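The elided middle portion of such a capture sequence might look like the following sketch; it assumes stream2, event1, and event2 have already been created, and that kernel_A through kernel_D are placeholder kernels taking no arguments:

kernel_A<<<1, 256, 0, stream1>>>();
// Fork: stream2 joins the capture by waiting on an event recorded in stream1
cudaEventRecord(event1, stream1);
cudaStreamWaitEvent(stream2, event1, 0);
kernel_B<<<1, 256, 0, stream1>>>();
kernel_C<<<1, 256, 0, stream2>>>();   // stream2 is now captured to the same graph
// Join: record an event in stream2 and wait on it in the origin stream
cudaEventRecord(event2, stream2);
cudaStreamWaitEvent(stream1, event2, 0);
kernel_D<<<1, 256, 0, stream1>>>();
// cudaStreamEndCapture(stream1, &graph) then takes both streams out of capture mode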
Note: When a stream is taken out of capture mode, the next non-captured item in the stream (if any)
will still have a dependency on the most recent prior non-captured item, despite intermediate items
having been removed.
It is invalid to synchronize or query the execution status of a stream which is being captured or a
captured event, because they do not represent items scheduled for execution. It is also invalid to
query the execution status of or synchronize a broader handle which encompasses an active stream
capture, such as a device or context handle when any associated stream is in capture mode.
When any stream in the same context is being captured, and it was not created with cudaStream-
NonBlocking, any attempted use of the legacy stream is invalid. This is because the legacy stream
handle at all times encompasses these other streams; enqueueing to the legacy stream would cre-
ate a dependency on the streams being captured, and querying it or synchronizing it would query or
synchronize the streams being captured.
It is therefore also invalid to call synchronous APIs in this case. Synchronous APIs, such as cudaMem-
cpy(), enqueue work to the legacy stream and synchronize it before returning.
Note: As a general rule, when a dependency relation would connect something that is captured with
something that was not captured and instead enqueued for execution, CUDA prefers to return an error
rather than ignore the dependency. An exception is made for placing a stream into or out of capture
mode; this severs a dependency relation between items added to the stream immediately before and
after the mode transition.
It is invalid to merge two separate capture graphs by waiting on a captured event from a stream which
is being captured and is associated with a different capture graph than the event. It is invalid to wait
on a non-captured event from a stream which is being captured without specifying the cudaEventWai-
tExternal flag.
A small number of APIs that enqueue asynchronous operations into streams are not currently sup-
ported in graphs and will return an error if called with a stream which is being captured, such as cud-
aStreamAttachMemAsync().
6.2.8.7.3.3 Invalidation
When an invalid operation is attempted during stream capture, any associated capture graphs are
invalidated. When a capture graph is invalidated, further use of any streams which are being captured
or captured events associated with the graph is invalid and will return an error, until stream capture
is ended with cudaStreamEndCapture(). This call will take the associated streams out of capture
mode, but will also return an error value and a NULL graph.
CUDA User Objects can be used to help manage the lifetime of resources used by asynchronous work
in CUDA. In particular, this feature is useful for CUDA Graphs and stream capture.
Various resource management schemes are not compatible with CUDA graphs. Consider for example
an event-based pool or a synchronous-create, asynchronous-destroy scheme.
// Library API with pool allocation
void libraryWork(cudaStream_t stream) {
    auto &resource = pool.claimTemporaryResource();
    resource.waitOnReadyEventInStream(stream);
    launchWork(stream, resource);
    resource.recordReadyEvent(stream);
}
These schemes are difficult with CUDA graphs because of the non-fixed pointer or handle for the
resource which requires indirection or graph update, and the synchronous CPU code needed each time
the work is submitted. They also do not work with stream capture if these considerations are hidden
from the caller of the library, and because disallowed APIs are used during capture. Various solutions
exist such as exposing the resource to the caller. CUDA user objects present another approach.
A CUDA user object associates a user-specified destructor callback with an internal refcount, similar to
C++ shared_ptr. References may be owned by user code on the CPU and by CUDA graphs. Note that
for user-owned references, unlike C++ smart pointers, there is no object representing the reference;
users must track user-owned references manually. A typical use case would be to immediately move
the sole user-owned reference to a CUDA graph after the user object is created.
When a reference is associated to a CUDA graph, CUDA will manage the graph operations automat-
ically. A cloned cudaGraph_t retains a copy of every reference owned by the source cudaGraph_t,
with the same multiplicity. An instantiated cudaGraphExec_t retains a copy of every reference in
the source cudaGraph_t. When a cudaGraphExec_t is destroyed without being synchronized, the
references are retained until the execution is completed.
Here is an example use.
cudaGraph_t graph;            // Preexisting graph

Object *object = new Object;  // C++ object with possibly nontrivial destructor
cudaUserObject_t cuObject;
cudaUserObjectCreate(
    &cuObject,
    object,                         // Here we use a CUDA-provided template wrapper for this API,
                                    // which supplies a callback to delete the C++ object pointer
    1,                              // Initial refcount
    cudaUserObjectNoDestructorSync  // Acknowledge that the callback cannot be
                                    // waited on via CUDA
);
cudaGraphRetainUserObject(
    graph,
    cuObject,
    1,                        // Number of references
    cudaGraphUserObjectMove   // Transfer a reference owned by the caller (do
                              // not modify the total reference count)
);
// No more references owned by this thread; no need to call release API
cudaGraphExec_t graphExec;
cudaGraphInstantiate(&graphExec, graph, nullptr, nullptr, 0);  // Will retain a
                                                               // new reference
cudaGraphDestroy(graph);          // graphExec still owns a reference
cudaGraphLaunch(graphExec, 0);    // Async launch has access to the user objects
cudaGraphExecDestroy(graphExec);  // Launch is not synchronized; the release
                                  // will be deferred if needed
cudaStreamSynchronize(0);         // After the launch is synchronized, the remaining
                                  // reference is released and the destructor will
                                  // execute. Note this happens asynchronously.
// If the destructor callback had signaled a synchronization object, it would
// be safe to wait on it at this point.
References owned by graphs in child graph nodes are associated to the child graphs, not the parents. If
a child graph is updated or deleted, the references change accordingly. If an executable graph or child
graph is updated with cudaGraphExecUpdate or cudaGraphExecChildGraphNodeSetParams, the
references in the new source graph are cloned and replace the references in the target graph. In either
case, if previous launches are not synchronized, any references which would be released are held until
the launches have finished executing.
There is not currently a mechanism to wait on user object destructors via a CUDA API. Users may signal
a synchronization object manually from the destructor code. In addition, it is not legal to call CUDA
APIs from the destructor, similar to the restriction on cudaLaunchHostFunc. This is to avoid blocking
a CUDA internal shared thread and preventing forward progress. It is legal to signal another thread to
perform an API call, if the dependency is one way and the thread doing the call cannot block forward
progress of CUDA work.
User objects are created with cudaUserObjectCreate, which is a good starting point to browse re-
lated APIs.
Work submission using graphs is separated into three distinct stages: definition, instantiation, and ex-
ecution. In situations where the workflow is not changing, the overhead of definition and instantiation
can be amortized over many executions, and graphs provide a clear advantage over streams.
A graph is a snapshot of a workflow, including kernels, parameters, and dependencies, in order to replay
it as rapidly and efficiently as possible. In situations where the workflow changes the graph becomes
out of date and must be modified. Major changes to graph structure such as topology or types of
nodes will require re-instantiation of the source graph because various topology-related optimization
techniques must be re-applied.
The cost of repeated instantiation can reduce the overall performance benefit from graph execution,
but it is common for only node parameters, such as kernel parameters and cudaMemcpy addresses,
to change while graph topology remains the same. For this case, CUDA provides a lightweight mecha-
nism known as “Graph Update,” which allows certain node parameters to be modified in-place without
having to rebuild the entire graph. This is much more efficient than re-instantiation.
Updates will take effect the next time the graph is launched, so they will not impact previous graph
launches, even if they are running at the time of the update. A graph may be updated and relaunched
repeatedly, so multiple updates/launches can be queued on a stream.
CUDA provides two mechanisms for updating instantiated graph parameters, whole graph update and
individual node update. Whole graph update allows the user to supply a topologically identical cud-
aGraph_t object whose nodes contain updated parameters. Individual node update allows the user
to explicitly update the parameters of individual nodes. Using an updated cudaGraph_t is more con-
venient when a large number of nodes are being updated, or when the graph topology is unknown to
the caller (for example, the graph resulted from stream capture of a library call). Using individual node update
is preferred when the number of changes is small and the user has the handles to the nodes requiring
updates. Individual node update skips the topology checks and comparisons for unchanged nodes, so
it can be more efficient in many cases.
CUDA also provides a mechanism for enabling and disabling individual nodes without affecting their
current parameters.
The following sections explain each approach in more detail.
Kernel nodes:
▶ The owning context of the function cannot change.
▶ A node whose function originally did not use CUDA dynamic parallelism cannot be updated to a
function which uses CUDA dynamic parallelism.
cudaMemset and cudaMemcpy nodes:
▶ The CUDA device(s) to which the operand(s) was allocated/mapped cannot change.
▶ The source/destination memory must be allocated from the same context as the original
source/destination memory.
▶ Only 1D cudaMemset/cudaMemcpy nodes can be changed.
Additional memcpy node restrictions:
▶ Changing either the source or destination memory type (i.e., cudaPitchedPtr, cudaArray_t,
etc.), or the type of transfer (i.e., cudaMemcpyKind) is not supported.
cudaGraphExecUpdate() allows an instantiated graph (the “original graph”) to be updated with the
parameters from a topologically identical graph (the “updating” graph). The topology of the updating
graph must be identical to the original graph used to instantiate the cudaGraphExec_t. In addition,
the order in which the dependencies are specified must match. Finally, CUDA needs to consistently
order the sink nodes (nodes with no outgoing edges). CUDA relies on the order of specific API calls to
achieve consistent sink node ordering.
More explicitly, adhering to the following rules will cause cudaGraphExecUpdate() to pair the nodes in
the original graph and the updating graph deterministically:
1. For any capturing stream, the API calls operating on that stream must be made in the same order,
including event wait and other API calls not directly corresponding to node creation.
2. The API calls which directly manipulate a given graph node’s incoming edges (including captured
stream APIs, node add APIs, and edge addition / removal APIs) must be made in the same or-
der. Moreover, when dependencies are specified in arrays to these APIs, the order in which the
dependencies are specified inside those arrays must match.
3. Sink nodes must be consistently ordered. Sink nodes are nodes without dependent nodes / out-
going edges in the final graph at the time of the cudaGraphExecUpdate() invocation. The fol-
lowing operations affect sink node ordering (if present) and must (as a combined set) be made
in the same order:
▶ Node add APIs resulting in a sink node.
▶ Edge removal resulting in a node becoming a sink node.
▶ cudaStreamUpdateCaptureDependencies(), if it removes a sink node from a capturing
stream’s dependency set.
▶ cudaStreamEndCapture().
The following example shows how the API could be used to update an instantiated graph:
cudaGraphExec_t graphExec = NULL;
for (int i = 0; i < 10; i++) {
    cudaGraph_t graph;
    cudaGraphExecUpdateResultInfo updateResult;
    // Create the graph, in this example via stream capture of a user-defined,
    // stream-based workload
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    do_cuda_work(stream);
    cudaStreamEndCapture(stream, &graph);
    // If an executable graph already exists, try to update it in place and
    // avoid the instantiation overhead
    if (graphExec != NULL)
        cudaGraphExecUpdate(graphExec, graph, &updateResult);
    // Instantiate on the first iteration or whenever the update fails
    if (graphExec == NULL || updateResult.result != cudaGraphExecUpdateSuccess) {
        if (graphExec != NULL) cudaGraphExecDestroy(graphExec);
        cudaGraphInstantiate(&graphExec, graph, NULL, NULL, 0);
    }
    cudaGraphDestroy(graph);
    cudaGraphLaunch(graphExec, stream);
    cudaStreamSynchronize(stream);
}
A typical workflow is to create the initial cudaGraph_t using either the stream capture or graph API.
The cudaGraph_t is then instantiated and launched as normal. After the initial launch, a new cud-
aGraph_t is created using the same method as the initial graph and cudaGraphExecUpdate() is
called. If the graph update is successful, indicated by the updateResult parameter in the above
example, the updated cudaGraphExec_t is launched. If the update fails for any reason, the cud-
aGraphExecDestroy() and cudaGraphInstantiate() are called to destroy the original cuda-
GraphExec_t and instantiate a new one.
It is also possible to update the cudaGraph_t nodes directly (for example, using cudaGraphKernelNodeSet-
Params()) and subsequently update the cudaGraphExec_t; however, it is more efficient to use the
explicit node update APIs covered in the next section.
Conditional handle flags and default values are updated as part of the graph update.
Please see the Graph API for more information on usage and current limitations.
Instantiated graph node parameters can be updated directly. This eliminates the overhead of instanti-
ation as well as the overhead of creating a new cudaGraph_t. If the number of nodes requiring update
is small relative to the total number of nodes in the graph, it is better to update the nodes individually.
The following methods are available for updating cudaGraphExec_t nodes:
▶ cudaGraphExecKernelNodeSetParams()
▶ cudaGraphExecMemcpyNodeSetParams()
▶ cudaGraphExecMemsetNodeSetParams()
▶ cudaGraphExecHostNodeSetParams()
▶ cudaGraphExecChildGraphNodeSetParams()
▶ cudaGraphExecEventRecordNodeSetEvent()
▶ cudaGraphExecEventWaitNodeSetEvent()
▶ cudaGraphExecExternalSemaphoresSignalNodeSetParams()
▶ cudaGraphExecExternalSemaphoresWaitNodeSetParams()
Please see the Graph API for more information on usage and current limitations.
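As an illustrative sketch of individual node update (assuming kernelNode was saved when the graph was built and newKernelArgs is a hypothetical array of pointers to the updated kernel arguments):

// Update only one kernel node of the instantiated graph
cudaKernelNodeParams newParams = {};
newParams.func = (void *)myKernel;
newParams.gridDim = dim3(64, 1, 1);
newParams.blockDim = dim3(256, 1, 1);
newParams.kernelParams = newKernelArgs;

cudaGraphExecKernelNodeSetParams(graphExec, kernelNode, &newParams);
cudaGraphLaunch(graphExec, stream);   // the update takes effect on this launch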
Kernel, memset and memcpy nodes in an instantiated graph can be enabled or disabled using the
cudaGraphNodeSetEnabled() API. This allows the creation of a graph which contains a superset of the
desired functionality which can be customized for each launch. The enable state of a node can be
queried using the cudaGraphNodeGetEnabled() API.
A disabled node is functionally equivalent to an empty node until it is reenabled. Node parameters are not
affected by enabling/disabling a node. Enable state is unaffected by individual node update or whole
graph update with cudaGraphExecUpdate(). Parameter updates while the node is disabled will take
effect when the node is reenabled.
The following methods are available for enabling/disabling cudaGraphExec_t nodes, as well as query-
ing their status:
▶ cudaGraphNodeSetEnabled()
▶ cudaGraphNodeGetEnabled()
Please see the Graph API for more information on usage and current limitations.
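A brief sketch of these enable/disable APIs (assuming graphExec and a saved node handle node):

unsigned int isEnabled;
cudaGraphNodeSetEnabled(graphExec, node, 0);           // disable: node behaves like an empty node
cudaGraphLaunch(graphExec, stream);                    // launch without the node's work
cudaGraphNodeSetEnabled(graphExec, node, 1);           // re-enable with its original parameters
cudaGraphNodeGetEnabled(graphExec, node, &isEnabled);  // isEnabled is now 1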
cudaGraph_t objects are not thread-safe. It is the responsibility of the user to ensure that multiple
threads do not concurrently access the same cudaGraph_t.
A cudaGraphExec_t cannot run concurrently with itself. A launch of a cudaGraphExec_t will be
ordered after previous launches of the same executable graph.
Graph execution is done in streams for ordering with other asynchronous work. However, the stream
is for ordering only; it does not constrain the internal parallelism of the graph, nor does it affect where
graph nodes execute.
There are many workflows which need to make data-dependent decisions during runtime and execute
different operations depending on those decisions. Rather than offloading this decision-making pro-
cess to the host, which may require a round-trip from the device, users may prefer to perform it on
the device. To that end, CUDA provides a mechanism to launch graphs from the device.
Device graph launch provides a convenient way to perform dynamic control flow from the device, be
it something as simple as a loop or as complex as a device-side work scheduler. This functionality is
only available on systems which support unified addressing.
Graphs which can be launched from the device will henceforth be referred to as device graphs, and
graphs which cannot be launched from the device will be referred to as host graphs.
Device graphs can be launched from both the host and device, whereas host graphs can only be
launched from the host. Unlike host launches, launching a device graph from the device while a previ-
ous launch of the graph is running will result in an error, returning cudaErrorInvalidValue; there-
fore, a device graph cannot be launched twice from the device at the same time. Launching a device
graph from the host and device simultaneously will result in undefined behavior.
In order for a graph to be launched from the device, it must be instantiated explicitly for device
launch. This is achieved by passing the cudaGraphInstantiateFlagDeviceLaunch flag to the cud-
aGraphInstantiate() call. As is the case for host graphs, device graph structure is fixed at time
of instantiation and cannot be updated without re-instantiation, and instantiation can only be per-
formed on the host. In order for a graph to be able to be instantiated for device launch, it must adhere
to various requirements.
General requirements:
▶ The graph’s nodes must all reside on a single device.
▶ The graph can only contain kernel nodes, memcpy nodes, memset nodes, and child graph nodes.
Kernel nodes:
▶ Use of CUDA Dynamic Parallelism by kernels in the graph is not permitted.
▶ Cooperative launches are permitted so long as MPS is not in use.
Memcpy nodes:
▶ Only copies involving device memory and/or pinned device-mapped host memory are permitted.
▶ Copies involving CUDA arrays are not permitted.
▶ Both operands must be accessible from the current device at time of instantiation. Note that
the copy operation will be performed from the device on which the graph resides, even if it is
targeting memory on another device.
In order to launch a graph on the device, it must first be uploaded to the device to populate the nec-
essary device resources. This can be achieved in one of two ways.
Firstly, the graph can be uploaded explicitly, either via cudaGraphUpload() or by requesting an upload
as part of instantiation via cudaGraphInstantiateWithParams().
Alternatively, the graph can first be launched from the host, which will perform this upload step im-
plicitly as part of the launch.
Examples of all three methods can be seen below:
// Explicit upload after instantiation
cudaGraphInstantiate(&deviceGraphExec1, deviceGraph1,
                     cudaGraphInstantiateFlagDeviceLaunch);
cudaGraphUpload(deviceGraphExec1, stream);
// Explicit upload as part of instantiation
cudaGraphInstantiateParams instantiateParams = {0};
instantiateParams.flags = cudaGraphInstantiateFlagDeviceLaunch |
                          cudaGraphInstantiateFlagUpload;
instantiateParams.uploadStream = stream;
cudaGraphInstantiateWithParams(&deviceGraphExec2, deviceGraph2, &instantiateParams);
// Implicit upload via host launch
cudaGraphInstantiate(&deviceGraphExec3, deviceGraph3,
                     cudaGraphInstantiateFlagDeviceLaunch);
cudaGraphLaunch(deviceGraphExec3, stream);
Device graphs can only be updated from the host, and must be re-uploaded to the device upon exe-
cutable graph update in order for the changes to take effect. This can be achieved using the same
methods outlined in the previous section. Unlike host graphs, launching a device graph from the device
while an update is being applied will result in undefined behavior.
Device graphs can be launched from both the host and the device via cudaGraphLaunch(), which
has the same signature on the device as on the host. Device graphs are launched via the same handle
on the host and the device. Device graphs must be launched from another graph when launched from
the device.
Device-side graph launch is per-thread and multiple launches may occur from different threads at the
same time, so the user will need to select a single thread from which to launch a given graph.
Unlike host launch, device graphs cannot be launched into regular CUDA streams, and can only be
launched into distinct named streams, each of which denotes a specific launch mode: cudaStream-
GraphFireAndForget for fire-and-forget launches and cudaStreamGraphTailLaunch for tail launches,
both of which are described below.
As the name suggests, a fire and forget launch is submitted to the GPU immediately, and it runs in-
dependently of the launching graph. In a fire-and-forget scenario, the launching graph is the parent,
and the launched graph is the child.
__global__ void launchFireAndForgetGraph(cudaGraphExec_t graph) {
    cudaGraphLaunch(graph, cudaStreamGraphFireAndForget);
}

void graphSetup() {
    cudaGraphExec_t gExec1, gExec2;
    cudaGraph_t g1, g2;
    // Create, instantiate for device launch, and upload the device graph (g2 / gExec2),
    // then build and instantiate the host graph (g1 / gExec1), which contains a kernel
    // node running launchFireAndForgetGraph(gExec2).
    ...
    // Launch the host graph, which will in turn launch the device graph.
    cudaGraphLaunch(gExec1, stream);
}
A graph can have up to 120 total fire-and-forget graphs during the course of its execution. This total
resets between launches of the same parent graph.
In order to fully understand the device-side synchronization model, it is first necessary to understand
the concept of an execution environment.
When a graph is launched from the device, it is launched into its own execution environment. The
execution environment of a given graph encapsulates all work in the graph as well as all generated fire
and forget work. The graph can be considered complete when it has completed execution and when
all generated child work is complete.
The below diagram shows the environment encapsulation that would be generated by the fire-and-
forget sample code in the previous section.
These environments are also hierarchical, so a graph environment can include multiple levels of child-
environments from fire and forget launches.
When a graph is launched from the host, there exists a stream environment that parents the execution
environment of the launched graph. The stream environment encapsulates all work generated as part
of the overall launch. The stream launch is complete (i.e. downstream dependent work may now run)
when the overall stream environment is marked as complete.
Unlike on the host, it is not possible to synchronize with device graphs from the GPU via traditional
methods such as cudaDeviceSynchronize() or cudaStreamSynchronize(). Rather, in order to
enable serial work dependencies, a different launch mode - tail launch - is offered, to provide similar
functionality.
A tail launch executes when a graph's environment is considered complete, that is, when the graph and
all its children are complete. When a graph completes, the environment of the next graph in the tail
launch list will replace the completed environment as a child of the parent environment. Like fire-and-
forget launches, a graph can have multiple graphs enqueued for tail launch.
The above execution flow can be generated by the code below:
__global__ void launchTailGraph(cudaGraphExec_t graph) {
    cudaGraphLaunch(graph, cudaStreamGraphTailLaunch);
}

void graphSetup() {
    cudaGraphExec_t gExec1, gExec2;
    cudaGraph_t g1, g2;
    // Create, instantiate, and upload the device graph (g2 / gExec2), then build and
    // instantiate the host graph (g1 / gExec1), which contains launchTailGraph(gExec2).
    ...
    // Launch the host graph, which will in turn launch the device graph.
    cudaGraphLaunch(gExec1, stream);
}
Tail launches enqueued by a given graph will execute one at a time, in order of when they were en-
queued. So the first enqueued graph will run first, and then the second, and so on.
Tail launches enqueued by a tail graph will execute before tail launches enqueued by previous graphs
in the tail launch list. These new tail launches will execute in the order they are enqueued.
A graph can have up to 255 pending tail launches.
Figure 21: Tail launch ordering when enqueued from multiple graphs
It is possible for a device graph to enqueue itself for a tail launch, although a given graph can only have
one self-launch enqueued at a time. In order to query the currently running device graph so that it can
be relaunched, a new device-side function is added:
cudaGraphExec_t cudaGetCurrentGraphExec();
This function returns the handle of the currently running graph if it is a device graph. If the currently
executing kernel is not a node within a device graph, this function will return NULL.
Below is sample code showing usage of this function for a relaunch loop:
__device__ int relaunchCount = 0;

__global__ void relaunchSelf() {
    int relaunchMax = 100;   // relaunch limit chosen by the application

    if (threadIdx.x == 0) {
        if (relaunchCount < relaunchMax) {
            cudaGraphLaunch(cudaGetCurrentGraphExec(), cudaStreamGraphTailLaunch);
        }
        relaunchCount++;
    }
}
Sibling launch is a variation of fire-and-forget launch in which the graph is launched not as a child
of the launching graph’s execution environment, but rather as a child of the launching graph’s parent
environment. Sibling launch is equivalent to a fire-and-forget launch from the launching graph’s parent
environment.
void graphSetup() {
cudaGraphExec_t gExec1, gExec2;
cudaGraph_t g1, g2;
∕∕ Launch the host graph, which will in turn launch the device graph.
cudaGraphLaunch(gExec1, stream);
}
Since sibling launches are not launched into the launching graph’s execution environment, they will
not gate tail launches enqueued by the launching graph.
Conditional nodes allow conditional execution and looping of a graph contained within the conditional
node. This allows dynamic and iterative workflows to be represented completely within a graph and
frees up the host CPU to perform other work in parallel.
Evaluation of the condition value is performed on the device when the dependencies of the conditional
node have been met. Conditional nodes can be one of the following types:
▶ Conditional IF nodes execute their body graph once if the condition value is non-zero when the
node is executed.
▶ Conditional WHILE nodes execute their body graph if the condition value is non-zero when the
node is executed and will continue to execute their body graph until the condition value is zero.
A condition value is accessed by a conditional handle, which must be created before the node. The
condition value can be set by device code using cudaGraphSetConditional(). A default value, ap-
plied on each graph launch, can also be specified when the handle is created.
When the conditional node is created, an empty graph is created and the handle is returned to the
user so that the graph can be populated. This conditional body graph can be populated using either
the graph APIs or cudaStreamBeginCaptureToGraph().
Conditional nodes can be nested.
General requirements:
▶ The graph’s nodes must all reside on a single device.
▶ The graph can only contain kernel nodes, empty nodes, memcpy nodes, memset nodes, child
graph nodes, and conditional nodes.
Kernel nodes:
▶ Use of CUDA Dynamic Parallelism by kernels in the graph is not permitted.
▶ Cooperative launches are permitted so long as MPS is not in use.
Memcpy/Memset nodes:
▶ Only copies/memsets involving device memory and/or pinned device-mapped host memory are
permitted.
▶ Copies/memsets involving CUDA arrays are not permitted.
▶ Both operands must be accessible from the current device at time of instantiation. Note that
the copy operation will be performed from the device on which the graph resides, even if it is
targeting memory on another device.
The body graph of an IF node will be executed once if the condition is non-zero when the node is
executed. The following diagram depicts a 3 node graph where the middle node, B, is a conditional
node:
The following code illustrates the creation of a graph containing an IF conditional node. The default
value of the condition is set using an upstream kernel. The body of the conditional is populated using
the graph API.
__global__ void setHandle(cudaGraphConditionalHandle handle)
{
    ...
    cudaGraphSetConditional(handle, value);
    ...
}

void graphSetup() {
    cudaGraph_t graph;
    cudaGraphExec_t graphExec;
    cudaGraphCreate(&graph, 0);
    cudaGraphConditionalHandle handle;
    cudaGraphConditionalHandleCreate(&handle, graph);
    // Add an upstream kernel node that runs setHandle(handle), then add the IF
    // conditional node depending on it and populate its body graph using the
    // graph APIs (elided here)
    ...
    cudaGraphInstantiate(&graphExec, graph, NULL, NULL, 0);
    cudaGraphLaunch(graphExec, 0);
    cudaDeviceSynchronize();
    cudaGraphExecDestroy(graphExec);
    cudaGraphDestroy(graph);
}
The body graph of a WHILE node will be executed repeatedly as long as the condition is non-zero. The condition will be
evaluated when the node is executed and after completion of the body graph. The following diagram
depicts a 3 node graph where the middle node, B, is a conditional node:
The following code illustrates the creation of a graph containing a WHILE conditional node. The handle
is created using cudaGraphCondAssignDefault to avoid the need for an upstream kernel. The body of
the conditional is populated using the graph API.
__global__ void loopKernel(cudaGraphConditionalHandle handle)
{
    static int count = 10;
    cudaGraphSetConditional(handle, --count ? 1 : 0);
}

void graphSetup() {
    cudaGraph_t graph;
    cudaGraphExec_t graphExec;
    cudaGraphNode_t node;
    void *kernelArgs[1];
    cudaGraphCreate(&graph, 0);
    cudaGraphConditionalHandle handle;
    cudaGraphConditionalHandleCreate(&handle, graph, 1, cudaGraphCondAssignDefault);
    // Add the WHILE conditional node, then populate its body graph with a kernel
    // node that runs loopKernel(handle) (elided here)
    ...
    cudaGraphInstantiate(&graphExec, graph, NULL, NULL, 0);
    cudaGraphLaunch(graphExec, 0);
    cudaDeviceSynchronize();
    cudaGraphExecDestroy(graphExec);
    cudaGraphDestroy(graph);
}
6.2.8.8 Events
The runtime also provides a way to closely monitor the device’s progress, as well as perform accurate
timing, by letting the application asynchronously record events at any point in the program, and query
when these events are completed. An event has completed when all tasks - or optionally, all commands
in a given stream - preceding the event have completed. Events in stream zero are completed after all
preceding tasks and commands in all streams are completed.
The events created in Creation and Destruction can be used to time the code sample of Creation and
Destruction the following way:
cudaEventRecord(start, 0);
for (int i = 0; i < 2; ++i) {
cudaMemcpyAsync(inputDev + i * size, inputHost + i * size,
size, cudaMemcpyHostToDevice, stream[i]);
MyKernel<<<100, 512, 0, stream[i]>>>
(outputDev + i * size, inputDev + i * size, size);
cudaMemcpyAsync(outputHost + i * size, outputDev + i * size,
size, cudaMemcpyDeviceToHost, stream[i]);
}
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float elapsedTime;
cudaEventElapsedTime(&elapsedTime, start, stop);
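For reference, the start and stop events used above come from the Creation and Destruction section; they are created and destroyed essentially as follows:

cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
// ... record, synchronize, and measure as shown above ...
cudaEventDestroy(start);
cudaEventDestroy(stop);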
When a synchronous function is called, control is not returned to the host thread before the device has
completed the requested task. Whether the host thread will then yield, block, or spin can be specified
by calling cudaSetDeviceFlags() with some specific flags (see the reference manual for details) before
any other CUDA call is performed by the host thread.
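For example, a host thread that should block rather than spin while waiting for the device could set the corresponding scheduling flag once, before any other CUDA call; a minimal sketch:

// Must be called before the host thread issues any other CUDA call
cudaSetDeviceFlags(cudaDeviceScheduleBlockingSync);
cudaDeviceSynchronize();   // this host thread now blocks instead of spinning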
A host system can have multiple devices. The following code sample shows how to enumerate these
devices, query their properties, and determine the number of CUDA-enabled devices.
int deviceCount;
cudaGetDeviceCount(&deviceCount);
int device;
for (device = 0; device < deviceCount; ++device) {
cudaDeviceProp deviceProp;
cudaGetDeviceProperties(&deviceProp, device);
printf("Device %d has compute capability %d.%d.\n",
device, deviceProp.major, deviceProp.minor);
}
A host thread can set the device it operates on at any time by calling cudaSetDevice(). Device
memory allocations and kernel launches are made on the currently set device; streams and events
are created in association with the currently set device. If no call to cudaSetDevice() is made, the
current device is device 0.
The following code sample illustrates how setting the current device affects memory allocation and
kernel execution.
size_t size = 1024 * sizeof(float);
cudaSetDevice(0); ∕∕ Set device 0 as current
float* p0;
cudaMalloc(&p0, size); ∕∕ Allocate memory on device 0
MyKernel<<<1000, 128>>>(p0); ∕∕ Launch kernel on device 0
cudaSetDevice(1); ∕∕ Set device 1 as current
float* p1;
cudaMalloc(&p1, size); ∕∕ Allocate memory on device 1
MyKernel<<<1000, 128>>>(p1); ∕∕ Launch kernel on device 1
A kernel launch will fail if it is issued to a stream that is not associated to the current device as illus-
trated in the following code sample.
cudaSetDevice(0); ∕∕ Set device 0 as current
cudaStream_t s0;
cudaStreamCreate(&s0); ∕∕ Create stream s0 on device 0
MyKernel<<<100, 64, 0, s0>>>(); ∕∕ Launch kernel on device 0 in s0
cudaSetDevice(1); ∕∕ Set device 1 as current
cudaStream_t s1;
cudaStreamCreate(&s1); ∕∕ Create stream s1 on device 1
MyKernel<<<100, 64, 0, s1>>>(); ∕∕ Launch kernel on device 1 in s1
A memory copy will succeed even if it is issued to a stream that is not associated to the current device.
cudaEventRecord() will fail if the input event and input stream are associated to different devices.
cudaEventElapsedTime() will fail if the two input events are associated to different devices.
cudaEventSynchronize() and cudaEventQuery() will succeed even if the input event is associ-
ated to a device that is different from the current device.
cudaStreamWaitEvent() will succeed even if the input stream and input event are associated to
different devices. cudaStreamWaitEvent() can therefore be used to synchronize multiple devices
with each other.
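A minimal sketch of such cross-device synchronization, reusing the streams s0 and s1 from the example above:

cudaSetDevice(0);
cudaEvent_t event;
cudaEventCreate(&event);            // event is associated with device 0
MyKernel<<<100, 64, 0, s0>>>();     // work on device 0
cudaEventRecord(event, s0);

cudaSetDevice(1);
cudaStreamWaitEvent(s1, event, 0);  // s1 (device 1) waits for the device 0 work
MyKernel<<<100, 64, 0, s1>>>();     // starts only after the device 0 kernel completes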
Each device has its own default stream (see Default Stream), so commands issued to the default
stream of a device may execute out of order or concurrently with respect to commands issued to
the default stream of any other device.
Depending on the system properties, specifically the PCIe and/or NVLINK topology, devices are able
to address each other’s memory (i.e., a kernel executing on one device can dereference a pointer to
the memory of the other device). This peer-to-peer memory access feature is supported between two
devices if cudaDeviceCanAccessPeer() returns true for these two devices.
Peer-to-peer memory access is only supported in 64-bit applications and must be enabled between
two devices by calling cudaDeviceEnablePeerAccess() as illustrated in the following code sample.
On non-NVSwitch enabled systems, each device can support a system-wide maximum of eight peer
connections.
A unified address space is used for both devices (see Unified Virtual Address Space), so the same
pointer can be used to address memory from both devices as shown in the code sample below.
cudaSetDevice(0);                   // Set device 0 as current
float* p0;
size_t size = 1024 * sizeof(float);
cudaMalloc(&p0, size);              // Allocate memory on device 0
MyKernel<<<1000, 128>>>(p0);        // Launch kernel on device 0
cudaSetDevice(1);                   // Set device 1 as current
cudaDeviceEnablePeerAccess(0, 0);   // Enable peer-to-peer access
                                    // with device 0

// The following kernel launch on device 1 can access memory on device 0
// at address p0
MyKernel<<<1000, 128>>>(p0);
On Linux only, CUDA and the display driver do not support IOMMU-enabled bare-metal PCIe peer-to-
peer memory copy. However, CUDA and the display driver do support IOMMU via VM pass-through.
As a consequence, users on Linux, when running on a native bare-metal system, should disable the
IOMMU. The IOMMU should be enabled and the VFIO driver used as a PCIe pass-through for virtual
machines.
On Windows, the above limitation does not exist.
See also Allocating DMA Buffers on 64-bit Platforms.
Memory copies can be performed between the memories of two different devices.
When a unified address space is used for both devices (see Unified Virtual Address Space), this is done
using the regular memory copy functions mentioned in Device Memory.
Otherwise, this is done using cudaMemcpyPeer(), cudaMemcpyPeerAsync(), cudaMem-
cpy3DPeer(), or cudaMemcpy3DPeerAsync() as illustrated in the following code sample.
cudaSetDevice(0); ∕∕ Set device 0 as current
float* p0;
size_t size = 1024 * sizeof(float);
cudaMalloc(&p0, size); ∕∕ Allocate memory on device 0
cudaSetDevice(1); ∕∕ Set device 1 as current
float* p1;
cudaMalloc(&p1, size); ∕∕ Allocate memory on device 1
cudaSetDevice(0); ∕∕ Set device 0 as current
MyKernel<<<1000, 128>>>(p0); ∕∕ Launch kernel on device 0
cudaSetDevice(1); ∕∕ Set device 1 as current
cudaMemcpyPeer(p1, 1, p0, 0, size); ∕∕ Copy p0 to p1
MyKernel<<<1000, 128>>>(p1); ∕∕ Launch kernel on device 1
A copy (in the implicit NULL stream) between the memories of two different devices:
▶ does not start until all commands previously issued to either device have completed and
▶ runs to completion before any commands (see Asynchronous Concurrent Execution) issued after
the copy to either device can start.
Consistent with the normal behavior of streams, an asynchronous copy between the memories of two
devices may overlap with copies or kernels in another stream.
Note that if peer-to-peer access is enabled between two devices via cudaDeviceEnablePeerAc-
cess() as described in Peer-to-Peer Memory Access, peer-to-peer memory copy between these two
devices no longer needs to be staged through the host and is therefore faster.
Applications may query if the unified address space is used for a particular device by checking that
the unifiedAddressing device property (see Device Enumeration) is equal to 1.
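A minimal sketch of this query:

int device = 0;
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, device);
if (prop.unifiedAddressing == 1) {
    // The device shares a unified address space with the host
}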
Note: Since CUDA 11.5, only events-sharing IPC APIs are supported on L4T and embedded Linux Tegra
devices with compute capability 7.x and higher. The memory-sharing IPC APIs are still not supported
on Tegra platforms.
Texture memory is read from kernels using the device functions described in Texture Functions. The
process of reading a texture by calling one of these functions is called a texture fetch. Each texture fetch
specifies a parameter called a texture object for the texture object API.
The texture object specifies:
▶ The texture, which is the piece of texture memory that is fetched. Texture objects are created
at runtime and the texture is specified when creating the texture object as described in Texture
Object API.
▶ Its dimensionality that specifies whether the texture is addressed as a one dimensional array
using one texture coordinate, a two-dimensional array using two texture coordinates, or a three-
dimensional array using three texture coordinates. Elements of the array are called texels, short
for texture elements. The texture width, height, and depth refer to the size of the array in each
dimension. Table 21 lists the maximum texture width, height, and depth depending on the com-
pute capability of the device.
▶ The type of a texel, which is restricted to the basic integer and single-precision floating-point
types and any of the 1-, 2-, and 4-component vector types defined in Built-in Vector Types that
are derived from the basic integer and single-precision floating-point types.
▶ The read mode, which is equal to cudaReadModeNormalizedFloat or cudaReadModeElement-
Type. If it is cudaReadModeNormalizedFloat and the type of the texel is a 16-bit or 8-bit inte-
ger type, the value returned by the texture fetch is actually returned as floating-point type and
the full range of the integer type is mapped to [0.0, 1.0] for unsigned integer type and [-1.0, 1.0]
for signed integer type; for example, an unsigned 8-bit texture element with the value 0xff reads
as 1. If it is cudaReadModeElementType, no conversion is performed.
▶ Whether texture coordinates are normalized or not. By default, textures are referenced (by the
functions of Texture Functions) using floating-point coordinates in the range [0, N-1] where N
is the size of the texture in the dimension corresponding to the coordinate. For example, a tex-
ture that is 64x32 in size will be referenced with coordinates in the range [0, 63] and [0, 31] for
the x and y dimensions, respectively. Normalized texture coordinates cause the coordinates to
be specified in the range [0.0, 1.0-1/N] instead of [0, N-1], so the same 64x32 texture would be
addressed by normalized coordinates in the range [0, 1-1/N] in both the x and y dimensions. Nor-
malized texture coordinates are a natural fit to some applications’ requirements, if it is preferable
for the texture coordinates to be independent of the texture size.
▶ The addressing mode. It is valid to call the device functions of Texture Functions with coordinates that
are out of range. The addressing mode defines what happens in that case. The default address-
ing mode is to clamp the coordinates to the valid range: [0, N) for non-normalized coordinates
and [0.0, 1.0) for normalized coordinates. If the border mode is specified instead, texture fetches
with out-of-range texture coordinates return zero. For normalized coordinates, the wrap mode
and the mirror mode are also available. When using the wrap mode, each coordinate x is con-
verted to frac(x)=x - floor(x) where floor(x) is the largest integer not greater than x. When us-
ing the mirror mode, each coordinate x is converted to frac(x) if floor(x) is even and 1-frac(x) if
floor(x) is odd. The addressing mode is specified as an array of size three whose first, second,
and third elements specify the addressing mode for the first, second, and third texture coor-
dinates, respectively; the addressing modes are cudaAddressModeBorder, cudaAddressMod-
eClamp, cudaAddressModeWrap, and cudaAddressModeMirror; cudaAddressModeWrap and
cudaAddressModeMirror are only supported for normalized texture coordinates.
▶ The filtering mode which specifies how the value returned when fetching the texture is com-
puted based on the input texture coordinates. Linear texture filtering may be done only for tex-
tures that are configured to return floating-point data. It performs low-precision interpolation
between neighboring texels. When enabled, the texels surrounding a texture fetch location are
read and the return value of the texture fetch is interpolated based on where the texture co-
ordinates fell between the texels. Simple linear interpolation is performed for one-dimensional
textures, bilinear interpolation for two-dimensional textures, and trilinear interpolation for three-
dimensional textures. Texture Fetching gives more details on texture fetching. The filtering mode
is equal to cudaFilterModePoint or cudaFilterModeLinear. If it is cudaFilterModePoint,
the returned value is the texel whose texture coordinates are the closest to the input texture co-
ordinates. If it is cudaFilterModeLinear, the returned value is the linear interpolation of the
two (for a one-dimensional texture), four (for a two dimensional texture), or eight (for a three
dimensional texture) texels whose texture coordinates are the closest to the input texture coor-
dinates. cudaFilterModeLinear is only valid for returned values of floating-point type.
Texture Object API introduces the texture object API.
16-Bit Floating-Point Textures explains how to deal with 16-bit floating-point textures.
Textures can also be layered as described in Layered Textures.
Cubemap Textures and Cubemap Layered Textures describe a special type of texture, the cubemap
texture.
The following code sample applies a simple transformation kernel to a texture.
// Simple transformation kernel
__global__ void transformKernel(float* output, cudaTextureObject_t texObj,
                                int width, int height, float theta)
{
    // Calculate normalized texture coordinates
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    float u = x / (float)width;
    float v = y / (float)height;
    // Transform coordinates
    u -= 0.5f;
    v -= 0.5f;
    float tu = u * cosf(theta) - v * sinf(theta) + 0.5f;
    float tv = v * cosf(theta) + u * sinf(theta) + 0.5f;
    // Read from texture and write to global memory
    output[y * width + x] = tex2D<float>(texObj, tu, tv);
}
// Host code
int main()
{
    const int height = 1024;
    const int width = 1024;
    float angle = 0.5;
    // Allocate host data (initialization elided)
    float *h_data = (float *)malloc(sizeof(float) * width * height);
    // Allocate CUDA array in device memory
    cudaChannelFormatDesc channelDesc =
        cudaCreateChannelDesc(32, 0, 0, 0, cudaChannelFormatKindFloat);
    cudaArray_t cuArray;
    cudaMallocArray(&cuArray, &channelDesc, width, height);
    // Set pitch of the source (the width in memory in bytes of the 2D array pointed
    // to by src, including padding); we don't have any padding
    const size_t spitch = width * sizeof(float);
    // Copy data located at address h_data in host memory to device memory
    cudaMemcpy2DToArray(cuArray, 0, 0, h_data, spitch, width * sizeof(float),
                        height, cudaMemcpyHostToDevice);
    // Specify texture
    struct cudaResourceDesc resDesc;
    memset(&resDesc, 0, sizeof(resDesc));
    resDesc.resType = cudaResourceTypeArray;
    resDesc.res.array.array = cuArray;
    // Specify texture object parameters
    struct cudaTextureDesc texDesc;
    memset(&texDesc, 0, sizeof(texDesc));
    texDesc.addressMode[0] = cudaAddressModeWrap;
    texDesc.addressMode[1] = cudaAddressModeWrap;
    texDesc.filterMode = cudaFilterModeLinear;
    texDesc.readMode = cudaReadModeElementType;
    texDesc.normalizedCoords = 1;
    // Create texture object
    cudaTextureObject_t texObj = 0;
    cudaCreateTextureObject(&texObj, &resDesc, &texDesc, NULL);
    // Allocate result of transformation in device memory
    float* output;
    cudaMalloc(&output, width * height * sizeof(float));
    // Invoke kernel
    dim3 threadsperBlock(16, 16);
    dim3 numBlocks((width + threadsperBlock.x - 1) / threadsperBlock.x,
                   (height + threadsperBlock.y - 1) / threadsperBlock.y);
    transformKernel<<<numBlocks, threadsperBlock>>>(output, texObj, width, height, angle);
    // Copy data from device back to host
    cudaMemcpy(h_data, output, width * height * sizeof(float), cudaMemcpyDeviceToHost);
    // Destroy texture object and free device and host memory
    cudaDestroyTextureObject(texObj);
    cudaFreeArray(cuArray);
    cudaFree(output);
    free(h_data);
    return 0;
}
The 16-bit floating-point or half format supported by CUDA arrays is the same as the IEEE 754-2008
binary16 format.
CUDA C++ does not support a matching data type, but provides intrinsic functions to convert to and
from the 32-bit floating-point format via the unsigned short type: __float2half_rn(float) and
__half2float(unsigned short). These functions are only supported in device code. Equivalent
functions for the host code can be found in the OpenEXR library, for example.
16-bit floating-point components are promoted to 32 bit float during texture fetching before any fil-
tering is performed.
A channel description for the 16-bit floating-point format can be created by calling one of the cud-
aCreateChannelDescHalf*() functions.
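A brief sketch of allocating a CUDA array of one-component 16-bit floats with such a channel description (width and height are assumed to be defined):

cudaChannelFormatDesc halfDesc = cudaCreateChannelDescHalf();
cudaArray_t halfArray;
cudaMallocArray(&halfArray, &halfDesc, width, height);
// Texture fetches from this array return 32-bit floats (components are promoted
// before any filtering is performed)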
A one-dimensional or two-dimensional layered texture (also known as texture array in Direct3D and
array texture in OpenGL) is a texture made up of a sequence of layers, all of which are regular textures
of same dimensionality, size, and data type.
A one-dimensional layered texture is addressed using an integer index and a floating-point texture
coordinate; the index denotes a layer within the sequence and the coordinate addresses a texel within
that layer. A two-dimensional layered texture is addressed using an integer index and two floating-
point texture coordinates; the index denotes a layer within the sequence and the coordinates address
a texel within that layer.
A layered texture can only be created as a CUDA array, by calling cudaMalloc3DArray() with the
cudaArrayLayered flag (and a height of zero for a one-dimensional layered texture).
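For example, a two-dimensional layered CUDA array with numLayers layers of width x height float texels could be allocated as in the following sketch (width, height, and numLayers are assumed to be defined):

cudaChannelFormatDesc desc = cudaCreateChannelDesc<float>();
cudaArray_t layeredArray;
cudaExtent extent = make_cudaExtent(width, height, numLayers);  // depth = number of layers
cudaMalloc3DArray(&layeredArray, &desc, extent, cudaArrayLayered);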
Layered textures are fetched using the device functions described in tex1DLayered() and
tex2DLayered(). Texture filtering (see Texture Fetching) is done only within a layer, not across layers.
Layered textures are only supported on devices of compute capability 2.0 and higher.
A cubemap texture is a special type of two-dimensional layered texture that has six layers representing
the faces of a cube:
▶ The width of a layer is equal to its height.
▶ The cubemap is addressed using three texture coordinates x, y, and z that are interpreted as a
direction vector emanating from the center of the cube and pointing to one face of the cube and
a texel within the layer corresponding to that face. More specifically, the face is selected by the
coordinate with largest magnitude m and the corresponding layer is addressed using coordinates
(s/m+1)/2 and (t/m+1)/2 where s and t are defined in Table 3.
A cubemap texture can only be created as a CUDA array, by calling cudaMalloc3DArray() with the
cudaArrayCubemap flag.
Cubemap textures are fetched using the device function described in texCubemap().
Cubemap textures are only supported on devices of compute capability 2.0 and higher.
A cubemap layered texture is a layered texture whose layers are cubemaps of same dimension.
A cubemap layered texture is addressed using an integer index and three floating-point texture coor-
dinates; the index denotes a cubemap within the sequence and the coordinates address a texel within
that cubemap.
A cubemap layered texture can only be created as a CUDA array, by calling cudaMalloc3DArray() with
the cudaArrayLayered and cudaArrayCubemap flags.
Cubemap layered textures are fetched using the device function described in texCubemapLayered().
Texture filtering (see Texture Fetching) is done only within a layer, not across layers.
Cubemap layered textures are only supported on devices of compute capability 2.0 and higher.
Texture gather is a special texture fetch that is available for two-dimensional textures only. It is per-
formed by the tex2Dgather() function, which has the same parameters as tex2D(), plus an addi-
tional comp parameter equal to 0, 1, 2, or 3 (see tex2Dgather()). It returns four 32-bit numbers that
correspond to the value of the component comp of each of the four texels that would have been used
for bilinear filtering during a regular texture fetch. For example, if these texels are of values (253,
20, 31, 255), (250, 25, 29, 254), (249, 16, 37, 253), (251, 22, 30, 250), and comp is 2, tex2Dgather()
returns (31, 29, 37, 30).
Note that texture coordinates are computed with only 8 bits of fractional precision. tex2Dgather()
may therefore return unexpected results for cases where tex2D() would use 1.0 for one of its weights
(α or β, see Linear Filtering). For example, with an x texture coordinate of 2.49805: xB=x-0.5=1.99805,
however the fractional part of xB is stored in an 8-bit fixed-point format. Since 0.99805 is closer to
256.f/256.f than it is to 255.f/256.f, xB has the value 2. A tex2Dgather() in this case would therefore
return indices 2 and 3 in x, instead of indices 1 and 2.
Texture gather is only supported for CUDA arrays created with the cudaArrayTextureGather flag
and of width and height less than the maximum specified in Table 21 for texture gather, which is
smaller than for regular texture fetch.
Texture gather is only supported on devices of compute capability 2.0 and higher.
For devices of compute capability 2.0 and higher, a CUDA array (described in CUDA Arrays), cre-
ated with the cudaArraySurfaceLoadStore flag, can be read and written via a surface object using
the functions described in Surface Functions.
Table 21 lists the maximum surface width, height, and depth depending on the compute capability of
the device.
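The kernel used by the host code below could look like the following sketch, which copies one texel per thread using surf2Dread() and surf2Dwrite(); note that the x coordinate passed to the surface functions is expressed in bytes:

// Copy one uchar4 texel per thread from an input surface to an output surface
__global__ void copyKernel(cudaSurfaceObject_t inputSurfObj,
                           cudaSurfaceObject_t outputSurfObj,
                           int width, int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        uchar4 data;
        // The x coordinate is in bytes: multiply by the element size
        surf2Dread(&data, inputSurfObj, x * 4, y);
        surf2Dwrite(data, outputSurfObj, x * 4, y);
    }
}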
// Host code
int main()
{
    const int height = 1024;
    const int width = 1024;
    unsigned char *h_data =
        (unsigned char *)malloc(4 * width * height);  // host data (initialization elided)
    // Allocate CUDA arrays in device memory
    cudaChannelFormatDesc channelDesc =
        cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsigned);
    cudaArray_t cuInputArray;
    cudaMallocArray(&cuInputArray, &channelDesc, width, height, cudaArraySurfaceLoadStore);
    cudaArray_t cuOutputArray;
    cudaMallocArray(&cuOutputArray, &channelDesc, width, height, cudaArraySurfaceLoadStore);
    // Set pitch of the source (no padding) and copy h_data to device memory
    const size_t spitch = 4 * width * sizeof(unsigned char);
    cudaMemcpy2DToArray(cuInputArray, 0, 0, h_data, spitch,
                        4 * width * sizeof(unsigned char), height, cudaMemcpyHostToDevice);
    // Specify surface
    struct cudaResourceDesc resDesc;
    memset(&resDesc, 0, sizeof(resDesc));
    resDesc.resType = cudaResourceTypeArray;
    // Create the surface objects
    resDesc.res.array.array = cuInputArray;
    cudaSurfaceObject_t inputSurfObj = 0;
    cudaCreateSurfaceObject(&inputSurfObj, &resDesc);
    resDesc.res.array.array = cuOutputArray;
    cudaSurfaceObject_t outputSurfObj = 0;
    cudaCreateSurfaceObject(&outputSurfObj, &resDesc);
    // Invoke kernel
    dim3 threadsperBlock(16, 16);
    dim3 numBlocks((width + threadsperBlock.x - 1) / threadsperBlock.x,
                   (height + threadsperBlock.y - 1) / threadsperBlock.y);
    copyKernel<<<numBlocks, threadsperBlock>>>(inputSurfObj, outputSurfObj,
                                               width, height);
    // Destroy surface objects and free memory
    cudaDestroySurfaceObject(inputSurfObj);
    cudaDestroySurfaceObject(outputSurfObj);
    cudaFreeArray(cuInputArray);
    cudaFreeArray(cuOutputArray);
    free(h_data);
    return 0;
}
CUDA arrays are opaque memory layouts optimized for texture fetching. They are one-dimensional,
two-dimensional, or three-dimensional and composed of elements, each of which has 1, 2 or 4 components
that may be signed or unsigned 8-, 16-, or 32-bit integers, 16-bit floats, or 32-bit floats. CUDA arrays
are only accessible by kernels through texture fetching as described in Texture Memory or surface
reading and writing as described in Surface Memory.
The texture and surface memory is cached (see Device Memory Accesses) and within the same kernel
call, the cache is not kept coherent with respect to global memory writes and surface memory writes,
so any texture fetch or surface read to an address that has been written to via a global write or a
surface write in the same kernel call returns undefined data. In other words, a thread can safely read
some texture or surface memory location only if this memory location has been updated by a previous
kernel call or memory copy, but not if it has been previously updated by the same thread or another
thread from the same kernel call.
The OpenGL resources that may be mapped into the address space of CUDA are OpenGL buffer, tex-
ture, and renderbuffer objects.
A buffer object is registered using cudaGraphicsGLRegisterBuffer(). In CUDA, it appears as a
device pointer and can therefore be read and written by kernels or via cudaMemcpy() calls.
A texture or renderbuffer object is registered using cudaGraphicsGLRegisterImage(). In CUDA, it
appears as a CUDA array. Kernels can read from the array by binding it to a texture or surface reference.
They can also write to it via the surface write functions if the resource has been registered with the
cudaGraphicsRegisterFlagsSurfaceLoadStore flag. The array can also be read and written via
cudaMemcpy2D() calls. cudaGraphicsGLRegisterImage() supports all texture formats with 1, 2,
or 4 components and an internal type of float (for example, GL_RGBA_FLOAT32), normalized integer
(for example, GL_RGBA8, GL_INTENSITY16), and unnormalized integer (for example, GL_RGBA8UI)
(please note that since unnormalized integer formats require OpenGL 3.0, they can only be written by
shaders, not the fixed function pipeline).
The OpenGL context whose resources are being shared has to be current to the host thread making
any OpenGL interoperability API calls.
Please note: When an OpenGL texture is made bindless (say for example by requesting an image or
texture handle using the glGetTextureHandle*/glGetImageHandle* APIs) it cannot be registered
with CUDA. The application needs to register the texture for interop before requesting an image or
texture handle.
The following code sample uses a kernel to dynamically modify a 2D width x height grid of vertices
stored in a vertex buffer object:
GLuint positionsVBO;
struct cudaGraphicsResource* positionsVBO_CUDA;
int main()
{
∕∕ Initialize OpenGL and GLUT for device 0
∕∕ and make the OpenGL context current
...
glutDisplayFunc(display);
... // (create positionsVBO, register it with cudaGraphicsGLRegisterBuffer(),
    //  and launch the rendering loop; elided)
}
void display()
{
    // Map buffer object for writing from CUDA
    float4* positions;
    cudaGraphicsMapResources(1, &positionsVBO_CUDA, 0);
    size_t num_bytes;
    cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                         &num_bytes,
                                         positionsVBO_CUDA);
    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
    createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                          width, height);
    // Unmap buffer object
    cudaGraphicsUnmapResources(1, &positionsVBO_CUDA, 0);
    // Render from buffer object (elided)
    ...
    // Swap buffers
    glutSwapBuffers();
    glutPostRedisplay();
}
void deleteVBO()
{
cudaGraphicsUnregisterResource(positionsVBO_CUDA);
glDeleteBuffers(1, &positionsVBO);
}
__global__ void createVertices(float4* positions, float time,
                               unsigned int width, unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;
    // Calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time) * cosf(v * freq + time) * 0.5f;
    // Write positions
    positions[y * width + x] = make_float4(u, w, v, 1.0f);
}
On Windows and for Quadro GPUs, cudaWGLGetDevice() can be used to retrieve the CUDA device
associated to the handle returned by wglEnumGpusNV(). Quadro GPUs offer higher performance
OpenGL interoperability than GeForce and Tesla GPUs in a multi-GPU configuration where OpenGL
rendering is performed on the Quadro GPU and CUDA computations are performed on other GPUs in
the system.
Direct3D interoperability is supported for Direct3D 9Ex, Direct3D 10, and Direct3D 11.
A CUDA context may interoperate only with Direct3D devices that fulfill the following criteria: Direct3D
9Ex devices must be created with DeviceType set to D3DDEVTYPE_HAL and BehaviorFlags with
the D3DCREATE_HARDWARE_VERTEXPROCESSING flag; Direct3D 10 and Direct3D 11 devices must be
created with DriverType set to D3D_DRIVER_TYPE_HARDWARE.
The Direct3D resources that may be mapped into the address space of CUDA are Direct3D buffers, tex-
tures, and surfaces. These resources are registered using cudaGraphicsD3D9RegisterResource(),
cudaGraphicsD3D10RegisterResource(), and cudaGraphicsD3D11RegisterResource().
The following code sample uses a kernel to dynamically modify a 2D width x height grid of vertices
stored in a vertex buffer object.
IDirect3D9* D3D;
IDirect3DDevice9* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
IDirect3DVertexBuffer9* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;
int main()
{
int dev;
∕∕ Initialize Direct3D
D3D = Direct3DCreate9Ex(D3D_SDK_VERSION);
∕∕ Create device
...
D3D->CreateDeviceEx(adapter, D3DDEVTYPE_HAL, hWnd,
D3DCREATE_HARDWARE_VERTEXPROCESSING,
                        &params, NULL, &device);
    // (vertex buffer creation and registration with CUDA elided)
    ...
}
void Render()
{
    // Map vertex buffer for writing from CUDA
    float4* positions;
    cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                         &num_bytes,
                                         positionsVB_CUDA);
    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
    createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                          width, height);
    // Unmap vertex buffer
    cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);
    // Draw and present (elided)
    ...
}
void releaseVB()
{
cudaGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}
__global__ void createVertices(float4* positions, float time,
                               unsigned int width, unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;
    // Calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time) * cosf(v * freq + time) * 0.5f;
    // Write positions
    positions[y * width + x] =
        make_float4(u, w, v, __int_as_float(0xff00ff00));
}
ID3D10Device* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
ID3D10Buffer* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;
int main()
{
int dev;
∕∕ Get a CUDA-enabled adapter
IDXGIFactory* factory;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
IDXGIAdapter* adapter = 0;
    for (unsigned int i = 0; !adapter; ++i) {
        if (FAILED(factory->EnumAdapters(i, &adapter)))
            break;
        if (cudaD3D10GetDevice(&dev, adapter) == cudaSuccess)
            break;
        adapter->Release();
    }
    factory->Release();

    // Create swap chain and device, create vertex buffer
    // and register it with CUDA
    ...
    cudaGraphicsD3D10RegisterResource(&positionsVB_CUDA, positionsVB,
                                      cudaGraphicsRegisterFlagsNone);
    cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                    cudaGraphicsMapFlagsWriteDiscard);
    ...
}
void Render()
{
∕∕ Map vertex buffer for writing from CUDA
float4* positions;
    cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                         &num_bytes,
                                         positionsVB_CUDA);

    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
    createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                          width, height);

    // Unmap vertex buffer
    cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}
void releaseVB()
{
cudaGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}
__global__ void createVertices(float4* positions, float time,
                               unsigned int width, unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;
    // Calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time) * cosf(v * freq + time) * 0.5f;
    // Write positions
    positions[y * width + x] =
        make_float4(u, w, v, __int_as_float(0xff00ff00));
}
ID3D11Device* device;
struct CUSTOMVERTEX {
FLOAT x, y, z;
DWORD color;
};
ID3D11Buffer* positionsVB;
struct cudaGraphicsResource* positionsVB_CUDA;
int main()
{
int dev;
∕∕ Get a CUDA-enabled adapter
IDXGIFactory* factory;
CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory);
IDXGIAdapter* adapter = 0;
    for (unsigned int i = 0; !adapter; ++i) {
        if (FAILED(factory->EnumAdapters(i, &adapter)))
            break;
        if (cudaD3D11GetDevice(&dev, adapter) == cudaSuccess)
            break;
        adapter->Release();
    }
    factory->Release();

    // Create swap chain and device, create vertex buffer
    // and register it with CUDA
    ...
    cudaGraphicsD3D11RegisterResource(&positionsVB_CUDA, positionsVB,
                                      cudaGraphicsRegisterFlagsNone);
    cudaGraphicsResourceSetMapFlags(positionsVB_CUDA,
                                    cudaGraphicsMapFlagsWriteDiscard);
    ...
}
void Render()
{
∕∕ Map vertex buffer for writing from CUDA
float4* positions;
    cudaGraphicsMapResources(1, &positionsVB_CUDA, 0);
    size_t num_bytes;
    cudaGraphicsResourceGetMappedPointer((void**)&positions,
                                         &num_bytes,
                                         positionsVB_CUDA);

    // Execute kernel
    dim3 dimBlock(16, 16, 1);
    dim3 dimGrid(width / dimBlock.x, height / dimBlock.y, 1);
    createVertices<<<dimGrid, dimBlock>>>(positions, time,
                                          width, height);

    // Unmap vertex buffer
    cudaGraphicsUnmapResources(1, &positionsVB_CUDA, 0);

    // Draw and present
    ...
}
void releaseVB()
{
cudaGraphicsUnregisterResource(positionsVB_CUDA);
positionsVB->Release();
}
__global__ void createVertices(float4* positions, float time,
                               unsigned int width, unsigned int height)
{
    unsigned int x = blockIdx.x * blockDim.x + threadIdx.x;
    unsigned int y = blockIdx.y * blockDim.y + threadIdx.y;
    // Calculate uv coordinates
    float u = x / (float)width;
    float v = y / (float)height;
    u = u * 2.0f - 1.0f;
    v = v * 2.0f - 1.0f;
    // Calculate simple sine wave pattern
    float freq = 4.0f;
    float w = sinf(u * freq + time) * cosf(v * freq + time) * 0.5f;
    // Write positions
    positions[y * width + x] =
        make_float4(u, w, v, __int_as_float(0xff00ff00));
}
In a system with multiple GPUs, all CUDA-enabled GPUs are accessible via the CUDA driver and runtime
as separate devices. There are however special considerations as described below when the system is
in SLI mode.
First, an allocation in one CUDA device on one GPU will consume memory on other GPUs that are part
of the SLI configuration of the Direct3D or OpenGL device. Because of this, allocations may fail earlier
than otherwise expected.
Second, applications should create multiple CUDA contexts, one for each GPU in the SLI configuration. While this is not a strict requirement, it avoids unnecessary data transfers between devices. The application can use the cudaD3D[9|10|11]GetDevices() set of calls for Direct3D and cudaGLGetDevices() for OpenGL to identify the CUDA device handle(s) for the device(s) that are performing the rendering in the current and next frame. Given this information, the application will typically choose the appropriate device and map Direct3D or OpenGL resources to the CUDA device returned by cudaD3D[9|10|11]GetDevices() or cudaGLGetDevices() when the deviceList parameter is set to cudaD3D[9|10|11]DeviceListCurrentFrame or cudaGLDeviceListCurrentFrame.
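As a minimal, illustrative sketch (the fixed-size device array and the subsequent cudaSetDevice() call are assumptions for illustration, not prescribed by the API), the current-frame query for OpenGL might be used as follows:

// Sketch only: select the CUDA device that renders the current frame.
// Assumes an OpenGL context is current on this thread.
#include <cuda_gl_interop.h>

unsigned int glDeviceCount = 0;
int glDevices[8];                       // illustrative upper bound on device count
cudaGLGetDevices(&glDeviceCount, glDevices, 8, cudaGLDeviceListCurrentFrame);
if (glDeviceCount > 0) {
    cudaSetDevice(glDevices[0]);        // register and map OpenGL resources on this device
}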
Please note that resources returned from cudaGraphicsD3D[9|10|11]RegisterResource and cudaGraphicsGLRegister[Buffer|Image] must be used only on the device on which the registration happened. Therefore, on SLI configurations where data for different frames is computed on different CUDA devices, it is necessary to register the resources for each device separately.
See Direct3D Interoperability and OpenGL Interoperability for details on how the CUDA runtime interoperates with Direct3D and OpenGL, respectively.
When importing memory and synchronization objects exported by Vulkan, they must be imported
and mapped on the same device as they were created on. The CUDA device that corresponds to the
Vulkan physical device on which the objects were created can be determined by comparing the UUID
of a CUDA device with that of the Vulkan physical device, as shown in the following code sample. Note
that the Vulkan physical device should not be part of a device group that contains more than one
Vulkan physical device. The device group as returned by vkEnumeratePhysicalDeviceGroups that
contains the given Vulkan physical device must have a physical device count of 1.
int getCudaDeviceForVulkanPhysicalDevice(VkPhysicalDevice vkPhysicalDevice) {
    VkPhysicalDeviceIDProperties vkPhysicalDeviceIDProperties = {};
    vkPhysicalDeviceIDProperties.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_ID_PROPERTIES;
    vkPhysicalDeviceIDProperties.pNext = NULL;

    VkPhysicalDeviceProperties2 vkPhysicalDeviceProperties2 = {};
    vkPhysicalDeviceProperties2.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2;
    vkPhysicalDeviceProperties2.pNext = &vkPhysicalDeviceIDProperties;

    vkGetPhysicalDeviceProperties2(vkPhysicalDevice, &vkPhysicalDeviceProperties2);

    int cudaDeviceCount;
    cudaGetDeviceCount(&cudaDeviceCount);

    for (int cudaDevice = 0; cudaDevice < cudaDeviceCount; cudaDevice++) {
        cudaDeviceProp deviceProp;
        cudaGetDeviceProperties(&deviceProp, cudaDevice);
        if (!memcmp(&deviceProp.uuid, vkPhysicalDeviceIDProperties.deviceUUID, VK_UUID_SIZE)) {
            return cudaDevice;
        }
    }
    return cudaInvalidDeviceId;
}
On Linux and Windows 10, both dedicated and non-dedicated memory objects exported by Vulkan
can be imported into CUDA. On Windows 7, only dedicated memory objects can be imported. When
importing a Vulkan dedicated memory object, the flag cudaExternalMemoryDedicated must be set.
A Vulkan memory object exported using VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT can
be imported into CUDA using the file descriptor associated with that object as shown below. Note that
CUDA assumes ownership of the file descriptor once it is imported. Using the file descriptor after a
successful import results in undefined behavior.
cudaExternalMemory_t importVulkanMemoryObjectFromFileDescriptor(int fd, unsigned long long size, bool isDedicated) {
    cudaExternalMemory_t extMem = NULL;
    cudaExternalMemoryHandleDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.type = cudaExternalMemoryHandleTypeOpaqueFd;
    desc.handle.fd = fd;
    desc.size = size;
    if (isDedicated) {
        desc.flags |= cudaExternalMemoryDedicated;
    }

    cudaImportExternalMemory(&extMem, &desc);

    // Input parameter 'fd' should not be used beyond this point as CUDA has assumed ownership of it

    return extMem;
}
A Vulkan memory object exported using VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_BIT can be imported into CUDA using the NT handle, or a named NT handle if one exists, as shown below; CUDA does not assume ownership of the NT handle, so the application should close it once it is no longer required. A Vulkan memory object exported using VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_WIN32_KMT_BIT can be imported using the globally shared D3DKMT handle associated with that object.

cudaExternalMemory_t importVulkanMemoryObjectFromNTHandle(HANDLE handle, unsigned long long size, bool isDedicated) {
    cudaExternalMemory_t extMem = NULL;
    cudaExternalMemoryHandleDesc desc = {};
    memset(&desc, 0, sizeof(desc));
    desc.type = cudaExternalMemoryHandleTypeOpaqueWin32;
    desc.handle.win32.handle = handle;
    desc.size = size;
    if (isDedicated) {
        desc.flags |= cudaExternalMemoryDedicated;
    }
    cudaImportExternalMemory(&extMem, &desc);
    return extMem;
}

cudaExternalMemory_t importVulkanMemoryObjectFromNamedNTHandle(LPCWSTR name, unsigned long long size, bool isDedicated) {
    cudaExternalMemory_t extMem = NULL;
    cudaExternalMemoryHandleDesc desc = {};
    memset(&desc, 0, sizeof(desc));
    desc.type = cudaExternalMemoryHandleTypeOpaqueWin32;
    desc.handle.win32.name = (void *)name;
    desc.size = size;
    if (isDedicated) {
        desc.flags |= cudaExternalMemoryDedicated;
    }
    cudaImportExternalMemory(&extMem, &desc);
    return extMem;
}

cudaExternalMemory_t importVulkanMemoryObjectFromKmtHandle(HANDLE handle, unsigned long long size, bool isDedicated) {
    cudaExternalMemory_t extMem = NULL;
    cudaExternalMemoryHandleDesc desc = {};
    memset(&desc, 0, sizeof(desc));
    desc.type = cudaExternalMemoryHandleTypeOpaqueWin32Kmt;
    desc.handle.win32.handle = (void *)handle;
    desc.size = size;
    if (isDedicated) {
        desc.flags |= cudaExternalMemoryDedicated;
    }
    cudaImportExternalMemory(&extMem, &desc);
    return extMem;
}
A device pointer can be mapped onto an imported memory object as shown below. The offset and
size of the mapping must match that specified when creating the mapping using the corresponding
Vulkan API. All mapped device pointers must be freed using cudaFree().
void * mapBufferOntoExternalMemory(cudaExternalMemory_t extMem, unsigned long long offset, unsigned long long size) {
    void *ptr = NULL;
    cudaExternalMemoryBufferDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.offset = offset;
    desc.size = size;

    cudaExternalMemoryGetMappedBuffer(&ptr, extMem, &desc);

    // Note: 'ptr' must eventually be freed using cudaFree()
    return ptr;
}
A CUDA mipmapped array can be mapped onto an imported memory object as shown below. The
offset, dimensions, format and number of mip levels must match that specified when creating the
mapping using the corresponding Vulkan API. Additionally, if the mipmapped array is bound as a color
target in Vulkan, the flag cudaArrayColorAttachment must be set. All mapped mipmapped arrays
must be freed using cudaFreeMipmappedArray(). The following code sample shows how to convert
Vulkan parameters into the corresponding CUDA parameters when mapping mipmapped arrays onto
imported memory objects.
cudaMipmappedArray_t mapMipmappedArrayOntoExternalMemory(cudaExternalMemory_t extMem, unsigned long long offset, cudaChannelFormatDesc *formatDesc, cudaExtent *extent, unsigned int flags, unsigned int numLevels) {
    cudaMipmappedArray_t mipmap = NULL;
    cudaExternalMemoryMipmappedArrayDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.offset = offset;
    desc.formatDesc = *formatDesc;
    desc.extent = *extent;
    desc.flags = flags;
    desc.numLevels = numLevels;

    // Note: 'mipmap' must eventually be freed using cudaFreeMipmappedArray()
    cudaExternalMemoryGetMappedMipmappedArray(&mipmap, extMem, &desc);

    return mipmap;
}
cudaChannelFormatDesc getCudaChannelFormatDescForVulkanFormat(VkFormat format) {
    cudaChannelFormatDesc d;

    memset(&d, 0, sizeof(d));

    switch (format) {
    case VK_FORMAT_R8_UINT:             d.x = 8;  d.y = 0;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindUnsigned; break;
    case VK_FORMAT_R16G16B16A16_SINT:   d.x = 16; d.y = 16; d.z = 16; d.w = 16; d.f = cudaChannelFormatKindSigned;   break;
    case VK_FORMAT_R32_UINT:            d.x = 32; d.y = 0;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindUnsigned; break;
    case VK_FORMAT_R32G32B32A32_SINT:   d.x = 32; d.y = 32; d.z = 32; d.w = 32; d.f = cudaChannelFormatKindSigned;   break;
    case VK_FORMAT_R32G32B32A32_SFLOAT: d.x = 32; d.y = 32; d.z = 32; d.w = 32; d.f = cudaChannelFormatKindFloat;    break;
    default: assert(0);
    }

    return d;
}
cudaExtent getCudaExtentForVulkanExtent(VkExtent3D vkExt, uint32_t arrayLayers, VkImageViewType vkImageViewType) {
    cudaExtent e = { 0, 0, 0 };

    switch (vkImageViewType) {
    case VK_IMAGE_VIEW_TYPE_1D: e.width = vkExt.width; e.height = 0;            e.depth = 0;           break;
    case VK_IMAGE_VIEW_TYPE_2D: e.width = vkExt.width; e.height = vkExt.height; e.depth = 0;           break;
    case VK_IMAGE_VIEW_TYPE_3D: e.width = vkExt.width; e.height = vkExt.height; e.depth = vkExt.depth; break;
    default: assert(0);
    }

    return e;
}
unsigned int getCudaMipmappedArrayFlagsForVulkanImage(VkImageViewType vkImageViewType, VkImageUsageFlags vkImageUsageFlags, bool allowSurfaceLoadStore) {
    unsigned int flags = 0;

    switch (vkImageViewType) {
    case VK_IMAGE_VIEW_TYPE_CUBE: flags |= cudaArrayCubemap; break;
    default: break;
    }

    if (allowSurfaceLoadStore) {
        flags |= cudaArraySurfaceLoadStore;
    }

    return flags;
}
The following code samples show how a Vulkan semaphore object can be imported into CUDA using, respectively, a file descriptor, an NT handle, a named NT handle, and a globally shared D3DKMT handle.

cudaExternalSemaphore_t importVulkanSemaphoreObjectFromFileDescriptor(int fd) {
    cudaExternalSemaphore_t extSem = NULL;
    cudaExternalSemaphoreHandleDesc desc = {};
    memset(&desc, 0, sizeof(desc));
    desc.type = cudaExternalSemaphoreHandleTypeOpaqueFd;
    desc.handle.fd = fd;
    cudaImportExternalSemaphore(&extSem, &desc);
    // Input parameter 'fd' should not be used beyond this point as CUDA has assumed ownership of it
    return extSem;
}

cudaExternalSemaphore_t importVulkanSemaphoreObjectFromNTHandle(HANDLE handle) {
    cudaExternalSemaphore_t extSem = NULL;
    cudaExternalSemaphoreHandleDesc desc = {};
    memset(&desc, 0, sizeof(desc));
    desc.type = cudaExternalSemaphoreHandleTypeOpaqueWin32;
    desc.handle.win32.handle = handle;
    cudaImportExternalSemaphore(&extSem, &desc);
    return extSem;
}

cudaExternalSemaphore_t importVulkanSemaphoreObjectFromNamedNTHandle(LPCWSTR name) {
    cudaExternalSemaphore_t extSem = NULL;
    cudaExternalSemaphoreHandleDesc desc = {};
    memset(&desc, 0, sizeof(desc));
    desc.type = cudaExternalSemaphoreHandleTypeOpaqueWin32;
    desc.handle.win32.name = (void *)name;
    cudaImportExternalSemaphore(&extSem, &desc);
    return extSem;
}

cudaExternalSemaphore_t importVulkanSemaphoreObjectFromKmtHandle(HANDLE handle) {
    cudaExternalSemaphore_t extSem = NULL;
    cudaExternalSemaphoreHandleDesc desc = {};
    memset(&desc, 0, sizeof(desc));
    desc.type = cudaExternalSemaphoreHandleTypeOpaqueWin32Kmt;
    desc.handle.win32.handle = (void *)handle;
    cudaImportExternalSemaphore(&extSem, &desc);
    return extSem;
}
An imported Vulkan semaphore object can be signaled as shown below. Signaling such a semaphore
object sets it to the signaled state. The corresponding wait that waits on this signal must be issued in
Vulkan. Additionally, the wait that waits on this signal must be issued after this signal has been issued.
void signalExternalSemaphore(cudaExternalSemaphore_t extSem, cudaStream_t stream) {
    cudaExternalSemaphoreSignalParams params = {};

    memset(&params, 0, sizeof(params));

    cudaSignalExternalSemaphoresAsync(&extSem, &params, 1, stream);
}
An imported Vulkan semaphore object can be waited on as shown below. Waiting on such a semaphore
object waits until it reaches the signaled state and then resets it back to the unsignaled state. The
corresponding signal that this wait is waiting on must be issued in Vulkan. Additionally, the signal must
be issued before this wait can be issued.
void waitExternalSemaphore(cudaExternalSemaphore_t extSem, cudaStream_t stream) {
    cudaExternalSemaphoreWaitParams params = {};

    memset(&params, 0, sizeof(params));

    cudaWaitExternalSemaphoresAsync(&extSem, &params, 1, stream);
}
Traditional OpenGL-CUDA interop as outlined in OpenGL Interoperability works by CUDA directly con-
suming handles created in OpenGL. However, since OpenGL can also consume memory and synchro-
nization objects created in Vulkan, there exists an alternative approach to doing OpenGL-CUDA in-
terop. Essentially, memory and synchronization objects exported by Vulkan could be imported into
both, OpenGL and CUDA, and then used to coordinate memory accesses between OpenGL and CUDA.
Please refer to the following OpenGL extensions for further details on how to import memory and
synchronization objects exported by Vulkan:
▶ GL_EXT_memory_object
▶ GL_EXT_memory_object_fd
▶ GL_EXT_memory_object_win32
▶ GL_EXT_semaphore
▶ GL_EXT_semaphore_fd
▶ GL_EXT_semaphore_win32
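As a rough, non-normative sketch of the OpenGL side (assuming the GL_EXT_memory_object and GL_EXT_memory_object_fd entry points are loaded through a GL loader, and noting that each importer needs its own exported file descriptor because both CUDA and OpenGL take ownership of the descriptor they import):

// Sketch only: import a Vulkan-exported memory file descriptor into OpenGL
// and back a buffer's storage with it.
GLuint memObj = 0;
glCreateMemoryObjectsEXT(1, &memObj);
glImportMemoryFdEXT(memObj, size, GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);   // GL takes ownership of 'fd'

GLuint buffer = 0;
glCreateBuffers(1, &buffer);
glNamedBufferStorageMemEXT(buffer, size, memObj, 0);                   // bind storage at offset 0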
When importing memory and synchronization objects exported by Direct3D 12, they must be imported
and mapped on the same device as they were created on. The CUDA device that corresponds to the
Direct3D 12 device on which the objects were created can be determined by comparing the LUID of
a CUDA device with that of the Direct3D 12 device, as shown in the following code sample. Note that
the Direct3D 12 device must not be created on a linked node adapter. I.e. the node count as returned
by ID3D12Device::GetNodeCount must be 1.
int getCudaDeviceForD3D12Device(ID3D12Device *d3d12Device) {
    LUID d3d12Luid = d3d12Device->GetAdapterLuid();

    int cudaDeviceCount;
    cudaGetDeviceCount(&cudaDeviceCount);

    for (int cudaDevice = 0; cudaDevice < cudaDeviceCount; cudaDevice++) {
        cudaDeviceProp deviceProp;
        cudaGetDeviceProperties(&deviceProp, cudaDevice);
        char *cudaLuid = deviceProp.luid;

        if (!memcmp(&d3d12Luid.LowPart, cudaLuid, sizeof(d3d12Luid.LowPart)) &&
            !memcmp(&d3d12Luid.HighPart, cudaLuid + sizeof(d3d12Luid.LowPart), sizeof(d3d12Luid.HighPart))) {
            return cudaDevice;
        }
    }
    return cudaInvalidDeviceId;
}
A shareable Direct3D 12 heap memory object, created by setting the flag D3D12_HEAP_FLAG_SHARED
in the call to ID3D12Device::CreateHeap, can be imported into CUDA using the NT handle associ-
ated with that object as shown below. Note that it is the application’s responsibility to close the NT
handle when it is not required anymore. The NT handle holds a reference to the resource, so it must
be explicitly freed before the underlying memory can be freed.
cudaExternalMemory_t importD3D12HeapFromNTHandle(HANDLE handle, unsigned long long�
,→size) {
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalMemoryHandleTypeD3D12Heap;
desc.handle.win32.handle = (void *)handle;
desc.size = size;
cudaImportExternalMemory(&extMem, &desc);
return extMem;
}
A shareable Direct3D 12 heap memory object can also be imported using a named handle if one exists
as shown below.
cudaExternalMemory_t importD3D12HeapFromNamedNTHandle(LPCWSTR name, unsigned long�
,→long size) {
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalMemoryHandleTypeD3D12Heap;
desc.handle.win32.name = (void *)name;
desc.size = size;
cudaImportExternalMemory(&extMem, &desc);
return extMem;
}
A shareable Direct3D 12 committed resource, created by setting the flag D3D12_HEAP_FLAG_SHARED in the call to ID3D12Device::CreateCommittedResource, can be imported into CUDA using the NT handle associated with that object as shown below. When importing a Direct3D 12 committed resource, the flag cudaExternalMemoryDedicated must be set.

cudaExternalMemory_t importD3D12CommittedResourceFromNTHandle(HANDLE handle, unsigned long long size) {
    cudaExternalMemory_t extMem = NULL;
    cudaExternalMemoryHandleDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.type = cudaExternalMemoryHandleTypeD3D12Resource;
    desc.handle.win32.handle = (void *)handle;
    desc.size = size;
    desc.flags |= cudaExternalMemoryDedicated;

    cudaImportExternalMemory(&extMem, &desc);

    return extMem;
}
A shareable Direct3D 12 committed resource can also be imported using a named handle if one exists
as shown below.
cudaExternalMemory_t importD3D12CommittedResourceFromNamedNTHandle(LPCWSTR name,�
,→unsigned long long size) {
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalMemoryHandleTypeD3D12Resource;
desc.handle.win32.name = (void *)name;
desc.size = size;
desc.flags |= cudaExternalMemoryDedicated;
cudaImportExternalMemory(&extMem, &desc);
return extMem;
}
A device pointer can be mapped onto an imported memory object as shown below. The offset and
size of the mapping must match that specified when creating the mapping using the corresponding
Direct3D 12 API. All mapped device pointers must be freed using cudaFree().
void * mapBufferOntoExternalMemory(cudaExternalMemory_t extMem, unsigned long long offset, unsigned long long size) {
    void *ptr = NULL;
    cudaExternalMemoryBufferDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.offset = offset;
    desc.size = size;

    cudaExternalMemoryGetMappedBuffer(&ptr, extMem, &desc);

    // Note: 'ptr' must eventually be freed using cudaFree()
    return ptr;
}
A CUDA mipmapped array can be mapped onto an imported memory object as shown below. The
offset, dimensions, format and number of mip levels must match that specified when creating the
mapping using the corresponding Direct3D 12 API. Additionally, if the mipmapped array can be bound
as a render target in Direct3D 12, the flag cudaArrayColorAttachment must be set. All mapped
mipmapped arrays must be freed using cudaFreeMipmappedArray(). The following code sample
shows how to convert Direct3D 12 parameters into the corresponding CUDA parameters when mapping
mipmapped arrays onto imported memory objects.
cudaMipmappedArray_t mapMipmappedArrayOntoExternalMemory(cudaExternalMemory_t extMem, unsigned long long offset, cudaChannelFormatDesc *formatDesc, cudaExtent *extent, unsigned int flags, unsigned int numLevels) {
    cudaMipmappedArray_t mipmap = NULL;
    cudaExternalMemoryMipmappedArrayDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.offset = offset;
    desc.formatDesc = *formatDesc;
    desc.extent = *extent;
    desc.flags = flags;
    desc.numLevels = numLevels;

    // Note: 'mipmap' must eventually be freed using cudaFreeMipmappedArray()
    cudaExternalMemoryGetMappedMipmappedArray(&mipmap, extMem, &desc);

    return mipmap;
}
cudaChannelFormatDesc getCudaChannelFormatDescForDxgiFormat(DXGI_FORMAT dxgiFormat) {
    cudaChannelFormatDesc d;

    memset(&d, 0, sizeof(d));

    switch (dxgiFormat) {
    case DXGI_FORMAT_R8_UINT:            d.x = 8;  d.y = 0;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindUnsigned; break;
    case DXGI_FORMAT_R8_SINT:            d.x = 8;  d.y = 0;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindSigned;   break;
    case DXGI_FORMAT_R8G8_UINT:          d.x = 8;  d.y = 8;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindUnsigned; break;
    case DXGI_FORMAT_R8G8_SINT:          d.x = 8;  d.y = 8;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindSigned;   break;
    case DXGI_FORMAT_R8G8B8A8_UINT:      d.x = 8;  d.y = 8;  d.z = 8;  d.w = 8;  d.f = cudaChannelFormatKindUnsigned; break;
    case DXGI_FORMAT_R8G8B8A8_SINT:      d.x = 8;  d.y = 8;  d.z = 8;  d.w = 8;  d.f = cudaChannelFormatKindSigned;   break;
    case DXGI_FORMAT_R16_UINT:           d.x = 16; d.y = 0;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindUnsigned; break;
    case DXGI_FORMAT_R16_SINT:           d.x = 16; d.y = 0;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindSigned;   break;
    case DXGI_FORMAT_R16G16_UINT:        d.x = 16; d.y = 16; d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindUnsigned; break;
    case DXGI_FORMAT_R16G16_SINT:        d.x = 16; d.y = 16; d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindSigned;   break;
    case DXGI_FORMAT_R16G16B16A16_UINT:  d.x = 16; d.y = 16; d.z = 16; d.w = 16; d.f = cudaChannelFormatKindUnsigned; break;
    case DXGI_FORMAT_R16G16B16A16_SINT:  d.x = 16; d.y = 16; d.z = 16; d.w = 16; d.f = cudaChannelFormatKindSigned;   break;
    case DXGI_FORMAT_R32_UINT:           d.x = 32; d.y = 0;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindUnsigned; break;
    case DXGI_FORMAT_R32_SINT:           d.x = 32; d.y = 0;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindSigned;   break;
    case DXGI_FORMAT_R32_FLOAT:          d.x = 32; d.y = 0;  d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindFloat;    break;
    case DXGI_FORMAT_R32G32_UINT:        d.x = 32; d.y = 32; d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindUnsigned; break;
    case DXGI_FORMAT_R32G32_SINT:        d.x = 32; d.y = 32; d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindSigned;   break;
    case DXGI_FORMAT_R32G32_FLOAT:       d.x = 32; d.y = 32; d.z = 0;  d.w = 0;  d.f = cudaChannelFormatKindFloat;    break;
    case DXGI_FORMAT_R32G32B32A32_UINT:  d.x = 32; d.y = 32; d.z = 32; d.w = 32; d.f = cudaChannelFormatKindUnsigned; break;
    case DXGI_FORMAT_R32G32B32A32_SINT:  d.x = 32; d.y = 32; d.z = 32; d.w = 32; d.f = cudaChannelFormatKindSigned;   break;
    case DXGI_FORMAT_R32G32B32A32_FLOAT: d.x = 32; d.y = 32; d.z = 32; d.w = 32; d.f = cudaChannelFormatKindFloat;    break;
    default: assert(0);
    }

    return d;
}
cudaExtent getCudaExtentForD3D12Extent(UINT64 width, UINT height, UINT16 depthOrArraySize, D3D12_SRV_DIMENSION d3d12SRVDimension) {
    cudaExtent e = { 0, 0, 0 };

    switch (d3d12SRVDimension) {
    case D3D12_SRV_DIMENSION_TEXTURE1D: e.width = width; e.height = 0;      e.depth = 0;                break;
    case D3D12_SRV_DIMENSION_TEXTURE2D: e.width = width; e.height = height; e.depth = 0;                break;
    case D3D12_SRV_DIMENSION_TEXTURE3D: e.width = width; e.height = height; e.depth = depthOrArraySize; break;
    default: assert(0);
    }

    return e;
}
unsigned int getCudaMipmappedArrayFlagsForD3D12Resource(D3D12_SRV_DIMENSION d3d12SRVDimension, bool allowSurfaceLoadStore) {
    unsigned int flags = 0;

    switch (d3d12SRVDimension) {
    case D3D12_SRV_DIMENSION_TEXTURECUBE:      flags |= cudaArrayCubemap;                    break;
    case D3D12_SRV_DIMENSION_TEXTURECUBEARRAY: flags |= cudaArrayCubemap | cudaArrayLayered; break;
    default: break;
    }

    if (allowSurfaceLoadStore) {
        flags |= cudaArraySurfaceLoadStore;
    }

    return flags;
}
A shareable Direct3D 12 fence object, created by setting the flag D3D12_FENCE_FLAG_SHARED in the
call to ID3D12Device::CreateFence, can be imported into CUDA using the NT handle associated
with that object as shown below. Note that it is the application’s responsibility to close the handle when
it is not required anymore. The NT handle holds a reference to the resource, so it must be explicitly
freed before the underlying semaphore can be freed.
cudaExternalSemaphore_t importD3D12FenceFromNTHandle(HANDLE handle) {
cudaExternalSemaphore_t extSem = NULL;
cudaExternalSemaphoreHandleDesc desc = {};
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalSemaphoreHandleTypeD3D12Fence;
desc.handle.win32.handle = handle;
cudaImportExternalSemaphore(&extSem, &desc);
return extSem;
}
A shareable Direct3D 12 fence object can also be imported using a named handle if one exists as
shown below.
cudaExternalSemaphore_t importD3D12FenceFromNamedNTHandle(LPCWSTR name) {
cudaExternalSemaphore_t extSem = NULL;
cudaExternalSemaphoreHandleDesc desc = {};
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalSemaphoreHandleTypeD3D12Fence;
desc.handle.win32.name = (void *)name;
cudaImportExternalSemaphore(&extSem, &desc);
return extSem;
}
An imported Direct3D 12 fence object can be signaled as shown below. Signaling such a fence object
sets its value to the one specified. The corresponding wait that waits on this signal must be issued in
Direct3D 12. Additionally, the wait that waits on this signal must be issued after this signal has been
issued.
void signalExternalSemaphore(cudaExternalSemaphore_t extSem, unsigned long long value, cudaStream_t stream) {
    cudaExternalSemaphoreSignalParams params = {};

    memset(&params, 0, sizeof(params));

    params.params.fence.value = value;

    cudaSignalExternalSemaphoresAsync(&extSem, &params, 1, stream);
}
An imported Direct3D 12 fence object can be waited on as shown below. Waiting on such a fence
object waits until its value becomes greater than or equal to the specified value. The corresponding
signal that this wait is waiting on must be issued in Direct3D 12. Additionally, the signal must be issued
before this wait can be issued.
void waitExternalSemaphore(cudaExternalSemaphore_t extSem, unsigned long long value, cudaStream_t stream) {
    cudaExternalSemaphoreWaitParams params = {};

    memset(&params, 0, sizeof(params));

    params.params.fence.value = value;

    cudaWaitExternalSemaphoresAsync(&extSem, &params, 1, stream);
}
When importing memory and synchronization objects exported by Direct3D 11, they must be imported
and mapped on the same device as they were created on. The CUDA device that corresponds to the
Direct3D 11 device on which the objects were created can be determined by comparing the LUID of a
CUDA device with that of the Direct3D 11 device, as shown in the following code sample.
int getCudaDeviceForD3D11Device(ID3D11Device *d3d11Device) {
    IDXGIDevice *dxgiDevice;
    d3d11Device->QueryInterface(__uuidof(IDXGIDevice), (void **)&dxgiDevice);

    IDXGIAdapter *dxgiAdapter;
    dxgiDevice->GetAdapter(&dxgiAdapter);

    DXGI_ADAPTER_DESC dxgiAdapterDesc;
    dxgiAdapter->GetDesc(&dxgiAdapterDesc);

    LUID d3d11Luid = dxgiAdapterDesc.AdapterLuid;

    int cudaDeviceCount;
    cudaGetDeviceCount(&cudaDeviceCount);

    for (int cudaDevice = 0; cudaDevice < cudaDeviceCount; cudaDevice++) {
        cudaDeviceProp deviceProp;
        cudaGetDeviceProperties(&deviceProp, cudaDevice);
        char *cudaLuid = deviceProp.luid;

        if (!memcmp(&d3d11Luid.LowPart, cudaLuid, sizeof(d3d11Luid.LowPart)) &&
            !memcmp(&d3d11Luid.HighPart, cudaLuid + sizeof(d3d11Luid.LowPart), sizeof(d3d11Luid.HighPart))) {
            return cudaDevice;
        }
    }
    return cudaInvalidDeviceId;
}
A shareable Direct3D 11 resource can be imported into CUDA using the NT handle associated with that object as shown below. When importing a Direct3D 11 resource, the flag cudaExternalMemoryDedicated must be set.

cudaExternalMemory_t importD3D11ResourceFromNTHandle(HANDLE handle, unsigned long long size) {
    cudaExternalMemory_t extMem = NULL;
    cudaExternalMemoryHandleDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.type = cudaExternalMemoryHandleTypeD3D11Resource;
    desc.handle.win32.handle = (void *)handle;
    desc.size = size;
    desc.flags |= cudaExternalMemoryDedicated;

    cudaImportExternalMemory(&extMem, &desc);

    return extMem;
}
A shareable Direct3D 11 resource can also be imported using a named handle if one exists as shown
below.
cudaExternalMemory_t importD3D11ResourceFromNamedNTHandle(LPCWSTR name, unsigned long�
,→long size) {
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalMemoryHandleTypeD3D11Resource;
desc.handle.win32.name = (void *)name;
desc.size = size;
desc.flags |= cudaExternalMemoryDedicated;
cudaImportExternalMemory(&extMem, &desc);
return extMem;
}
A shareable Direct3D 11 resource can also be imported using the globally shared D3DKMT handle associated with that object as shown below.

cudaExternalMemory_t importD3D11ResourceFromKMTHandle(HANDLE handle, unsigned long long size) {
    cudaExternalMemory_t extMem = NULL;
    cudaExternalMemoryHandleDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.type = cudaExternalMemoryHandleTypeD3D11ResourceKmt;
    desc.handle.win32.handle = (void *)handle;
    desc.size = size;
    desc.flags |= cudaExternalMemoryDedicated;

    cudaImportExternalMemory(&extMem, &desc);

    return extMem;
}
A device pointer can be mapped onto an imported memory object as shown below. The offset and
size of the mapping must match that specified when creating the mapping using the corresponding
Direct3D 11 API. All mapped device pointers must be freed using cudaFree().
void * mapBufferOntoExternalMemory(cudaExternalMemory_t extMem, unsigned long long offset, unsigned long long size) {
    void *ptr = NULL;
    cudaExternalMemoryBufferDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.offset = offset;
    desc.size = size;

    cudaExternalMemoryGetMappedBuffer(&ptr, extMem, &desc);

    // Note: 'ptr' must eventually be freed using cudaFree()
    return ptr;
}
A CUDA mipmapped array can be mapped onto an imported memory object as shown below. The
offset, dimensions, format and number of mip levels must match that specified when creating the
mapping using the corresponding Direct3D 11 API. Additionally, if the mipmapped array can be bound
as a render target in Direct3D 11, the flag cudaArrayColorAttachment must be set. All mapped
mipmapped arrays must be freed using cudaFreeMipmappedArray(). The following code sample
shows how to convert Direct3D 11 parameters into the corresponding CUDA parameters when map-
ping mipmapped arrays onto imported memory objects.
cudaMipmappedArray_t mapMipmappedArrayOntoExternalMemory(cudaExternalMemory_t extMem, unsigned long long offset, cudaChannelFormatDesc *formatDesc, cudaExtent *extent, unsigned int flags, unsigned int numLevels) {
    cudaMipmappedArray_t mipmap = NULL;
    cudaExternalMemoryMipmappedArrayDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.offset = offset;
    desc.formatDesc = *formatDesc;
    desc.extent = *extent;
    desc.flags = flags;
    desc.numLevels = numLevels;

    // Note: 'mipmap' must eventually be freed using cudaFreeMipmappedArray()
    cudaExternalMemoryGetMappedMipmappedArray(&mipmap, extMem, &desc);

    return mipmap;
}

The conversion of DXGI formats into CUDA channel format descriptions is identical to the Direct3D 12 case shown earlier.
cudaExtent getCudaExtentForD3D11Extent(UINT64 width, UINT height, UINT16 depthOrArraySize, D3D11_SRV_DIMENSION d3d11SRVDimension) {
    cudaExtent e = { 0, 0, 0 };

    switch (d3d11SRVDimension) {
    case D3D11_SRV_DIMENSION_TEXTURE1D: e.width = width; e.height = 0;      e.depth = 0;                break;
    case D3D11_SRV_DIMENSION_TEXTURE2D: e.width = width; e.height = height; e.depth = 0;                break;
    case D3D11_SRV_DIMENSION_TEXTURE3D: e.width = width; e.height = height; e.depth = depthOrArraySize; break;
    default: assert(0);
    }

    return e;
}

unsigned int getCudaMipmappedArrayFlagsForD3D11Resource(D3D11_SRV_DIMENSION d3d11SRVDimension, bool allowSurfaceLoadStore) {
    unsigned int flags = 0;

    switch (d3d11SRVDimension) {
    case D3D11_SRV_DIMENSION_TEXTURECUBE:      flags |= cudaArrayCubemap;                    break;
    case D3D11_SRV_DIMENSION_TEXTURECUBEARRAY: flags |= cudaArrayCubemap | cudaArrayLayered; break;
    default: break;
    }

    if (allowSurfaceLoadStore) {
        flags |= cudaArraySurfaceLoadStore;
    }

    return flags;
}
A shareable Direct3D 11 fence object, created by setting the flag D3D11_FENCE_FLAG_SHARED in the
call to ID3D11Device5::CreateFence, can be imported into CUDA using the NT handle associated
with that object as shown below. Note that it is the application’s responsibility to close the handle when
it is not required anymore. The NT handle holds a reference to the resource, so it must be explicitly
freed before the underlying semaphore can be freed.
cudaExternalSemaphore_t importD3D11FenceFromNTHandle(HANDLE handle) {
cudaExternalSemaphore_t extSem = NULL;
cudaExternalSemaphoreHandleDesc desc = {};
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalSemaphoreHandleTypeD3D11Fence;
desc.handle.win32.handle = handle;
    cudaImportExternalSemaphore(&extSem, &desc);

    return extSem;
}
A shareable Direct3D 11 fence object can also be imported using a named handle if one exists as
shown below.
cudaExternalSemaphore_t importD3D11FenceFromNamedNTHandle(LPCWSTR name) {
cudaExternalSemaphore_t extSem = NULL;
cudaExternalSemaphoreHandleDesc desc = {};
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalSemaphoreHandleTypeD3D11Fence;
desc.handle.win32.name = (void *)name;
cudaImportExternalSemaphore(&extSem, &desc);
return extSem;
}
A shareable Direct3D 11 keyed mutex object associated with a shareable Direct3D 11 resource, viz,
IDXGIKeyedMutex, created by setting the flag D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX, can
be imported into CUDA using the NT handle associated with that object as shown below. Note that it
is the application’s responsibility to close the handle when it is not required anymore. The NT handle
holds a reference to the resource, so it must be explicitly freed before the underlying semaphore can
be freed.
cudaExternalSemaphore_t importD3D11KeyedMutexFromNTHandle(HANDLE handle) {
cudaExternalSemaphore_t extSem = NULL;
cudaExternalSemaphoreHandleDesc desc = {};
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalSemaphoreHandleTypeKeyedMutex;
desc.handle.win32.handle = handle;
cudaImportExternalSemaphore(&extSem, &desc);
return extSem;
}
A shareable Direct3D 11 keyed mutex object can also be imported using a named handle if one exists
as shown below.
cudaExternalSemaphore_t importD3D11KeyedMutexFromNamedNTHandle(LPCWSTR name) {
cudaExternalSemaphore_t extSem = NULL;
cudaExternalSemaphoreHandleDesc desc = {};
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalSemaphoreHandleTypeKeyedMutex;
    desc.handle.win32.name = (void *)name;

    cudaImportExternalSemaphore(&extSem, &desc);

    return extSem;
}
A shareable Direct3D 11 keyed mutex object can be imported into CUDA using the globally shared D3DKMT handle associated with that object as shown below. Since a globally shared D3DKMT handle does not hold a reference to the underlying semaphore, it is automatically destroyed when all other references to the resource are destroyed.
cudaExternalSemaphore_t importD3D11KeyedMutexFromKMTHandle(HANDLE handle) {
cudaExternalSemaphore_t extSem = NULL;
cudaExternalSemaphoreHandleDesc desc = {};
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalSemaphoreHandleTypeKeyedMutexKmt;
desc.handle.win32.handle = handle;
cudaImportExternalSemaphore(&extSem, &desc);
return extSem;
}
An imported Direct3D 11 fence object can be signaled as shown below. Signaling such a fence object
sets its value to the one specified. The corresponding wait that waits on this signal must be issued in
Direct3D 11. Additionally, the wait that waits on this signal must be issued after this signal has been
issued.
void signalExternalSemaphore(cudaExternalSemaphore_t extSem, unsigned long long value, cudaStream_t stream) {
    cudaExternalSemaphoreSignalParams params = {};

    memset(&params, 0, sizeof(params));

    params.params.fence.value = value;

    cudaSignalExternalSemaphoresAsync(&extSem, &params, 1, stream);
}
An imported Direct3D 11 fence object can be waited on as shown below. Waiting on such a fence
object waits until its value becomes greater than or equal to the specified value. The corresponding
signal that this wait is waiting on must be issued in Direct3D 11. Additionally, the signal must be issued
before this wait can be issued.
void waitExternalSemaphore(cudaExternalSemaphore_t extSem, unsigned long long value, cudaStream_t stream) {
    cudaExternalSemaphoreWaitParams params = {};

    memset(&params, 0, sizeof(params));

    params.params.fence.value = value;

    cudaWaitExternalSemaphoresAsync(&extSem, &params, 1, stream);
}
An imported Direct3D 11 keyed mutex object can be signaled as shown below. Signaling such a keyed
mutex object by specifying a key value releases the keyed mutex for that value. The corresponding
wait that waits on this signal must be issued in Direct3D 11 with the same key value. Additionally, the
Direct3D 11 wait must be issued after this signal has been issued.
void signalExternalSemaphore(cudaExternalSemaphore_t extSem, unsigned long long key, cudaStream_t stream) {
    cudaExternalSemaphoreSignalParams params = {};

    memset(&params, 0, sizeof(params));

    params.params.keyedMutex.key = key;

    cudaSignalExternalSemaphoresAsync(&extSem, &params, 1, stream);
}
An imported Direct3D 11 keyed mutex object can be waited on as shown below. A timeout value in
milliseconds is needed when waiting on such a keyed mutex. The wait operation waits until the keyed
mutex value is equal to the specified key value or until the timeout has elapsed. The timeout interval
can also be an infinite value. In case an infinite value is specified the timeout never elapses. The
windows INFINITE macro must be used to specify an infinite timeout. The corresponding signal that
this wait is waiting on must be issued in Direct3D 11. Additionally, the Direct3D 11 signal must be
issued before this wait can be issued.
void waitExternalSemaphore(cudaExternalSemaphore_t extSem, unsigned long long key, unsigned int timeoutMs, cudaStream_t stream) {
    cudaExternalSemaphoreWaitParams params = {};

    memset(&params, 0, sizeof(params));

    params.params.keyedMutex.key = key;
    params.params.keyedMutex.timeoutMs = timeoutMs;

    cudaWaitExternalSemaphoresAsync(&extSem, &params, 1, stream);
}
NvSciBuf and NvSciSync are interfaces developed for serving the following purposes:
▶ NvSciBuf: Allows applications to allocate and exchange buffers in memory
▶ NvSciSync: Allows applications to manage synchronization objects at operation boundaries
More details on these interfaces are available at: https://docs.nvidia.com/drive.
For allocating an NvSciBuf object compatible with a given CUDA device, the corresponding GPU id
must be set with NvSciBufGeneralAttrKey_GpuId in the NvSciBuf attribute list as shown below.
Optionally, applications can specify the following attributes -
▶ NvSciBufGeneralAttrKey_NeedCpuAccess: Specifies if CPU access is required for the buffer
▶ NvSciBufRawBufferAttrKey_Align: Specifies the alignment requirement of NvS-
ciBufType_RawBuffer
▶ NvSciBufGeneralAttrKey_RequiredPerm: Different access permissions can be configured
for different UMDs per NvSciBuf memory object instance. For example, to provide the GPU with
read-only access permissions to the buffer, create a duplicate NvSciBuf object using NvSciBu-
fObjDupWithReducePerm() with NvSciBufAccessPerm_Readonly as the input parameter.
Then import this newly created duplicate object with reduced permissions into CUDA as shown below.
▶ NvSciBufGeneralAttrKey_EnableGpuCache: To control GPU L2 cacheability
▶ NvSciBufGeneralAttrKey_EnableGpuCompression: To specify GPU compression
Note: For more details on these attributes and their valid input options, refer to NvSciBuf Documen-
tation.
∕∕ Fill in values
NvSciBufAttrKeyValuePair rawbuffattrs[] = {
{ NvSciBufGeneralAttrKey_Types, &bufType, sizeof(bufType) },
{ NvSciBufRawBufferAttrKey_Size, &rawsize, sizeof(rawsize) },
{ NvSciBufRawBufferAttrKey_Align, &align, sizeof(align) },
{ NvSciBufGeneralAttrKey_NeedCpuAccess, &cpuaccess_flag, sizeof(cpuaccess_
,→flag) },
};
NvSciBufAttrListCreate(NvSciBufModule, &attrListBuffer);
The allocated NvSciBuf memory object can be imported in CUDA using the NvSciBufObj handle as
shown below. The application should query the allocated NvSciBufObj for the attributes required for filling
CUDA External Memory Descriptor. Note that the attribute list and NvSciBuf objects should be main-
tained by the application. If the NvSciBuf object imported into CUDA is also mapped by other drivers,
then based on NvSciBufGeneralAttrKey_GpuSwNeedCacheCoherency output attribute value the
application must use NvSciSync objects (Refer Importing Synchronization Objects) as appropriate bar-
riers to maintain coherence between CUDA and the other drivers.
Note: For more details on how to allocate and maintain NvSciBuf objects refer to NvSciBuf API Doc-
umentation.
    // Note cache and compression are per GPU attributes, so read values for the specific
    // GPU by comparing UUIDs

    // Read cacheability granted by NvSciBuf
    int numGpus = bufattrs[1].len / sizeof(NvSciBufAttrValGpuCache);
    NvSciBufAttrValGpuCache *cacheVal = (NvSciBufAttrValGpuCache *)bufattrs[1].value;
    bool ret_cacheVal;
    for (int i = 0; i < numGpus; i++) {
        if (memcmp(gpuid[0].bytes, cacheVal[i].gpuId.bytes, sizeof(CUuuid)) == 0) {
            ret_cacheVal = cacheVal[i].cacheability;
        }
    }

    // Read compression type granted by NvSciBuf
    NvSciBufCompressionType ret_compVal;
    for (int i = 0; i < numGpus; i++) {
        if (memcmp(gpuid[0].bytes, compVal[i].gpuId.bytes, sizeof(CUuuid)) == 0) {
            ret_compVal = compVal[i].compressionType;
        }
    }
∕∕ Fill up CUDA_EXTERNAL_MEMORY_HANDLE_DESC
cudaExternalMemoryHandleDesc memHandleDesc;
memset(&memHandleDesc, 0, sizeof(memHandleDesc));
memHandleDesc.type = cudaExternalMemoryHandleTypeNvSciBuf;
memHandleDesc.handle.nvSciBufObject = bufferObjRaw;
∕∕ Set the NvSciBuf object with required access permissions in this step
memHandleDesc.handle.nvSciBufObject = bufferObjRo;
memHandleDesc.size = ret_size;
cudaImportExternalMemory(&extMemBuffer, &memHandleDesc);
return extMemBuffer;
}
A device pointer can be mapped onto an imported memory object as shown below. The offset and size
of the mapping can be filled as per the attributes of the allocated NvSciBufObj. All mapped device
pointers must be freed using cudaFree().
void * mapBufferOntoExternalMemory(cudaExternalMemory_t extMem, unsigned long long offset, unsigned long long size) {
    void *ptr = NULL;
    cudaExternalMemoryBufferDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.offset = offset;
    desc.size = size;

    cudaExternalMemoryGetMappedBuffer(&ptr, extMem, &desc);

    // Note: 'ptr' must eventually be freed using cudaFree()
    return ptr;
}
A CUDA mipmapped array can be mapped onto an imported memory object as shown below. The
offset, dimensions and format can be filled as per the attributes of the allocated NvSciBufObj. All
mapped mipmapped arrays must be freed using cudaFreeMipmappedArray(). The following code
sample shows how to convert NvSciBuf attributes into the corresponding CUDA parameters when
mapping mipmapped arrays onto imported memory objects.
cudaMipmappedArray_t mapMipmappedArrayOntoExternalMemory(cudaExternalMemory_t extMem, unsigned long long offset, cudaChannelFormatDesc *formatDesc, cudaExtent *extent, unsigned int flags, unsigned int numLevels) {
    cudaMipmappedArray_t mipmap = NULL;
    cudaExternalMemoryMipmappedArrayDesc desc = {};

    memset(&desc, 0, sizeof(desc));

    desc.offset = offset;
    desc.formatDesc = *formatDesc;
    desc.extent = *extent;
    desc.flags = flags;
    desc.numLevels = numLevels;

    // Note: 'mipmap' must eventually be freed using cudaFreeMipmappedArray()
    cudaExternalMemoryGetMappedMipmappedArray(&mipmap, extMem, &desc);

    return mipmap;
}
NvSciSync attributes that are compatible with a given CUDA device can be generated using cudaDeviceGetNvSciSyncAttributes(). The returned attribute list can be used to create an NvSciSyncObj that is guaranteed to be compatible with a given CUDA device.
NvSciSyncObj createNvSciSyncObject() {
    NvSciSyncObj nvSciSyncObj;
    int cudaDev0 = 0;
    int cudaDev1 = 1;
    NvSciSyncAttrList signalerAttrList = NULL;
    NvSciSyncAttrList waiterAttrList = NULL;
    NvSciSyncAttrList reconciledList = NULL;
    NvSciSyncAttrList newConflictList = NULL;

    NvSciSyncAttrListCreate(module, &signalerAttrList);
    NvSciSyncAttrListCreate(module, &waiterAttrList);
    cudaDeviceGetNvSciSyncAttributes(signalerAttrList, cudaDev0, cudaNvSciSyncAttrSignal);
    cudaDeviceGetNvSciSyncAttributes(waiterAttrList, cudaDev1, cudaNvSciSyncAttrWait);

    NvSciSyncAttrList unreconciledList[2] = {NULL, NULL};
    unreconciledList[0] = signalerAttrList;
    unreconciledList[1] = waiterAttrList;
    NvSciSyncAttrListReconcile(unreconciledList, 2, &reconciledList, &newConflictList);

    NvSciSyncObjAlloc(reconciledList, &nvSciSyncObj);

    return nvSciSyncObj;
}
An NvSciSync object (created as above) can be imported into CUDA using the NvSciSyncObj handle as
shown below. Note that ownership of the NvSciSyncObj handle continues to lie with the application
even after it is imported.
cudaExternalSemaphore_t importNvSciSyncObject(void* nvSciSyncObj) {
cudaExternalSemaphore_t extSem = NULL;
cudaExternalSemaphoreHandleDesc desc = {};
memset(&desc, 0, sizeof(desc));
desc.type = cudaExternalSemaphoreHandleTypeNvSciSync;
desc.handle.nvSciSyncObj = nvSciSyncObj;
cudaImportExternalSemaphore(&extSem, &desc);
return extSem;
}
An imported NvSciSyncObj object can be signaled as outlined below. Signaling an NvSciSync-backed semaphore object initializes the fence parameter passed as input. This fence parameter is waited upon by a wait operation that corresponds to the aforementioned signal. Additionally, the wait that waits on this signal must be issued after this signal has been issued. If the flags are set to cudaExternalSemaphoreSignalSkipNvSciBufMemSync, then the memory synchronization operations (over all the NvSciBuf objects imported in this process) that are executed as a part of the signal operation by default are skipped. This flag should be set when NvSciBufGeneralAttrKey_GpuSwNeedCacheCoherency is FALSE.
void signalExternalSemaphore(cudaExternalSemaphore_t extSem, cudaStream_t stream, void *fence) {
    cudaExternalSemaphoreSignalParams signalParams = {};

    memset(&signalParams, 0, sizeof(signalParams));

    signalParams.params.nvSciSync.fence = (void*)fence;
    signalParams.flags = 0; // OR cudaExternalSemaphoreSignalSkipNvSciBufMemSync

    cudaSignalExternalSemaphoresAsync(&extSem, &signalParams, 1, stream);
}
An imported NvSciSyncObj object can be waited upon as outlined below. Waiting on an NvSciSync-backed semaphore object waits until the input fence parameter is signaled by the corresponding signaler. Additionally, the signal must be issued before the wait can be issued. If the flags are set to cudaExternalSemaphoreWaitSkipNvSciBufMemSync, then the memory synchronization operations (over all the NvSciBuf objects imported in this process) that are executed as a part of the wait operation by default are skipped. This flag should be set when NvSciBufGeneralAttrKey_GpuSwNeedCacheCoherency is FALSE.
void waitExternalSemaphore(cudaExternalSemaphore_t extSem, cudaStream_t stream, void *fence) {
    cudaExternalSemaphoreWaitParams waitParams = {};

    memset(&waitParams, 0, sizeof(waitParams));

    waitParams.params.nvSciSync.fence = (void*)fence;
    waitParams.flags = 0; // OR cudaExternalSemaphoreWaitSkipNvSciBufMemSync

    cudaWaitExternalSemaphoresAsync(&extSem, &waitParams, 1, stream);
}
▶ All plug-ins and libraries used by an application must use the same version of any libraries that
use the runtime (such as cuFFT, cuBLAS, …) unless statically linking to those libraries.
Figure 25: The Driver API Is Backward but Not Forward Compatible
For Tesla GPU products, CUDA 10 introduced a new forward-compatible upgrade path for the user-
mode components of the CUDA Driver. This feature is described in CUDA Compatibility. The require-
ments on the CUDA Driver version described here apply to the version of the user-mode components.
Devices of compute capability 6.x and higher support Compute Preemption, which allows compute tasks to be preempted at instruction-level granularity rather than at the thread-block granularity of the prior Maxwell and Kepler GPU architectures, with the benefit that applications with long-running kernels can be prevented from either monopolizing the system or timing out. However, there are context switch overheads associated with Compute Preemption, which is automatically enabled on those devices for which support exists. The individual attribute query function cudaDeviceGetAttribute() with the attribute cudaDevAttrComputePreemptionSupported can be used to determine if the device in use supports Compute Preemption. Users wishing to avoid the context switch overheads associated with different processes can ensure that only one process is active on the GPU by selecting exclusive-process mode.
Applications may query the compute mode of a device by checking the computeMode device property
(see Device Enumeration).
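A minimal sketch of both queries might look like the following (error checking omitted; the device index is an illustrative assumption):

int device = 0;

// Query Compute Preemption support via the individual attribute API.
int preemptionSupported = 0;
cudaDeviceGetAttribute(&preemptionSupported,
                       cudaDevAttrComputePreemptionSupported, device);

// Query the compute mode via the device properties.
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, device);
bool exclusiveProcess = (prop.computeMode == cudaComputeModeExclusiveProcess);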
The NVIDIA GPU architecture is built around a scalable array of multithreaded Streaming Multipro-
cessors (SMs). When a CUDA program on the host CPU invokes a kernel grid, the blocks of the grid
are enumerated and distributed to multiprocessors with available execution capacity. The threads of a
thread block execute concurrently on one multiprocessor, and multiple thread blocks can execute con-
currently on one multiprocessor. As thread blocks terminate, new blocks are launched on the vacated
multiprocessors.
A multiprocessor is designed to execute hundreds of threads concurrently. To manage such a large
number of threads, it employs a unique architecture called SIMT (Single-Instruction, Multiple-Thread)
that is described in SIMT Architecture. The instructions are pipelined, leveraging instruction-level par-
allelism within a single thread, as well as extensive thread-level parallelism through simultaneous hard-
ware multithreading as detailed in Hardware Multithreading. Unlike on CPU cores, instructions are issued in order and there is no branch prediction or speculative execution.
SIMT Architecture and Hardware Multithreading describe the architecture features of the streaming
multiprocessor that are common to all devices. Compute Capability 5.x, Compute Capability 6.x, and
Compute Capability 7.x provide the specifics for devices of compute capabilities 5.x, 6.x, and 7.x re-
spectively.
The NVIDIA GPU architecture uses a little-endian representation.
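For instance, a trivial device-side check of the byte order could look like the sketch below; on a little-endian architecture the lowest-addressed byte of the 32-bit value is 0x04.

#include <cstdio>

__global__ void checkEndianness()
{
    unsigned int value = 0x01020304;
    const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&value);
    // Prints 0x04 on a little-endian architecture.
    printf("lowest-addressed byte: 0x%02x\n", bytes[0]);
}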
The SIMT architecture is akin to SIMD (Single Instruction, Multiple Data) vector organizations in that a
single instruction controls multiple processing elements. A key difference is that SIMD vector organi-
zations expose the SIMD width to the software, whereas SIMT instructions specify the execution and
branching behavior of a single thread. In contrast with SIMD vector machines, SIMT enables program-
mers to write thread-level parallel code for independent, scalar threads, as well as data-parallel code
for coordinated threads. For the purposes of correctness, the programmer can essentially ignore the
SIMT behavior; however, substantial performance improvements can be realized by taking care that
the code seldom requires threads in a warp to diverge. In practice, this is analogous to the role of cache
lines in traditional code: Cache line size can be safely ignored when designing for correctness but must
be considered in the code structure when designing for peak performance. Vector architectures, on
the other hand, require the software to coalesce loads into vectors and manage divergence manually.
Prior to NVIDIA Volta, warps used a single program counter shared amongst all 32 threads in the warp
together with an active mask specifying the active threads of the warp. As a result, threads from the
same warp in divergent regions or different states of execution cannot signal each other or exchange
data, and algorithms requiring fine-grained sharing of data guarded by locks or mutexes can easily
lead to deadlock, depending on which warp the contending threads come from.
Starting with the NVIDIA Volta architecture, Independent Thread Scheduling allows full concurrency
between threads, regardless of warp. With Independent Thread Scheduling, the GPU maintains ex-
ecution state per thread, including a program counter and call stack, and can yield execution at a
per-thread granularity, either to make better use of execution resources or to allow one thread to wait
for data to be produced by another. A schedule optimizer determines how to group active threads
from the same warp together into SIMT units. This retains the high throughput of SIMT execution
as in prior NVIDIA GPUs, but with much more flexibility: threads can now diverge and reconverge at
sub-warp granularity.
Independent Thread Scheduling can lead to a rather different set of threads participating in the ex-
ecuted code than intended if the developer made assumptions about warp-synchronicity7 of previ-
ous hardware architectures. In particular, any warp-synchronous code (such as synchronization-free,
intra-warp reductions) should be revisited to ensure compatibility with NVIDIA Volta and beyond. See
Compute Capability 7.x for further details.
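For example, a synchronization-free intra-warp reduction that relied on implicit warp-synchronicity can be made explicit with the *_sync primitives. The following is a sketch of that pattern, assuming all 32 lanes of the warp participate:

// Sketch: explicit warp-level reduction using __shfl_down_sync.
// 'val' holds each thread's partial sum; all 32 lanes are assumed active.
__device__ int warpReduceSum(int val)
{
    const unsigned FULL_MASK = 0xffffffffu;
    for (int offset = warpSize / 2; offset > 0; offset /= 2) {
        val += __shfl_down_sync(FULL_MASK, val, offset);
    }
    return val;  // lane 0 holds the warp's sum
}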
Note: The threads of a warp that are participating in the current instruction are called the active
threads, whereas threads not on the current instruction are inactive (disabled). Threads can be inactive
for a variety of reasons including having exited earlier than other threads of their warp, having taken a
different branch path than the branch path currently executed by the warp, or being the last threads
of a block whose number of threads is not a multiple of the warp size.
If a non-atomic instruction executed by a warp writes to the same location in global or shared memory
for more than one of the threads of the warp, the number of serialized writes that occur to that loca-
tion varies depending on the compute capability of the device (see Compute Capability 5.x, Compute
Capability 6.x, and Compute Capability 7.x), and which thread performs the final write is undefined.
If an atomic instruction executed by a warp reads, modifies, and writes to the same location in global
memory for more than one of the threads of the warp, each read/modify/write to that location occurs
and they are all serialized, but the order in which they occur is undefined.
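The difference can be illustrated with a sketch, assuming a single-warp launch (<<<1, 32>>>) and a zero-initialized atomicOut: with a plain store only one (unspecified) thread's value survives, whereas atomicAdd() serializes all 32 read-modify-write operations so every contribution is counted.

__global__ void sameLocationWrites(int* plainOut, int* atomicOut)
{
    // Non-atomic: all threads of the warp write the same location;
    // which thread's value is kept is undefined.
    *plainOut = threadIdx.x;

    // Atomic: all 32 read-modify-writes occur (in an undefined order),
    // so the final value is the sum 0 + 1 + ... + 31 = 496.
    atomicAdd(atomicOut, (int)threadIdx.x);
}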
7 The term warp-synchronous refers to code that implicitly assumes threads in the same warp are synchronized at every
instruction.
The most common reason a warp is not ready to execute its next instruction is that the instruction’s
input operands are not available yet.
If all input operands are registers, latency is caused by register dependencies, i.e., some of the input
operands are written by some previous instruction(s) whose execution has not completed yet. In this
case, the latency is equal to the execution time of the previous instruction and the warp schedulers
must schedule instructions of other warps during that time. Execution time varies depending on the
instruction. On devices of compute capability 7.x, for most arithmetic instructions, it is typically 4
clock cycles. This means that 16 active warps per multiprocessor (4 cycles, 4 warp schedulers) are
required to hide arithmetic instruction latencies (assuming that warps execute instructions with max-
imum throughput, otherwise fewer warps are needed). If the individual warps exhibit instruction-level
parallelism, i.e. have multiple independent instructions in their instruction stream, fewer warps are
needed because multiple independent instructions from a single warp can be issued back to back.
If some input operand resides in off-chip memory, the latency is much higher: typically hundreds of
clock cycles. The number of warps required to keep the warp schedulers busy during such high la-
tency periods depends on the kernel code and its degree of instruction-level parallelism. In general,
more warps are required if the ratio of the number of instructions with no off-chip memory operands
(i.e., arithmetic instructions most of the time) to the number of instructions with off-chip memory
operands is low (this ratio is commonly called the arithmetic intensity of the program).
Another reason a warp is not ready to execute its next instruction is that it is waiting at some memory
fence (Memory Fence Functions) or synchronization point (Synchronization Functions). A synchro-
nization point can force the multiprocessor to idle as more and more warps wait for other warps in the
same block to complete execution of instructions prior to the synchronization point. Having multiple
resident blocks per multiprocessor can help reduce idling in this case, as warps from different blocks
do not need to wait for each other at synchronization points.
The number of blocks and warps residing on each multiprocessor for a given kernel call depends
on the execution configuration of the call (Execution Configuration), the memory resources of the
multiprocessor, and the resource requirements of the kernel as described in Hardware Multithread-
ing. Register and shared memory usage are reported by the compiler when compiling with the
--ptxas-options=-v option.
The total amount of shared memory required for a block is equal to the sum of the amount of statically
allocated shared memory and the amount of dynamically allocated shared memory.
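For example, the sketch below (with illustrative names and sizes) requires 1024 bytes of statically allocated shared memory plus whatever dynamic amount is passed in the third execution configuration parameter:

__global__ void sharedMemUser(const float* in, float* out)
{
    __shared__ float staticBuf[256];        // 256 * 4 = 1024 bytes, statically allocated
    extern __shared__ float dynamicBuf[];   // sized by the launch configuration below

    staticBuf[threadIdx.x % 256] = in[threadIdx.x];
    dynamicBuf[threadIdx.x] = staticBuf[threadIdx.x % 256];
    __syncthreads();
    out[threadIdx.x] = dynamicBuf[threadIdx.x];
}

// Host side: total shared memory per block = 1024 + 128 * sizeof(float) bytes.
// sharedMemUser<<<gridSize, 128, 128 * sizeof(float)>>>(in, out);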
The number of registers used by a kernel can have a significant impact on the number of resident
warps. For example, for devices of compute capability 6.x, if a kernel uses 64 registers and each block
has 512 threads and requires very little shared memory, then two blocks (i.e., 32 warps) can reside
on the multiprocessor since they require 2x512x64 registers, which exactly matches the number of
registers available on the multiprocessor. But as soon as the kernel uses one more register, only one
block (i.e., 16 warps) can be resident since two blocks would require 2x512x65 registers, which are
more registers than are available on the multiprocessor. Therefore, the compiler attempts to min-
imize register usage while keeping register spilling (see Device Memory Accesses) and the number
of instructions to a minimum. Register usage can be controlled using the maxrregcount compiler
option, the __launch_bounds__() qualifier as described in Launch Bounds, or the __maxnreg__()
qualifier as described in Maximum Number of Registers per Thread.
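A sketch of the two per-kernel mechanisms is shown below; the bounds chosen here (256 threads per block, 2 blocks per multiprocessor, 64 registers) are illustrative only, and the file-wide -maxrregcount compiler option remains an alternative.

// Hint that this kernel is launched with at most 256 threads per block and
// that at least 2 resident blocks per multiprocessor are desired, which
// bounds the number of registers the compiler may use per thread.
__global__ void __launch_bounds__(256, 2) boundedKernel(float* data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    data[idx] *= 2.0f;
}

// Alternatively, cap the register count for a specific kernel directly.
__global__ void __maxnreg__(64) cappedKernel(float* data)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    data[idx] += 1.0f;
}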
The register file is organized as 32-bit registers. So, each variable stored in a register needs at least
one 32-bit register, for example, a double variable uses two 32-bit registers.
The effect of execution configuration on performance for a given kernel call generally depends on the
kernel code. Experimentation is therefore recommended. Applications can also parametrize execution
configurations based on register file size and shared memory size, which depends on the compute
capability of the device, as well as on the number of multiprocessors and memory bandwidth of the
device, all of which can be queried using the runtime (see reference manual).
The number of threads per block should be chosen as a multiple of the warp size to avoid wasting
computing resources with under-populated warps as much as possible.
Several API functions exist to assist programmers in choosing thread block size and cluster size based
on register and shared memory requirements.
▶ The occupancy calculator API, cudaOccupancyMaxActiveBlocksPerMultiprocessor, can
provide an occupancy prediction based on the block size and shared memory usage of a ker-
nel. This function reports occupancy in terms of the number of concurrent thread blocks per
multiprocessor.
▶ Note that this value can be converted to other metrics. Multiplying by the number of warps
per block yields the number of concurrent warps per multiprocessor; further dividing con-
current warps by max warps per multiprocessor gives the occupancy as a percentage.
▶ The occupancy-based launch configurator APIs, cudaOccupancyMaxPotentialBlockSize and
cudaOccupancyMaxPotentialBlockSizeVariableSMem, heuristically calculate an execution
configuration that achieves the maximum multiprocessor-level occupancy.
▶ The occupancy calculator API, cudaOccupancyMaxActiveClusters, can provide an occupancy
prediction based on the cluster size, block size, and shared memory usage of a kernel. This func-
tion reports occupancy in terms of the maximum number of active clusters of a given size on the
GPU present in the system.
The following code sample calculates the occupancy of MyKernel. It then reports the occupancy level
with the ratio between concurrent warps versus maximum warps per multiprocessor.
// Device code
__global__ void MyKernel(int *d, int *a, int *b)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    d[idx] = a[idx] * b[idx];
}

// Host code
int main()
{
    int numBlocks;        // Occupancy in terms of active blocks
    int blockSize = 32;

    // These variables are used to convert occupancy to warps
    int device;
    cudaDeviceProp prop;
    int activeWarps;
    int maxWarps;

    cudaGetDevice(&device);
    cudaGetDeviceProperties(&prop, device);

    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &numBlocks,
        MyKernel,
        blockSize,
        0);

    activeWarps = numBlocks * blockSize / prop.warpSize;
    maxWarps = prop.maxThreadsPerMultiProcessor / prop.warpSize;

    std::cout << "Occupancy: " << (double)activeWarps / maxWarps * 100 << "%" << std::endl;

    return 0;
}
The following code sample configures an occupancy-based kernel launch of MyKernel according to
the user input.
// Device code
__global__ void MyKernel(int *array, int arrayCount)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx < arrayCount) {
        array[idx] *= array[idx];
    }
}

// Host code
int launchMyKernel(int *array, int arrayCount)
{
    int blockSize;      // The launch configurator returned block size
    int minGridSize;    // The minimum grid size needed to achieve the
                        // maximum occupancy for a full device
                        // launch
    int gridSize;       // The actual grid size needed, based on input
                        // size

    cudaOccupancyMaxPotentialBlockSize(
        &minGridSize,
        &blockSize,
        (void*)MyKernel,
        0,
        arrayCount);

    // Round up according to array size
    gridSize = (arrayCount + blockSize - 1) / blockSize;

    MyKernel<<<gridSize, blockSize>>>(array, arrayCount);
    cudaDeviceSynchronize();

    return 0;
}
The following code sample shows how to use the cluster occupancy API to find the maximum number of active clusters of a given size. The example below calculates the occupancy for a cluster of size 2 with 128 threads per block.
A cluster size of 8 is forward-compatible starting with compute capability 9.0, except on GPU hardware or MIG configurations that are too small to support 8 multiprocessors, in which case the maximum cluster size is reduced. It is therefore recommended that users query the maximum cluster size before launching a cluster kernel. The maximum cluster size can be queried using the cudaOccupancyMaxPotentialClusterSize API.
{
cudaLaunchConfig_t config = {0};
config.gridDim = number_of_blocks;
config.blockDim = 128; // threads_per_block = 128
config.dynamicSmemBytes = dynamic_shared_memory_size;
cudaLaunchAttribute attribute[1];
attribute[0].id = cudaLaunchAttributeClusterDimension;
attribute[0].val.clusterDim.x = 2; // cluster_size = 2
attribute[0].val.clusterDim.y = 1;
attribute[0].val.clusterDim.z = 1;
config.attrs = attribute;
config.numAttrs = 1;
int max_cluster_size = 0;
cudaOccupancyMaxPotentialClusterSize(&max_cluster_size, (void *)kernel, &config);
int max_active_clusters = 0;
cudaOccupancyMaxActiveClusters(&max_active_clusters, (void *)kernel, &config);
std::cout << "Max Active Clusters of size 2: " << max_active_clusters << std::endl;
}
The CUDA Nsight Compute User Interface also provides a standalone occupancy calculator and launch
configurator implementation in <CUDA_Toolkit_Path>/include/cuda_occupancy.h for any use
cases that cannot depend on the CUDA software stack. The Nsight Compute version of the occu-
pancy calculator is particularly useful as a learning tool that visualizes the impact of changes to the
parameters that affect occupancy (block size, registers per thread, and shared memory per thread).
▶ Synchronize again if necessary to make sure that shared memory has been updated with the
results,
▶ Write the results back to device memory.
For some applications (for example, for which global memory access patterns are data-dependent),
a traditional hardware-managed cache is more appropriate to exploit data locality. As mentioned in
Compute Capability 7.x, Compute Capability 8.x and Compute Capability 9.0, for devices of compute
capability 7.x, 8.x and 9.0, the same on-chip memory is used for both L1 and shared memory, and how
much of it is dedicated to L1 versus shared memory is configurable for each kernel call.
The throughput of memory accesses by a kernel can vary by an order of magnitude depending on ac-
cess pattern for each type of memory. The next step in maximizing memory throughput is therefore
to organize memory accesses as optimally as possible based on the optimal memory access patterns
described in Device Memory Accesses. This optimization is especially important for global memory
accesses as global memory bandwidth is low compared to available on-chip bandwidths and arith-
metic instruction throughput, so non-optimal global memory accesses generally have a high impact
on performance.
For structures, the size and alignment requirements can be enforced by the compiler using the alignment specifiers __align__(8) or __align__(16), such as
struct __align__(16) {
    float x;
    float y;
    float z;
};
Any address of a variable residing in global memory or returned by one of the memory allocation rou-
tines from the driver or runtime API is always aligned to at least 256 bytes.
Reading non-naturally aligned 8-byte or 16-byte words produces incorrect results (off by a few words),
so special care must be taken to maintain alignment of the starting address of any value or array of
values of these types. A typical case where this might be easily overlooked is when using some cus-
tom global memory allocation scheme, whereby the allocations of multiple arrays (with multiple calls
to cudaMalloc() or cuMemAlloc()) is replaced by the allocation of a single large block of memory
partitioned into multiple arrays, in which case the starting address of each array is offset from the
block’s starting address.
Two-Dimensional Arrays
A common global memory access pattern is when each thread of index (tx,ty) uses the following
address to access one element of a 2D array of width width, located at address BaseAddress of type
type* (where type meets the requirement described in Maximize Utilization):
BaseAddress + width * ty + tx
For these accesses to be fully coalesced, both the width of the thread block and the width of the array
must be a multiple of the warp size.
In particular, this means that an array whose width is not a multiple of this size will be accessed much
more efficiently if it is actually allocated with a width rounded up to the closest multiple of this size
and its rows padded accordingly. The cudaMallocPitch() and cuMemAllocPitch() functions and
associated memory copy functions described in the reference manual enable programmers to write
non-hardware-dependent code to allocate arrays that conform to these constraints.
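As an illustration of the padded-allocation pattern above, here is a minimal sketch (not taken from the
reference manual; the kernel and variable names are chosen for this example) that allocates a pitched 2D
array of floats with cudaMallocPitch() and indexes it row by row inside a kernel:

// Hypothetical example: a 2D array whose logical width is not a multiple of the
// warp size, allocated with a padded (pitched) row stride.
__global__ void addOne(float* devPtr, size_t pitch, int width, int height)
{
    int tx = blockIdx.x * blockDim.x + threadIdx.x;
    int ty = blockIdx.y * blockDim.y + threadIdx.y;
    if (tx < width && ty < height) {
        // Each row starts at a pitch-aligned offset, which keeps accesses coalesced.
        float* row = (float*)((char*)devPtr + ty * pitch);
        row[tx] += 1.0f;
    }
}

int main()
{
    int width = 1000, height = 1000;   // width is not a multiple of the warp size
    float* devPtr;
    size_t pitch;
    cudaMallocPitch(&devPtr, &pitch, width * sizeof(float), height);

    dim3 block(32, 8);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    addOne<<<grid, block>>>(devPtr, pitch, width, height);
    cudaDeviceSynchronize();

    cudaFree(devPtr);
    return 0;
}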
Local Memory
Local memory accesses only occur for some automatic variables as mentioned in Variable Memory
Space Specifiers. Automatic variables that the compiler is likely to place in local memory are:
▶ Arrays for which it cannot determine that they are indexed with constant quantities,
▶ Large structures or arrays that would consume too much register space,
▶ Any variable if the kernel uses more registers than available (this is also known as register spilling).
Inspection of the PTX assembly code (obtained by compiling with the -ptx or -keep option) will tell if a
variable has been placed in local memory during the first compilation phases, as it will be declared using
the .local mnemonic and accessed using the ld.local and st.local mnemonics. Even if it has not,
subsequent compilation phases might still decide otherwise if they find it consumes too much
register space for the targeted architecture: inspection of the cubin object using cuobjdump will tell if
this is the case. Also, the compiler reports total local memory usage per kernel (lmem) when compiling
with the --ptxas-options=-v option. Note that some mathematical functions have implementation
paths that might access local memory.
The local memory space resides in device memory, so local memory accesses have the same high
latency and low bandwidth as global memory accesses and are subject to the same requirements for
memory coalescing as described in Device Memory Accesses. Local memory is however organized
such that consecutive 32-bit words are accessed by consecutive thread IDs. Accesses are therefore
fully coalesced as long as all threads in a warp access the same relative address (for example, same
index in an array variable, same member in a structure variable).
On devices of compute capability 5.x onwards, local memory accesses are always cached in L2 in the
same way as global memory accesses (see Compute Capability 5.x and Compute Capability 6.x).
Shared Memory
Because it is on-chip, shared memory has much higher bandwidth and much lower latency than local
or global memory.
To achieve high bandwidth, shared memory is divided into equally-sized memory modules, called banks,
which can be accessed simultaneously. Any memory read or write request made of n addresses that
fall in n distinct memory banks can therefore be serviced simultaneously, yielding an overall bandwidth
that is n times as high as the bandwidth of a single module.
However, if two addresses of a memory request fall in the same memory bank, there is a bank conflict
and the access has to be serialized. The hardware splits a memory request with bank conflicts into
as many separate conflict-free requests as necessary, decreasing throughput by a factor equal to the
number of separate memory requests. If the number of separate memory requests is n, the initial
memory request is said to cause n-way bank conflicts.
To get maximum performance, it is therefore important to understand how memory addresses map
to memory banks in order to schedule the memory requests so as to minimize bank conflicts. This
is described in Compute Capability 5.x, Compute Capability 6.x, Compute Capability 7.x, Compute Ca-
pability 8.x, and Compute Capability 9.0 for devices of compute capability 5.x, 6.x, 7.x, 8.x, and 9.0
respectively.
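As an illustration (a minimal sketch that is not part of the reference text), padding a shared memory tile
by one extra column is a common way to avoid bank conflicts when the threads of a warp read a column of
the tile; TILE_DIM and the kernel name are placeholders, and the kernel assumes a 32x32 thread block:

#define TILE_DIM 32

__global__ void transposeTile(float* out, const float* in, int n)
{
    // The +1 column of padding shifts each row by one bank, so the column read
    // below maps to 32 different banks instead of causing a 32-way conflict.
    __shared__ float tile[TILE_DIM][TILE_DIM + 1];

    int x = blockIdx.x * TILE_DIM + threadIdx.x;
    int y = blockIdx.y * TILE_DIM + threadIdx.y;
    if (x < n && y < n)
        tile[threadIdx.y][threadIdx.x] = in[y * n + x];
    __syncthreads();

    int tx = blockIdx.y * TILE_DIM + threadIdx.x;   // transposed block coordinates
    int ty = blockIdx.x * TILE_DIM + threadIdx.y;
    if (tx < n && ty < n)
        out[ty * n + tx] = tile[threadIdx.x][threadIdx.y];
}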
Constant Memory
The constant memory space resides in device memory and is cached in the constant cache.
A request is then split into as many separate requests as there are different memory addresses in the
initial request, decreasing throughput by a factor equal to the number of separate requests.
The resulting requests are then serviced at the throughput of the constant cache in case of a cache
hit, or at the throughput of device memory otherwise.
Texture and Surface Memory
The texture and surface memory spaces reside in device memory and are cached in texture cache, so
a texture fetch or surface read costs one memory read from device memory only on a cache miss,
otherwise it just costs one read from texture cache. The texture cache is optimized for 2D spatial
locality, so threads of the same warp that read texture or surface addresses that are close together in
2D will achieve best performance. Also, it is designed for streaming fetches with a constant latency;
a cache hit reduces DRAM bandwidth demand but not fetch latency.
Reading device memory through texture or surface fetching presents some benefits that can make it
an advantageous alternative to reading device memory from global or constant memory:
▶ If the memory reads do not follow the access patterns that global or constant memory reads
must follow to get good performance, higher bandwidth can be achieved providing that there is
locality in the texture fetches or surface reads;
▶ Addressing calculations are performed outside the kernel by dedicated units;
▶ Packed data may be broadcast to separate variables in a single operation;
▶ 8-bit and 16-bit integer input data may be optionally converted to 32-bit floating-point values in
the range [0.0, 1.0] or [-1.0, 1.0] (see Texture Memory).
Other instructions and functions are implemented on top of the native instructions. The implemen-
tation may be different for devices of different compute capabilities, and the number of native in-
structions after compilation may fluctuate with every compiler version. For complicated functions,
there can be multiple code paths depending on input. cuobjdump can be used to inspect a particular
implementation in a cubin object.
The implementations of some functions are readily available in the CUDA header files
(math_functions.h, device_functions.h, …).
In general, code compiled with -ftz=true (denormalized numbers are flushed to zero) tends to
have higher performance than code compiled with -ftz=false. Similarly, code compiled with
-prec-div=false (less precise division) tends to have higher performance code than code compiled
with -prec-div=true, and code compiled with -prec-sqrt=false (less precise square root) tends
to have higher performance than code compiled with -prec-sqrt=true. The nvcc user manual de-
scribes these compilation flags in more details.
Single-Precision Floating-Point Division
__fdividef(x, y) (see Intrinsic Functions) provides faster single-precision floating-point division
than the division operator.
Single-Precision Floating-Point Reciprocal Square Root
To preserve IEEE-754 semantics the compiler can optimize 1.0∕sqrtf() into rsqrtf() only
when both reciprocal and square root are approximate, (i.e., with -prec-div=false and
-prec-sqrt=false). It is therefore recommended to invoke rsqrtf() directly where desired.
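For example (a minimal sketch, not from the reference text), a device function computing an inverse norm
can call rsqrtf() directly:

__device__ float inv_norm(float x, float y, float z)
{
    // Call rsqrtf() directly instead of writing 1.0f / sqrtf(...), which the
    // compiler may only contract under -prec-div=false and -prec-sqrt=false.
    return rsqrtf(x * x + y * y + z * z);
}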
Single-Precision Floating-Point Square Root
Single-precision floating-point square root is implemented as a reciprocal square root followed by a
reciprocal instead of a reciprocal square root followed by a multiplication so that it gives correct results
for 0 and infinity.
Sine and Cosine
▶ If an application cannot allocate enough device memory, consider falling back on other memory
types such as cudaMallocHost or cudaMallocManaged, which may not be as performant, but
will enable the application to make progress.
▶ For platforms that support the feature, cudaMallocManaged allows for oversubscription, and
with the correct cudaMemAdvise policies enabled, will allow the application to retain most, if not
all, of the performance of cudaMalloc. cudaMallocManaged also won't force an allocation to be
resident until it is needed or prefetched, reducing the overall pressure on the operating system
schedulers and better enabling multi-tenant use cases.
10.1.1. __global__
The __global__ execution space specifier declares a function as being a kernel. Such a function is:
▶ Executed on the device,
▶ Callable from the host,
▶ Callable from the device for devices of compute capability 5.0 or higher (see CUDA Dynamic
Parallelism for more details).
A __global__ function must have void return type, and cannot be a member of a class.
Any call to a __global__ function must specify its execution configuration as described in Execution
Configuration.
A call to a __global__ function is asynchronous, meaning it returns before the device has completed
its execution.
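As an illustration (a minimal sketch; the kernel and variable names are placeholders and are not part of
the reference text), a __global__ function and its asynchronous launch look as follows:

__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

// Host code:
// scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n); // returns immediately
// cudaDeviceSynchronize();                          // wait for the kernel to finish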
10.1.2. __device__
The __device__ execution space specifier declares a function that is:
▶ Executed on the device,
▶ Callable from the device only.
The __global__ and __device__ execution space specifiers cannot be used together.
10.1.3. __host__
The __host__ execution space specifier declares a function that is:
▶ Executed on the host,
▶ Callable from the host only.
It is equivalent to declare a function with only the __host__ execution space specifier or to declare it
without any of the __host__, __device__, or __global__ execution space specifier; in either case
the function is compiled for the host only.
The __global__ and __host__ execution space specifiers cannot be used together.
The __device__ and __host__ execution space specifiers can be used together however, in which
case the function is compiled for both the host and the device. The __CUDA_ARCH__ macro introduced
in Application Compatibility can be used to differentiate code paths between host and device:
__host__ __device__ void func()
{
#if __CUDA_ARCH__ >= 800
    // Device code path for compute capability 8.x
#elif __CUDA_ARCH__ >= 700
    // Device code path for compute capability 7.x
#elif __CUDA_ARCH__ >= 600
    // Device code path for compute capability 6.x
#elif __CUDA_ARCH__ >= 500
    // Device code path for compute capability 5.x
#elif !defined(__CUDA_ARCH__)
    // Host code path
#endif
}
10.1.6. __inline_hint__
The __inline_hint__ qualifier enables more aggressive inlining in the compiler. Unlike __forcein-
line__, it does not imply that the function is inline. It can be used to improve inlining across modules
when using LTO.
Neither the __noinline__ nor the __forceinline__ function qualifier can be used with the __in-
line_hint__ function qualifier.
10.2.1. __device__
The __device__ memory space specifier declares a variable that resides on the device.
At most one of the other memory space specifiers defined in the next three sections may be used
together with __device__ to further denote which memory space the variable belongs to. If none of
them is present, the variable:
▶ Resides in global memory space,
▶ Has the lifetime of the CUDA context in which it is created,
▶ Has a distinct object per device,
▶ Is accessible from all the threads within the grid and from the host through the runtime library
(cudaGetSymbolAddress() / cudaGetSymbolSize() / cudaMemcpyToSymbol() / cudaMem-
cpyFromSymbol()).
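For example (a minimal sketch, not from the reference text; the variable and kernel names are
placeholders), a __device__ variable can be updated from a kernel and accessed from the host through
the symbol APIs listed above:

__device__ int counter;

__global__ void increment()
{
    atomicAdd(&counter, 1);
}

void host_side()
{
    int zero = 0, result = 0;
    cudaMemcpyToSymbol(counter, &zero, sizeof(int));     // initialize from the host
    increment<<<1, 32>>>();
    cudaMemcpyFromSymbol(&result, counter, sizeof(int)); // result == 32
}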
10.2.2. __constant__
The __constant__ memory space specifier, optionally used together with __device__, declares a
variable that:
▶ Resides in constant memory space,
▶ Has the lifetime of the CUDA context in which it is created,
▶ Has a distinct object per device,
▶ Is accessible from all the threads within the grid and from the host through the runtime library
(cudaGetSymbolAddress() / cudaGetSymbolSize() / cudaMemcpyToSymbol() / cudaMem-
cpyFromSymbol()).
10.2.3. __shared__
The __shared__ memory space specifier, optionally used together with __device__, declares a vari-
able that:
▶ Resides in the shared memory space of a thread block,
▶ Has the lifetime of the block,
▶ Has a distinct object per block,
▶ Is only accessible from all the threads within the block,
▶ Does not have a constant address.
When declaring a variable in shared memory as an external array such as
extern __shared__ float shared[];
the size of the array is determined at launch time (see Execution Configuration). All variables declared
in this fashion, start at the same address in memory, so that the layout of the variables in the array
must be explicitly managed through offsets. For example, if one wants the equivalent of
short array0[128];
float array1[64];
int array2[256];
in dynamically allocated shared memory, one could declare and initialize the arrays the following way:
extern __shared__ float array[];
__device__ void func()      // __device__ or __global__ function
{
    short* array0 = (short*)array;
    float* array1 = (float*)&array0[128];
    int*   array2 = (int*)&array1[64];
}
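As a usage note (an assumption for illustration, not from the reference text; myKernel is a hypothetical
kernel that calls func()): the dynamic shared memory size passed in the execution configuration must
cover all three arrays:

// 128 shorts (256 bytes) + 64 floats (256 bytes) + 256 ints (1024 bytes)
size_t smemBytes = 128 * sizeof(short) + 64 * sizeof(float) + 256 * sizeof(int);
// myKernel<<<gridDim, blockDim, smemBytes>>>(...);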
Note that pointers need to be aligned to the type they point to, so the following code, for example,
does not work since array1 is not aligned to 4 bytes.
extern __shared__ float array[];
__device__ void func()      // __device__ or __global__ function
{
    short* array0 = (short*)array;
    float* array1 = (float*)&array0[127];
}
Alignment requirements for the built-in vector types are listed in Table 5.
10.2.4. __grid_constant__
The __grid_constant__ annotation for compute architectures greater or equal to 7.0 annotates a
const-qualified __global__ function parameter of non-reference type that:
▶ Has the lifetime of the grid,
▶ Is private to the grid, i.e., the object is not accessible to host threads and threads from other
grids, including sub-grids,
▶ Has a distinct object per grid, i.e., all threads in the grid see the same address,
▶ Is read-only, i.e., modifying a __grid_constant__ object or any of its sub-objects is undefined
behavior, including mutable members.
Requirements:
▶ Kernel parameters annotated with __grid_constant__ must have const-qualified non-
reference types.
▶ All function declarations must match with respect to any __grid_constant__ parameters.
▶ A function template specialization must match the primary template declaration with respect to
any __grid_constant__ parameters.
▶ A function template instantiation directive must match the primary template declaration with
respect to any __grid_constant__ parameters.
If the address of a __global__ function parameter is taken, the compiler will ordinarily make a copy of
the kernel parameter in thread local memory and use the address of the copy, to partially support C++
semantics, which allow each thread to modify its own local copy of function parameters. Annotating a
__global__ function parameter with __grid_constant__ ensures that the compiler will not create
a copy of the kernel parameter in thread local memory, but will instead use the generic address of the
parameter itself. Avoiding the local copy may result in improved performance.
__device__ void unknown_function(S const&);

__global__ void kernel(const __grid_constant__ S s) {
    s.x += threadIdx.x;   // Undefined Behavior: tried to modify read-only memory

    // The compiler will not create a per-thread local copy of "s":
    unknown_function(s);
}
10.2.5. __managed__
The __managed__ memory space specifier, optionally used together with __device__, declares a
variable that:
▶ Can be referenced from both device and host code, for example, its address can be taken or it
can be read or written directly from a device or host function.
▶ Has the lifetime of an application.
See __managed__ Memory Space Specifier for more details.
10.2.6. __restrict__
nvcc supports restricted pointers via the __restrict__ keyword.
Restricted pointers were introduced in C99 to alleviate the aliasing problem that exists in C-type lan-
guages, and which inhibits all kind of optimization from code re-ordering to common sub-expression
elimination.
Here is an example subject to the aliasing issue, where use of restricted pointer can help the compiler
to reduce the number of instructions:
void foo(const float* a,
const float* b,
float* c)
{
c[0] = a[0] * b[0];
c[1] = a[0] * b[0];
c[2] = a[0] * b[0] * a[1];
c[3] = a[0] * a[1];
c[4] = a[0] * b[0];
c[5] = b[0];
...
}
In C-type languages, the pointers a, b, and c may be aliased, so any write through c could modify
elements of a or b. This means that to guarantee functional correctness, the compiler cannot load
a[0] and b[0] into registers, multiply them, and store the result to both c[0] and c[1], because
the results would differ from the abstract execution model if, say, a[0] is really the same location as
c[0]. So the compiler cannot take advantage of the common sub-expression. Likewise, the compiler
cannot just reorder the computation of c[4] into the proximity of the computation of c[0] and c[1]
because the preceding write to c[3] could change the inputs to the computation of c[4].
By making a, b, and c restricted pointers, the programmer asserts to the compiler that the pointers
are in fact not aliased, which in this case means writes through c would never overwrite elements of a
or b. This changes the function prototype as follows:
void foo(const float* __restrict__ a,
const float* __restrict__ b,
float* __restrict__ c);
Note that all pointer arguments need to be made restricted for the compiler optimizer to derive any
benefit. With the __restrict__ keywords added, the compiler can now reorder and do common
sub-expression elimination at will, while retaining functionality identical with the abstract execution
model:
void foo(const float* __restrict__ a,
const float* __restrict__ b,
float* __restrict__ c)
{
float t0 = a[0];
float t1 = b[0];
float t2 = t0 * t1;
float t3 = a[1];
c[0] = t2;
c[1] = t2;
c[4] = t2;
c[2] = t2 * t3;
c[3] = t0 * t3;
c[5] = t1;
...
}
The effects here are a reduced number of memory accesses and reduced number of computa-
tions. This is balanced by an increase in register pressure due to “cached” loads and common sub-
expressions.
Since register pressure is a critical issue in many CUDA codes, use of restricted pointers can have
negative performance impact on CUDA code, due to reduced occupancy.
10.3.2. dim3
This type is an integer vector type based on uint3 that is used to specify dimensions. When defining
a variable of type dim3, any component left unspecified is initialized to 1.
10.4.1. gridDim
This variable is of type dim3 (see dim3) and contains the dimensions of the grid.
10.4.2. blockIdx
This variable is of type uint3 (see char, short, int, long, longlong, float, double) and contains the block
index within the grid.
10.4.3. blockDim
This variable is of type dim3 (see dim3) and contains the dimensions of the block.
10.4.4. threadIdx
This variable is of type uint3 (see char, short, int, long, longlong, float, double) and contains the thread
index within the block.
10.4.5. warpSize
This variable is of type int and contains the warp size in threads (see SIMT Architecture for the defi-
nition of a warp).
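As an illustration (a minimal sketch, not part of the reference text), the built-in variables are typically
combined to compute a global index and a grid-wide stride:

__global__ void copy(const float* in, float* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global index of this thread
    int stride = gridDim.x * blockDim.x;             // total number of threads in the grid
    for (; i < n; i += stride)                       // grid-stride loop
        out[i] = in[i];
}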
The two threads read and write from the same memory locations X and Y simultaneously. Any data-
race is undefined behavior, and has no defined semantics. The resulting values for A and B can be
anything.
Memory fence functions can be used to enforce a sequentially-consistent ordering on memory ac-
cesses. The memory fence functions differ in the scope in which the orderings are enforced but they
are independent of the accessed memory space (shared memory, global memory, page-locked host
memory, and the memory of a peer device).
void __threadfence();
is equivalent to cuda::atomic_thread_fence(cuda::memory_order_seq_cst,
cuda::thread_scope_device) and ensures that no writes to all memory made by the calling thread
after the call to __threadfence() are observed by any thread in the device as occurring before any
write to all memory made by the calling thread before the call to __threadfence().
void __threadfence_system();
is equivalent to cuda::atomic_thread_fence(cuda::memory_order_seq_cst,
cuda::thread_scope_system) and ensures that all writes to all memory made by the calling thread
before the call to __threadfence_system() are observed by all threads in the device, host threads,
and all threads in peer devices as occurring before all writes to all memory made by the calling thread
after the call to __threadfence_system().
__threadfence_system() is only supported by devices of compute capability 2.x and higher.
In the previous code sample, we can insert fences in the code as follows:

__device__ int X = 1, Y = 2;

__device__ void writeXY()
{
    X = 10;
    __threadfence();
    Y = 20;
}

__device__ void readXY()
{
    int A, B;
    B = Y;
    __threadfence();
    A = X;
}

void __syncthreads();
waits until all threads in the thread block have reached this point and all global and shared memory
accesses made by these threads prior to __syncthreads() are visible to all threads in the block.
__syncthreads() is used to coordinate communication between the threads of the same block.
When some threads within a block access the same addresses in shared or global memory, there are
potential read-after-write, write-after-read, or write-after-write hazards for some of these memory
accesses. These data hazards can be avoided by synchronizing threads in-between these accesses.
__syncthreads() is allowed in conditional code but only if the conditional evaluates identically across
the entire thread block, otherwise the code execution is likely to hang or produce unintended side
effects.
Devices of compute capability 2.x and higher support three variations of __syncthreads() described
below.
int __syncthreads_count(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate for all threads
of the block and returns the number of threads for which predicate evaluates to non-zero.
int __syncthreads_and(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate for all threads
of the block and returns non-zero if and only if predicate evaluates to non-zero for all of them.
int __syncthreads_or(int predicate);
is identical to __syncthreads() with the additional feature that it evaluates predicate for all threads
of the block and returns non-zero if and only if predicate evaluates to non-zero for any of them.
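For example (a minimal sketch, not from the reference text; it assumes the grid exactly covers the data
array), __syncthreads_count() can be used to count, per block, how many threads satisfy a predicate:

__global__ void countAbove(const float* data, float threshold, int* blockCounts)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int pred = data[i] > threshold;
    int count = __syncthreads_count(pred);   // same value is returned to every thread
    if (threadIdx.x == 0)
        blockCounts[blockIdx.x] = count;
}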
void __syncwarp(unsigned mask=0xffffffff);
will cause the executing thread to wait until all warp lanes named in mask have executed a
__syncwarp() (with the same mask) before resuming execution. Each calling thread must have its
own bit set in the mask and all non-exited threads named in mask must execute a corresponding
__syncwarp() with the same mask, or the result is undefined.
Executing __syncwarp() guarantees memory ordering among threads participating in the barrier.
Thus, threads within a warp that wish to communicate via memory can store to memory, execute
__syncwarp(), and then safely read values stored by other threads in the warp.
Note: For .target sm_6x or below, all threads in mask must execute the same __syncwarp() in
convergence, and the union of all values in mask must be equal to the active mask. Otherwise, the
behavior is undefined.
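As an illustration (a minimal sketch, not from the reference text; it assumes the kernel is launched with a
single warp, that is, 32 threads per block), threads can exchange data through shared memory with only
warp-level synchronization:

__global__ void warpShift(int* out)
{
    __shared__ int buf[32];
    unsigned lane = threadIdx.x % warpSize;

    buf[lane] = threadIdx.x;
    __syncwarp();                                  // make the stores visible within the warp
    out[threadIdx.x] = buf[(lane + 1) % warpSize]; // read the neighboring lane's value
}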
template<class T>
T tex1Dfetch(cudaTextureObject_t texObj, int x);
fetches from the region of linear memory specified by the one-dimensional texture object texObj
using integer texture coordinate x. tex1Dfetch() only works with non-normalized coordinates, so
only the border and clamp addressing modes are supported. It does not perform any texture filtering.
For integer types, it may optionally promote the integer to single-precision floating point.
10.8.1.2 tex1D()
template<class T>
T tex1D(cudaTextureObject_t texObj, float x);
fetches from the CUDA array specified by the one-dimensional texture object texObj using texture
coordinate x.
10.8.1.3 tex1DLod()
template<class T>
T tex1DLod(cudaTextureObject_t texObj, float x, float level);
fetches from the CUDA array specified by the one-dimensional texture object texObj using texture
coordinate x at the level-of-detail level.
10.8.1.4 tex1DGrad()
template<class T>
T tex1DGrad(cudaTextureObject_t texObj, float x, float dx, float dy);
fetches from the CUDA array specified by the one-dimensional texture object texObj using texture
coordinate x. The level-of-detail is derived from the X-gradient dx and Y-gradient dy.
10.8.1.5 tex2D()
template<class T>
T tex2D(cudaTextureObject_t texObj, float x, float y);
fetches from the CUDA array or the region of linear memory specified by the two-dimensional texture
object texObj using texture coordinate (x,y).
template<class T>
T tex2D(cudaTextureObject_t texObj, float x, float y, bool* isResident);
fetches from the CUDA array specified by the two-dimensional texture object texObj using texture
coordinate (x,y). Also returns whether the texel is resident in memory via isResident pointer. If
not, the values fetched will be zeros.
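As a usage sketch (not part of the reference text; it assumes texObj was created over a float CUDA array
with normalized coordinates disabled), each thread can read one texel with tex2D():

__global__ void readTexture(cudaTextureObject_t texObj, float* out,
                            int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        out[y * width + x] = tex2D<float>(texObj, x + 0.5f, y + 0.5f); // sample texel centers
}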
10.8.1.7 tex2Dgather()
template<class T>
T tex2Dgather(cudaTextureObject_t texObj,
float x, float y, int comp = 0);
fetches from the CUDA array specified by the 2D texture object texObj using texture coordinates x
and y and the comp parameter as described in Texture Gather.
template<class T>
T tex2Dgather(cudaTextureObject_t texObj,
float x, float y, bool* isResident, int comp = 0);
fetches from the CUDA array specified by the 2D texture object texObj using texture coordinates
x and y and the comp parameter as described in Texture Gather. Also returns whether the texel is
resident in memory via isResident pointer. If not, the values fetched will be zeros.
10.8.1.9 tex2DGrad()
template<class T>
T tex2DGrad(cudaTextureObject_t texObj, float x, float y,
float2 dx, float2 dy);
fetches from the CUDA array specified by the two-dimensional texture object texObj using texture
coordinate (x,y). The level-of-detail is derived from the dx and dy gradients.
template<class T>
T tex2DGrad(cudaTextureObject_t texObj, float x, float y,
float2 dx, float2 dy, bool* isResident);
fetches from the CUDA array specified by the two-dimensional texture object texObj using texture
coordinate (x,y). The level-of-detail is derived from the dx and dy gradients. Also returns whether
the texel is resident in memory via isResident pointer. If not, the values fetched will be zeros.
10.8.1.11 tex2DLod()
template<class T>
T tex2DLod(cudaTextureObject_t texObj, float x, float y, float level);
fetches from the CUDA array or the region of linear memory specified by the two-dimensional texture
object texObj using texture coordinate (x,y) at level-of-detail level.
template<class T>
T tex2DLod(cudaTextureObject_t texObj, float x, float y, float level, bool* isResident);
fetches from the CUDA array specified by the two-dimensional texture object texObj using texture
coordinate (x,y) at level-of-detail level. Also returns whether the texel is resident in memory via
isResident pointer. If not, the values fetched will be zeros.
10.8.1.13 tex3D()
template<class T>
T tex3D(cudaTextureObject_t texObj, float x, float y, float z);
fetches from the CUDA array specified by the three-dimensional texture object texObj using texture
coordinate (x,y,z).
template<class T>
T tex3D(cudaTextureObject_t texObj, float x, float y, float z, bool* isResident);
fetches from the CUDA array specified by the three-dimensional texture object texObj using texture
coordinate (x,y,z). Also returns whether the texel is resident in memory via isResident pointer. If
not, the values fetched will be zeros.
10.8.1.15 tex3DLod()
template<class T>
T tex3DLod(cudaTextureObject_t texObj, float x, float y, float z, float level);
fetches from the CUDA array or the region of linear memory specified by the three-dimensional texture
object texObj using texture coordinate (x,y,z) at level-of-detail level.
template<class T>
T tex3DLod(cudaTextureObject_t texObj, float x, float y, float z, float level, bool* isResident);
fetches from the CUDA array or the region of linear memory specified by the three-dimensional texture
object texObj using texture coordinate (x,y,z) at level-of-detail level. Also returns whether the
texel is resident in memory via isResident pointer. If not, the values fetched will be zeros.
10.8.1.17 tex3DGrad()
template<class T>
T tex3DGrad(cudaTextureObject_t texObj, float x, float y, float z,
float4 dx, float4 dy);
fetches from the CUDA array specified by the three-dimensional texture object texObj using texture
coordinate (x,y,z) at a level-of-detail derived from the X and Y gradients dx and dy.
template<class T>
T tex3DGrad(cudaTextureObject_t texObj, float x, float y, float z,
float4 dx, float4 dy, bool* isResident);
fetches from the CUDA array specified by the three-dimensional texture object texObj using texture
coordinate (x,y,z) at a level-of-detail derived from the X and Y gradients dx and dy. Also returns
whether the texel is resident in memory via isResident pointer. If not, the values fetched will be
zeros.
10.8.1.19 tex1DLayered()
template<class T>
T tex1DLayered(cudaTextureObject_t texObj, float x, int layer);
fetches from the CUDA array specified by the one-dimensional texture object texObj using texture
coordinate x and index layer, as described in Layered Textures
10.8.1.20 tex1DLayeredLod()
template<class T>
T tex1DLayeredLod(cudaTextureObject_t texObj, float x, int layer, float level);
fetches from the CUDA array specified by the one-dimensional layered texture at layer layer using
texture coordinate x and level-of-detail level.
10.8.1.21 tex1DLayeredGrad()
template<class T>
T tex1DLayeredGrad(cudaTextureObject_t texObj, float x, int layer,
float dx, float dy);
fetches from the CUDA array specified by the one-dimensional layered texture at layer layer using
texture coordinate x and a level-of-detail derived from the dx and dy gradients.
10.8.1.22 tex2DLayered()
template<class T>
T tex2DLayered(cudaTextureObject_t texObj,
float x, float y, int layer);
fetches from the CUDA array specified by the two-dimensional texture object texObj using texture
coordinate (x,y) and index layer, as described in Layered Textures.
template<class T>
T tex2DLayered(cudaTextureObject_t texObj,
float x, float y, int layer, bool* isResident);
fetches from the CUDA array specified by the two-dimensional texture object texObj using texture
coordinate (x,y) and index layer, as described in Layered Textures. Also returns whether the texel
is resident in memory via isResident pointer. If not, the values fetched will be zeros.
10.8.1.24 tex2DLayeredLod()
template<class T>
T tex2DLayeredLod(cudaTextureObject_t texObj, float x, float y, int layer,
float level);
fetches from the CUDA array specified by the two-dimensional layered texture at layer layer using
texture coordinate (x,y).
template<class T>
T tex2DLayeredLod(cudaTextureObject_t texObj, float x, float y, int layer,
float level, bool* isResident);
fetches from the CUDA array specified by the two-dimensional layered texture at layer layer us-
ing texture coordinate (x,y). Also returns whether the texel is resident in memory via isResident
pointer. If not, the values fetched will be zeros.
10.8.1.26 tex2DLayeredGrad()
template<class T>
T tex2DLayeredGrad(cudaTextureObject_t texObj, float x, float y, int layer,
float2 dx, float2 dy);
fetches from the CUDA array specified by the two-dimensional layered texture at layer layer using
texture coordinate (x,y) and a level-of-detail derived from the dx and dy gradients.
template<class T>
T tex2DLayeredGrad(cudaTextureObject_t texObj, float x, float y, int layer,
float2 dx, float2 dy, bool* isResident);
fetches from the CUDA array specified by the two-dimensional layered texture at layer layer using
texture coordinate (x,y) and a level-of-detail derived from the dx and dy gradients. Also returns
whether the texel is resident in memory via isResident pointer. If not, the values fetched will be
zeros.
10.8.1.28 texCubemap()
template<class T>
T texCubemap(cudaTextureObject_t texObj, float x, float y, float z);
fetches the CUDA array specified by the cubemap texture object texObj using texture coordinate
(x,y,z), as described in Cubemap Textures.
10.8.1.29 texCubemapGrad()
template<class T>
T texCubemapGrad(cudaTextureObject_t texObj, float x, float y, float z,
float4 dx, float4 dy);
fetches from the CUDA array specified by the cubemap texture object texObj using texture coordi-
nate (x,y,z) as described in Cubemap Textures. The level-of-detail used is derived from the dx and
dy gradients.
10.8.1.30 texCubemapLod()
template<class T>
T texCubemapLod(cudaTextureObject_t texObj, float x, float y, float z,
float level);
fetches from the CUDA array specified by the cubemap texture object texObj using texture coordi-
nate (x,y,z) as described in Cubemap Textures. The level-of-detail used is given by level.
10.8.1.31 texCubemapLayered()
template<class T>
T texCubemapLayered(cudaTextureObject_t texObj,
float x, float y, float z, int layer);
fetches from the CUDA array specified by the cubemap layered texture object texObj using texture
coordinates (x,y,z), and index layer, as described in Cubemap Layered Textures.
10.8.1.32 texCubemapLayeredGrad()
template<class T>
T texCubemapLayeredGrad(cudaTextureObject_t texObj, float x, float y, float z,
int layer, float4 dx, float4 dy);
fetches from the CUDA array specified by the cubemap layered texture object texObj using texture
coordinate (x,y,z) and index layer, as described in Cubemap Layered Textures, at level-of-detail
derived from the dx and dy gradients.
10.8.1.33 texCubemapLayeredLod()
template<class T>
T texCubemapLayeredLod(cudaTextureObject_t texObj, float x, float y, float z,
int layer, float level);
fetches from the CUDA array specified by the cubemap layered texture object texObj using texture
coordinate (x,y,z) and index layer, as described in Cubemap Layered Textures, at level-of-detail
level level.
template<class T>
T surf1Dread(cudaSurfaceObject_t surfObj, int x,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the one-dimensional surface object surfObj using byte coordinate
x.
10.9.1.2 surf1Dwrite
template<class T>
void surf1Dwrite(T data,
cudaSurfaceObject_t surfObj,
int x,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the one-dimensional surface object surfObj at byte
coordinate x.
10.9.1.3 surf2Dread()
template<class T>
T surf2Dread(cudaSurfaceObject_t surfObj,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
template<class T>
void surf2Dread(T* data,
cudaSurfaceObject_t surfObj,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the two-dimensional surface object surfObj using byte coordinates
x and y.
10.9.1.4 surf2Dwrite()
template<class T>
void surf2Dwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int y,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the two-dimensional surface object surfObj at byte
coordinate x and y.
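As a usage sketch (not part of the reference text; it assumes both surface objects were created over 2D
float CUDA arrays), a kernel can copy data between two surfaces with surf2Dread() and surf2Dwrite();
note that the x coordinate is expressed in bytes:

__global__ void copySurface(cudaSurfaceObject_t in, cudaSurfaceObject_t out,
                            int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        float v = surf2Dread<float>(in, x * sizeof(float), y);   // byte coordinate in x
        surf2Dwrite(v, out, x * sizeof(float), y);
    }
}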
10.9.1.5 surf3Dread()
template<class T>
T surf3Dread(cudaSurfaceObject_t surfObj,
int x, int y, int z,
boundaryMode = cudaBoundaryModeTrap);
template<class T>
void surf3Dread(T* data,
cudaSurfaceObject_t surfObj,
int x, int y, int z,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the three-dimensional surface object surfObj using byte coordi-
nates x, y, and z.
10.9.1.6 surf3Dwrite()
template<class T>
void surf3Dwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int z,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the three-dimensional object surfObj at byte coor-
dinate x, y, and z.
10.9.1.7 surf1DLayeredread()
template<class T>
T surf1DLayeredread(
cudaSurfaceObject_t surfObj,
int x, int layer,
boundaryMode = cudaBoundaryModeTrap);
template<class T>
void surf1DLayeredread(T* data,
cudaSurfaceObject_t surfObj,
int x, int layer,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the one-dimensional layered surface object surfObj using byte
coordinate x and index layer.
10.9.1.8 surf1DLayeredwrite()
template<class T>
void surf1DLayeredwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int layer,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the one-dimensional layered surface object surfObj
at byte coordinate x and index layer.
10.9.1.9 surf2DLayeredread()
template<class T>
T surf2DLayeredread(
cudaSurfaceObject_t surfObj,
int x, int y, int layer,
boundaryMode = cudaBoundaryModeTrap);
template<class T>
void surf2DLayeredread(T* data,
cudaSurfaceObject_t surfObj,
int x, int y, int layer,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the two-dimensional layered surface object surfObj using byte
coordinate x and y, and index layer.
10.9.1.10 surf2DLayeredwrite()
template<class T>
void surf2DLayeredwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int layer,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the two-dimensional layered surface object surfObj
at byte coordinate x and y, and index layer.
10.9.1.11 surfCubemapread()
template<class T>
T surfCubemapread(
cudaSurfaceObject_t surfObj,
int x, int y, int face,
boundaryMode = cudaBoundaryModeTrap);
template<class T>
void surfCubemapread(T* data,
cudaSurfaceObject_t surfObj,
int x, int y, int face,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the cubemap surface object surfObj using byte coordinate x and
y, and face index face.
10.9.1.12 surfCubemapwrite()
template<class T>
void surfCubemapwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int face,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the cubemap object surfObj at byte coordinate x
and y, and face index face.
10.9.1.13 surfCubemapLayeredread()
template<class T>
T surfCubemapLayeredread(
cudaSurfaceObject_t surfObj,
int x, int y, int layerFace,
boundaryMode = cudaBoundaryModeTrap);
template<class T>
void surfCubemapLayeredread(T* data,
cudaSurfaceObject_t surfObj,
int x, int y, int layerFace,
boundaryMode = cudaBoundaryModeTrap);
reads the CUDA array specified by the cubemap layered surface object surfObj using byte coordinate
x and y, and index layerFace.
10.9.1.14 surfCubemapLayeredwrite()
template<class T>
void surfCubemapLayeredwrite(T data,
cudaSurfaceObject_t surfObj,
int x, int y, int layerFace,
boundaryMode = cudaBoundaryModeTrap);
writes value data to the CUDA array specified by the cubemap layered object surfObj at byte coordi-
nate x and y, and index layerFace.
returns the data of type T located at address address, where T is char, signed char, short, int, long,
long long, unsigned char, unsigned short, unsigned int, unsigned long, unsigned long long, char2, char4,
short2, short4, int2, int4, longlong2, uchar2, uchar4, ushort2, ushort4, uint2, uint4, ulonglong2, float,
float2, float4, double, or double2. With the cuda_fp16.h header included, T can be __half or __half2.
Similarly, with the cuda_bf16.h header included, T can also be __nv_bfloat16 or __nv_bfloat162. The
operation is cached in the read-only data cache (see Global Memory).

returns the data of type T located at address address, where T is char, signed char, short, int, long,
long long, unsigned char, unsigned short, unsigned int, unsigned long, unsigned long long, char2, char4,
short2, short4, int2, int4, longlong2, uchar2, uchar4, ushort2, ushort4, uint2, uint4, ulonglong2, float,
float2, float4, double, or double2. With the cuda_fp16.h header included, T can be __half or __half2.
Similarly, with the cuda_bf16.h header included, T can also be __nv_bfloat16 or __nv_bfloat162. The
operation uses the corresponding cache operator (see PTX ISA).

stores the value argument of type T to the location at address address, where T is char, signed char,
short, int, long, long long, unsigned char, unsigned short, unsigned int, unsigned long, unsigned long
long, char2, char4, short2, short4, int2, int4, longlong2, uchar2, uchar4, ushort2, ushort4, uint2, uint4,
ulonglong2, float, float2, float4, double, or double2. With the cuda_fp16.h header included, T can be
__half or __half2. Similarly, with the cuda_bf16.h header included, T can also be __nv_bfloat16 or
__nv_bfloat162. The operation uses the corresponding cache operator (see PTX ISA).
when executed in device code, returns the value of a per-multiprocessor counter that is incremented
every clock cycle. Sampling this counter at the beginning and at the end of a kernel, taking the dif-
ference of the two samples, and recording the result per thread provides a measure for each thread
of the number of clock cycles taken by the device to completely execute the thread, but not of the
number of clock cycles the device actually spent executing thread instructions. The former number
is greater than the latter since threads are time sliced.
__global__ void mykernel(int *addr) {
    atomicAdd_system(addr, 10);       // only available on devices with compute capability 6.x
}

void foo() {
    int *addr;
    cudaMallocManaged(&addr, 4);
    *addr = 0;

    mykernel<<<...>>>(addr);
    __sync_fetch_and_add(addr, 10);   // CPU atomic operation
}
Note that any atomic operation can be implemented based on atomicCAS() (Compare And Swap). For
example, atomicAdd() for double-precision floating-point numbers is not available on devices with
compute capability lower than 6.0 but it can be implemented as follows:
#if __CUDA_ARCH__ < 600
__device__ double atomicAdd(double* address, double val)
{
unsigned long long int* address_as_ull =
(unsigned long long int*)address;
unsigned long long int old = *address_as_ull, assumed;
do {
assumed = old;
old = atomicCAS(address_as_ull, assumed,
__double_as_longlong(val +
__longlong_as_double(assumed)));
// Note: uses integer comparison to avoid hang in case of NaN (since NaN != NaN)
} while (assumed != old);
return __longlong_as_double(old);
}
#endif
There are system-wide and block-wide variants of the following device-wide atomic APIs, with the
following exceptions:
▶ Devices with compute capability less than 6.0 only support device-wide atomic operations,
▶ Tegra devices with compute capability less than 7.2 do not support system-wide atomic opera-
tions.
reads the 16-bit, 32-bit or 64-bit old located at the address address in global or shared memory,
computes (old + val), and stores the result back to memory at the same address. These three
operations are performed in one atomic transaction. The function returns old.
The 32-bit floating-point version of atomicAdd() is only supported by devices of compute capability
2.x and higher.
The 64-bit floating-point version of atomicAdd() is only supported by devices of compute capability
6.x and higher.
The 32-bit __half2 floating-point version of atomicAdd() is only supported by devices of compute
capability 6.x and higher. The atomicity of the __half2 or __nv_bfloat162 add operation is guar-
anteed separately for each of the two __half or __nv_bfloat16 elements; the entire __half2 or
__nv_bfloat162 is not guaranteed to be atomic as a single 32-bit access.
The float2 and float4 floating-point vector versions of atomicAdd() are only supported by devices
of compute capability 9.x and higher. The atomicity of the float2 or float4 add operation is guar-
anteed separately for each of the two or four float elements; the entire float2 or float4 is not
guaranteed to be atomic as a single 64-bit or 128-bit access.
The 16-bit __half floating-point version of atomicAdd() is only supported by devices of compute
capability 7.x and higher.
The 16-bit __nv_bfloat16 floating-point version of atomicAdd() is only supported by devices of
compute capability 8.x and higher.
The float2 and float4 floating-point vector versions of atomicAdd() are only supported by devices
of compute capability 9.x and higher.
The float2 and float4 floating-point vector versions of atomicAdd() are only supported for global
memory addresses.
10.14.1.2 atomicSub()
reads the 32-bit word old located at the address address in global or shared memory, computes
(old - val), and stores the result back to memory at the same address. These three operations are
performed in one atomic transaction. The function returns old.
10.14.1.3 atomicExch()
reads the 32-bit or 64-bit word old located at the address address in global or shared memory and
stores val back to memory at the same address. These two operations are performed in one atomic
transaction. The function returns old.
template<typename T> T atomicExch(T* address, T val);
reads the 128-bit word old located at the address address in global or shared memory and stores val
back to memory at the same address. These two operations are performed in one atomic transaction.
The function returns old. The type T must meet the following requirements:
sizeof(T) == 16
alignof(T) >= 16
std::is_trivially_copyable<T>::value == true
// for C++03 and older
std::is_default_constructible<T>::value == true
So, T must be 128-bit and properly aligned, be trivially copyable, and on C++03 or older, it must also be
default constructible.
The 128-bit atomicExch() is only supported by devices of compute capability 9.x and higher.
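As an illustration (a minimal sketch, not from the reference text; it assumes compilation for and execution
on a device of compute capability 9.0 or higher), a 16-byte aligned, trivially copyable type can be used
with the 128-bit atomicExch():

struct alignas(16) Pair {
    long long first;
    long long second;
};

__global__ void swapPair(Pair* slot, Pair desired, Pair* previous)
{
    *previous = atomicExch(slot, desired);   // atomically returns the old 128-bit value
}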
10.14.1.4 atomicMin()
reads the 32-bit or 64-bit word old located at the address address in global or shared memory, com-
putes the minimum of old and val, and stores the result back to memory at the same address. These
three operations are performed in one atomic transaction. The function returns old.
The 64-bit version of atomicMin() is only supported by devices of compute capability 5.0 and higher.
10.14.1.5 atomicMax()
reads the 32-bit or 64-bit word old located at the address address in global or shared memory, com-
putes the maximum of old and val, and stores the result back to memory at the same address. These
three operations are performed in one atomic transaction. The function returns old.
The 64-bit version of atomicMax() is only supported by devices of compute capability 5.0 and higher.
10.14.1.6 atomicInc()
reads the 32-bit word old located at the address address in global or shared memory, computes
((old >= val) ? 0 : (old+1)), and stores the result back to memory at the same address.
These three operations are performed in one atomic transaction. The function returns old.
10.14.1.7 atomicDec()
reads the 32-bit word old located at the address address in global or shared memory, computes
(((old == 0) || (old > val)) ? val : (old-1) ), and stores the result back to memory
at the same address. These three operations are performed in one atomic transaction. The function
returns old.
10.14.1.8 atomicCAS()
reads the 16-bit, 32-bit or 64-bit word old located at the address address in global or shared memory,
computes (old == compare ? val : old), and stores the result back to memory at the same
address. These three operations are performed in one atomic transaction. The function returns old
(Compare And Swap).
template<typename T> T atomicCAS(T* address, T compare, T val);
reads the 128-bit word old located at the address address in global or shared memory, computes
(old == compare ? val : old), and stores the result back to memory at the same address.
These three operations are performed in one atomic transaction. The function returns old (Compare
And Swap). The type T must meet the following requirements:
sizeof(T) == 16
alignof(T) >= 16
std::is_trivially_copyable<T>::value == true
// for C++03 and older
std::is_default_constructible<T>::value == true
So, T must be 128-bit and properly aligned, be trivially copyable, and on C++03 or older, it must also be
default constructible.
The 128-bit atomicCAS() is only supported by devices of compute capability 9.x and higher.
reads the 32-bit or 64-bit word old located at the address address in global or shared memory, com-
putes (old & val), and stores the result back to memory at the same address. These three operations
are performed in one atomic transaction. The function returns old.
The 64-bit version of atomicAnd() is only supported by devices of compute capability 5.0 and higher.
10.14.2.2 atomicOr()
reads the 32-bit or 64-bit word old located at the address address in global or shared memory, com-
putes (old | val), and stores the result back to memory at the same address. These three opera-
tions are performed in one atomic transaction. The function returns old.
The 64-bit version of atomicOr() is only supported by devices of compute capability 5.0 and higher.
10.14.2.3 atomicXor()
reads the 32-bit or 64-bit word old located at the address address in global or shared memory, com-
putes (old ^ val), and stores the result back to memory at the same address. These three opera-
tions are performed in one atomic transaction. The function returns old.
The 64-bit version of atomicXor() is only supported by devices of compute capability 5.0 and higher.
10.15.1. __isGlobal()
__device__ unsigned int __isGlobal(const void *ptr);
Returns 1 if ptr contains the generic address of an object in global memory space, otherwise returns
0.
10.15.2. __isShared()
__device__ unsigned int __isShared(const void *ptr);
Returns 1 if ptr contains the generic address of an object in shared memory space, otherwise returns
0.
10.15.3. __isConstant()
__device__ unsigned int __isConstant(const void *ptr);
Returns 1 if ptr contains the generic address of an object in constant memory space, otherwise re-
turns 0.
10.15.4. __isGridConstant()
__device__ unsigned int __isGridConstant(const void *ptr);
Returns 1 if ptr contains the generic address of a kernel parameter annotated with
__grid_constant__, otherwise returns 0. Only supported on compute architectures 7.0 and higher.
10.15.5. __isLocal()
__device__ unsigned int __isLocal(const void *ptr);
Returns 1 if ptr contains the generic address of an object in local memory space, otherwise returns
0.
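For example (a minimal sketch, not from the reference text), the predicates can be combined to classify
the memory space behind a generic pointer:

__device__ int memorySpaceOf(const void* ptr)
{
    if (__isGlobal(ptr))   return 0;   // global memory
    if (__isShared(ptr))   return 1;   // shared memory
    if (__isConstant(ptr)) return 2;   // constant memory
    if (__isLocal(ptr))    return 3;   // local memory
    return -1;                         // none of the above
}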
10.16.1. __cvta_generic_to_global()
__device__ size_t __cvta_generic_to_global(const void *ptr);
Returns the result of executing the PTX cvta.to.global instruction on the generic address denoted
by ptr.
10.16.2. __cvta_generic_to_shared()
__device__ size_t __cvta_generic_to_shared(const void *ptr);
Returns the result of executing the PTX cvta.to.shared instruction on the generic address denoted
by ptr.
10.16.3. __cvta_generic_to_constant()
__device__ size_t __cvta_generic_to_constant(const void *ptr);
Returns the result of executing the PTX cvta.to.const instruction on the generic address denoted
by ptr.
10.16.4. __cvta_generic_to_local()
__device__ size_t __cvta_generic_to_local(const void *ptr);
Returns the result of executing the PTX cvta.to.local instruction on the generic address denoted
by ptr.
10.16.5. __cvta_global_to_generic()
__device__ void * __cvta_global_to_generic(size_t rawbits);
Returns the generic pointer obtained by executing the PTX cvta.global instruction on the value pro-
vided by rawbits.
10.16.6. __cvta_shared_to_generic()
__device__ void * __cvta_shared_to_generic(size_t rawbits);
Returns the generic pointer obtained by executing the PTX cvta.shared instruction on the value pro-
vided by rawbits.
10.16.7. __cvta_constant_to_generic()
__device__ void * __cvta_constant_to_generic(size_t rawbits);
Returns the generic pointer obtained by executing the PTX cvta.const instruction on the value pro-
vided by rawbits.
10.16.8. __cvta_local_to_generic()
__device__ void * __cvta_local_to_generic(size_t rawbits);
Returns the generic pointer obtained by executing the PTX cvta.local instruction on the value pro-
vided by rawbits.
10.17.1. Synopsis
__host__ __device__ void * alloca(size_t size);
10.17.2. Description
The alloca() function allocates size bytes of memory in the stack frame of the caller. The returned
value is a pointer to the allocated memory; the beginning of the memory is 16-byte aligned when the
function is invoked from device code. The allocated memory is automatically freed when the function
that called alloca() returns.
Note: On Windows, <malloc.h> must be included before using alloca(). Using alloca()
may cause the stack to overflow, so the user needs to adjust the stack size accordingly.
10.17.3. Example
__device__ void foo(unsigned int num) {
int4 *ptr = (int4 *)alloca(num * sizeof(int4));
// use of ptr
...
}
10.18.1. __builtin_assume_aligned()
void * __builtin_assume_aligned (const void *exp, size_t align)
Allows the compiler to assume that the argument pointer is aligned to at least align bytes, and re-
turns the argument pointer.
Example:
void *res = __builtin_assume_aligned(ptr, 32); // compiler can assume 'res' is
                                               // at least 32-byte aligned
Allows the compiler to assume that (char *)exp - offset is aligned to at least align bytes, and
returns the argument pointer.
Example:
void *res = __builtin_assume_aligned(ptr, 32, 8); // compiler can assume
                                                  // '(char *)res - 8' is
                                                  // at least 32-byte aligned.
10.18.2. __builtin_assume()
void __builtin_assume(bool exp)
Allows the compiler to assume that the Boolean argument is true. If the argument is not true at
run time, then the behavior is undefined. Note that if the argument has side effects, the behavior is
unspecified.
Example:
__device__ int get(int *ptr, int idx) {
__builtin_assume(idx <= 2);
return ptr[idx];
}
10.18.3. __assume()
void __assume(bool exp)
Allows the compiler to assume that the Boolean argument is true. If the argument is not true at
run time, then the behavior is undefined. Note that if the argument has side effects, the behavior is
unspecified.
Example:
__device__ int get(int *ptr, int idx) {
__assume(idx <= 2);
return ptr[idx];
}
10.18.4. __builtin_expect()
long __builtin_expect (long exp, long c)
Indicates to the compiler that it is expected that exp == c, and returns the value of exp. Typically
used to indicate branch prediction information to the compiler.
Example:
// indicate to the compiler that likely "var == 0",
// so the body of the if-block is unlikely to be
// executed at run time.
if (__builtin_expect (var, 0))
doit ();
10.18.5. __builtin_unreachable()
void __builtin_unreachable(void)
Indicates to the compiler that control flow never reaches the point where this function is being called
from. The program has undefined behavior if the control flow does actually reach this point at run
time.
Example:
// indicates to the compiler that the default case label is never reached.
switch (in) {
case 1: return 4;
case 2: return 10;
default: __builtin_unreachable();
}
10.18.6. Restrictions
__assume() is only supported when using the cl.exe host compiler. The other functions are supported
on all platforms, subject to the following restrictions:
▶ If the host compiler supports the function, the function can be invoked from anywhere in the
translation unit.
▶ Otherwise, the function must be invoked from within the body of a __device__/
__global__ function, or only when the __CUDA_ARCH__ macro is defined17.
Deprecation notice: __any, __all, and __ballot have been deprecated in CUDA 9.0 for all devices.
Removal notice: When targeting devices with compute capability 7.x or higher, __any, __all, and
__ballot are no longer available and their sync variants should be used instead.
The warp vote functions allow the threads of a given warp to perform a reduction-and-broadcast op-
eration. These functions take as input an integer predicate from each thread in the warp and com-
pare those values with zero. The results of the comparisons are combined (reduced) across the active
threads of the warp in one of the following ways, broadcasting a single return value to each partici-
pating thread:
__all_sync(unsigned mask, predicate):
Evaluate predicate for all non-exited threads in mask and return non-zero if and only if pred-
icate evaluates to non-zero for all of them.
__any_sync(unsigned mask, predicate):
Evaluate predicate for all non-exited threads in mask and return non-zero if and only if pred-
icate evaluates to non-zero for any of them.
__ballot_sync(unsigned mask, predicate):
Evaluate predicate for all non-exited threads in mask and return an integer whose Nth bit is
set if and only if predicate evaluates to non-zero for the Nth thread of the warp and the Nth
thread is active.
__activemask():
Returns a 32-bit integer mask of all currently active threads in the calling warp. The Nth bit
is set if the Nth lane in the warp is active when __activemask() is called. Inactive threads
are represented by 0 bits in the returned mask. Threads which have exited the program are
always marked as inactive. Note that threads that are convergent at an __activemask() call
are not guaranteed to be convergent at subsequent instructions unless those instructions are
synchronizing warp-builtin functions.
For __all_sync, __any_sync, and __ballot_sync, a mask must be passed that specifies the
threads participating in the call. A bit, representing the thread’s lane ID, must be set for each partici-
pating thread to ensure they are properly converged before the intrinsic is executed by the hardware.
Each calling thread must have its own bit set in the mask and all non-exited threads named in mask
must execute the same intrinsic with the same mask, or the result is undefined.
These intrinsics do not imply a memory barrier. They do not guarantee any memory ordering.
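As an informal illustration (kernel and buffers are hypothetical, assuming a 1D thread block), the sketch below evaluates a per-thread predicate and combines it across the warp with __any_sync and __ballot_sync using a full-warp mask:
__global__ void vote_example(const int *flags, int *warp_any, unsigned *warp_ballot) {
    unsigned mask = 0xffffffff;               // all 32 lanes participate
    int lane = threadIdx.x % 32;
    int pred = flags[threadIdx.x] > 0;        // per-thread predicate

    int any      = __any_sync(mask, pred);    // non-zero if any lane's predicate is non-zero
    unsigned bal = __ballot_sync(mask, pred); // bit N set if lane N's predicate is non-zero

    if (lane == 0) {                          // one lane per warp records the broadcast result
        warp_any[threadIdx.x / 32]    = any;
        warp_ballot[threadIdx.x / 32] = bal;
    }
}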
10.20.1. Synopsis
unsigned int __match_any_sync(unsigned mask, T value);
unsigned int __match_all_sync(unsigned mask, T value, int *pred);
T can be int, unsigned int, long, unsigned long, long long, unsigned long long, float or
double.
10.20.2. Description
The __match_sync() intrinsics permit a broadcast-and-compare of a value value across threads in
a warp after synchronizing threads named in mask.
__match_any_sync
Returns mask of threads that have same value of value in mask
__match_all_sync
Returns mask if all threads in mask have the same value for value; otherwise 0 is returned. Pred-
icate pred is set to true if all threads in mask have the same value of value; otherwise the
predicate is set to false.
The new *_sync match intrinsics take in a mask indicating the threads participating in the call. A bit,
representing the thread’s lane id, must be set for each participating thread to ensure they are properly
converged before the intrinsic is executed by the hardware. Each calling thread must have its own bit
set in the mask and all non-exited threads named in mask must execute the same intrinsic with the
same mask, or the result is undefined.
These intrinsics do not imply a memory barrier. They do not guarantee any memory ordering.
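A hedged sketch of how these intrinsics might be used (kernel and buffers are hypothetical; the match intrinsics are available on devices of compute capability 7.x and higher):
__global__ void match_example(const int *keys, unsigned *groups) {
    unsigned mask = 0xffffffff;                        // whole warp participates
    int key = keys[threadIdx.x];

    // Lanes holding the same key receive the same group mask.
    unsigned peers = __match_any_sync(mask, key);

    int all_same;
    unsigned all_mask = __match_all_sync(mask, key, &all_same);
    // all_mask == mask and all_same != 0 only if every lane holds the same key.

    groups[threadIdx.x] = peers;
    (void)all_mask; (void)all_same;
}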
10.21.1. Synopsis
// add/min/max
unsigned __reduce_add_sync(unsigned mask, unsigned value);
unsigned __reduce_min_sync(unsigned mask, unsigned value);
unsigned __reduce_max_sync(unsigned mask, unsigned value);
int __reduce_add_sync(unsigned mask, int value);
int __reduce_min_sync(unsigned mask, int value);
int __reduce_max_sync(unsigned mask, int value);
// and/or/xor
unsigned __reduce_and_sync(unsigned mask, unsigned value);
unsigned __reduce_or_sync(unsigned mask, unsigned value);
unsigned __reduce_xor_sync(unsigned mask, unsigned value);
10.21.2. Description
__reduce_add_sync, __reduce_min_sync, __reduce_max_sync
Returns the result of applying an arithmetic add, min, or max reduction operation on the values
provided in value by each thread named in mask.
__reduce_and_sync, __reduce_or_sync, __reduce_xor_sync
Returns the result of applying a logical AND, OR, or XOR reduction operation on the values pro-
vided in value by each thread named in mask.
The mask indicates the threads participating in the call. A bit, representing the thread’s lane id, must be
set for each participating thread to ensure they are properly converged before the intrinsic is executed
by the hardware. Each calling thread must have its own bit set in the mask and all non-exited threads
named in mask must execute the same intrinsic with the same mask, or the result is undefined.
These intrinsics do not imply a memory barrier. They do not guarantee any memory ordering.
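A hedged sketch of a warp-wide sum and bitwise OR (kernel and buffers are hypothetical; these reduce intrinsics require devices that support them, compute capability 8.x and higher):
__global__ void reduce_example(const unsigned *in, unsigned *out) {
    unsigned mask = 0xffffffff;                      // whole warp participates
    unsigned v = in[threadIdx.x];

    unsigned warp_sum = __reduce_add_sync(mask, v);  // sum across the 32 lanes
    unsigned warp_or  = __reduce_or_sync(mask, v);   // bitwise OR across the 32 lanes

    if (threadIdx.x % 32 == 0) {                     // every lane receives the same result;
        out[2 * (threadIdx.x / 32)]     = warp_sum;  // lane 0 records it
        out[2 * (threadIdx.x / 32) + 1] = warp_or;
    }
}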
10.22.1. Synopsis
T __shfl_sync(unsigned mask, T var, int srcLane, int width=warpSize);
T __shfl_up_sync(unsigned mask, T var, unsigned int delta, int width=warpSize);
T __shfl_down_sync(unsigned mask, T var, unsigned int delta, int width=warpSize);
T __shfl_xor_sync(unsigned mask, T var, int laneMask, int width=warpSize);
T can be int, unsigned int, long, unsigned long, long long, unsigned long long, float or
double. With the cuda_fp16.h header included, T can also be __half or __half2. Similarly, with
the cuda_bf16.h header included, T can also be __nv_bfloat16 or __nv_bfloat162.
10.22.2. Description
The __shfl_sync() intrinsics permit exchanging of a variable between threads within a warp without
use of shared memory. The exchange occurs simultaneously for all active threads within the warp (and
named in mask), moving 4 or 8 bytes of data per thread depending on the type.
Threads within a warp are referred to as lanes, and may have an index between 0 and warpSize-1
(inclusive). Four source-lane addressing modes are supported:
__shfl_sync()
Direct copy from indexed lane
__shfl_up_sync()
Copy from a lane with lower ID relative to caller
__shfl_down_sync()
Copy from a lane with higher ID relative to caller
__shfl_xor_sync()
Copy from a lane based on bitwise XOR of own lane ID
Threads may only read data from another thread which is actively participating in the __shfl_sync()
command. If the target thread is inactive, the retrieved value is undefined.
All of the __shfl_sync() intrinsics take an optional width parameter which alters the behavior of
the intrinsic. width must have a value which is a power of two in the range [1, warpSize] (i.e., 1, 2, 4, 8,
16 or 32). Results are undefined for other values.
__shfl_sync() returns the value of var held by the thread whose ID is given by srcLane. If width
is less than warpSize then each subsection of the warp behaves as a separate entity with a starting
logical lane ID of 0. If srcLane is outside the range [0:width-1], the value returned corresponds to
the value of var held by the srcLane modulo width (i.e. within the same subsection).
__shfl_up_sync() calculates a source lane ID by subtracting delta from the caller’s lane ID. The
value of var held by the resulting lane ID is returned: in effect, var is shifted up the warp by delta
lanes. If width is less than warpSize then each subsection of the warp behaves as a separate entity
with a starting logical lane ID of 0. The source lane index will not wrap around the value of width, so
effectively the lower delta lanes will be unchanged.
__shfl_down_sync() calculates a source lane ID by adding delta to the caller’s lane ID. The value
of var held by the resulting lane ID is returned: this has the effect of shifting var down the warp by
delta lanes. If width is less than warpSize then each subsection of the warp behaves as a separate
entity with a starting logical lane ID of 0. As for __shfl_up_sync(), the ID number of the source lane
will not wrap around the value of width and so the upper delta lanes will remain unchanged.
__shfl_xor_sync() calculates a source lane ID by performing a bitwise XOR of the caller’s lane ID with
laneMask: the value of var held by the resulting lane ID is returned. If width is less than warpSize
then each group of width consecutive threads are able to access elements from earlier groups of
threads, however if they attempt to access elements from later groups of threads their own value of
var will be returned. This mode implements a butterfly addressing pattern such as is used in tree
reduction and broadcast.
The new *_sync shfl intrinsics take in a mask indicating the threads participating in the call. A bit,
representing the thread’s lane id, must be set for each participating thread to ensure they are properly
converged before the intrinsic is executed by the hardware. Each calling thread must have its own bit
set in the mask and all non-exited threads named in mask must execute the same intrinsic with the
same mask, or the result is undefined.
Threads may only read data from another thread which is actively participating in the __shfl_sync()
command. If the target thread is inactive, the retrieved value is undefined.
These intrinsics do not imply a memory barrier. They do not guarantee any memory ordering.
10.22.3. Examples
10.22.3.1 Broadcast of a single value across a warp
#include <stdio.h>

__global__ void bcast(int arg) {
    int laneId = threadIdx.x & 0x1f;
    int value;
    if (laneId == 0)   // Note unused variable for
        value = arg;   // all threads except lane 0
    value = __shfl_sync(0xffffffff, value, 0); // Synchronize all threads in warp, and get "value" from lane 0
    if (value != arg)
        printf("Thread %d failed.\n", threadIdx.x);
}
int main() {
bcast<<< 1, 32 >>>(1234);
cudaDeviceSynchronize();
return 0;
}
10.22.3.2 Inclusive plus-scan across sub-partitions of 8 threads

#include <stdio.h>

__global__ void scan4() {
    int laneId = threadIdx.x & 0x1f;
    // Seed sample starting value (inverse of lane ID)
    int value = 31 - laneId;

    // Accumulate the scan within an 8-lane sub-partition:
    // read unconditionally, then add conditionally so that
    // lanes below the shift distance keep their value.
    for (int i = 1; i <= 4; i *= 2) {
        int n = __shfl_up_sync(0xffffffff, value, i, 8);
        if ((laneId & 7) >= i)
            value += n;
    }

    printf("Thread %d final value = %d\n", threadIdx.x, value);
}

int main() {
    scan4<<< 1, 32 >>>();
    cudaDeviceSynchronize();
    return 0;
}
10.22.3.3 Reduction across a warp

#include <stdio.h>

__global__ void warpReduce() {
    int laneId = threadIdx.x & 0x1f;
    // Seed starting value as inverse lane ID
    int value = 31 - laneId;

    // Use XOR mode to perform butterfly reduction
    for (int i = 16; i >= 1; i /= 2)
        value += __shfl_xor_sync(0xffffffff, value, i, 32);

    // "value" now contains the sum across all threads
    printf("Thread %d final value = %d\n", threadIdx.x, value);
}

int main() {
    warpReduce<<< 1, 32 >>>();
    cudaDeviceSynchronize();
    return 0;
}
10.23.1. Synopsis
T __nanosleep(unsigned ns);
10.23.2. Description
__nanosleep(ns) suspends the thread for a sleep duration of approximately ns nanoseconds. The
maximum sleep duration is approximately 1 millisecond.
It is supported with compute capability 7.0 or higher.
10.23.3. Example
The following code implements a mutex with exponential back-off.
__device__ void mutex_lock(unsigned int *mutex) {
unsigned int ns = 8;
while (atomicCAS(mutex, 0, 1) == 1) {
__nanosleep(ns);
if (ns < 256) {
ns *= 2;
}
}
}
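The example shows only the lock side. A minimal sketch of a matching unlock (an assumption, not part of the example above) could release the mutex with an atomic exchange:
__device__ void mutex_unlock(unsigned int *mutex) {
    __threadfence();       // make writes to protected data visible before releasing
    atomicExch(mutex, 0);  // release the lock acquired by mutex_lock()
}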
10.24.1. Description
All following functions and types are defined in the namespace nvcuda::wmma. Sub-byte oper-
ations are considered preview, i.e. the data structures and APIs for them are subject to change
and may not be compatible with future releases. This extra functionality is defined in the
nvcuda::wmma::experimental namespace.
template<typename Use, int m, int n, int k, typename T, typename Layout=void> class fragment;
fragment
An overloaded class containing a section of a matrix distributed across all threads in the warp.
The mapping of matrix elements into fragment internal storage is unspecified and subject to
change in future architectures.
Only certain combinations of template arguments are allowed. The first template parameter specifies
how the fragment will participate in the matrix operation. Acceptable values for Use are:
▶ matrix_a when the fragment is used as the first multiplicand, A,
▶ matrix_b when the fragment is used as the second multiplicand, B, or
▶ accumulator when the fragment is used as the source or destination accumulators (C or D,
respectively).
The m, n and k sizes describe the shape of the warp-wide matrix tiles participating in the multiply-
accumulate operation. The dimension of each tile depends on its role. For matrix_a the tile takes
dimension m x k; for matrix_b the dimension is k x n, and accumulator tiles are m x n.
The data type, T, may be double, float, __half, __nv_bfloat16, char, or unsigned char
for multiplicands and double, float, int, or __half for accumulators. As documented in El-
ement Types and Matrix Sizes, limited combinations of accumulator and multiplicand types are
supported. The Layout parameter must be specified for matrix_a and matrix_b fragments.
row_major or col_major indicate that elements within a matrix row or column are contiguous
in memory, respectively. The Layout parameter for an accumulator matrix should retain the
default value of void. A row or column layout is specified only when the accumulator is loaded
or stored as described below.
load_matrix_sync
Waits until all warp lanes have arrived at load_matrix_sync and then loads the matrix fragment
a from memory. mptr must be a 256-bit aligned pointer pointing to the first element of the
matrix in memory. ldm describes the stride in elements between consecutive rows (for row major
layout) or columns (for column major layout) and must be a multiple of 8 for __half element
type or multiple of 4 for float element type. (i.e., multiple of 16 bytes in both cases). If the
fragment is an accumulator, the layout argument must be specified as either mem_row_major
or mem_col_major. For matrix_a and matrix_b fragments, the layout is inferred from the
fragment’s layout parameter. The values of mptr, ldm, layout and all template parameters for
a must be the same for all threads in the warp. This function must be called by all threads in the
warp, or the result is undefined.
store_matrix_sync
Waits until all warp lanes have arrived at store_matrix_sync and then stores the matrix fragment
a to memory. mptr must be a 256-bit aligned pointer pointing to the first element of the matrix in
memory. ldm describes the stride in elements between consecutive rows (for row major layout)
or columns (for column major layout) and must be a multiple of 8 for __half element type or
multiple of 4 for float element type. (i.e., multiple of 16 bytes in both cases). The layout of the
output matrix must be specified as either mem_row_major or mem_col_major. The values of
mptr, ldm, layout and all template parameters for a must be the same for all threads in the
warp.
fill_fragment
Fill a matrix fragment with a constant value v. Because the mapping of matrix elements to each
fragment is unspecified, this function is ordinarily called by all threads in the warp with a common
value for v.
mma_sync
Waits until all warp lanes have arrived at mma_sync, and then performs the warp-synchronous
matrix multiply-accumulate operation D=A*B+C. The in-place operation, C=A*B+C, is also sup-
ported. The value of satf and template parameters for each matrix fragment must be the same
for all threads in the warp. Also, the template parameters m, n and k must match between frag-
ments A, B, C and D. This function must be called by all threads in the warp, or the result is unde-
fined.
If satf (saturate to finite value) mode is true, the following additional numerical properties apply for
the destination accumulator:
▶ If an element result is +Infinity, the corresponding accumulator will contain +MAX_NORM
▶ If an element result is -Infinity, the corresponding accumulator will contain -MAX_NORM
▶ If an element result is NaN, the corresponding accumulator will contain +0
Because the map of matrix elements into each thread’s fragment is unspecified, individual matrix
elements must be accessed from memory (shared or global) after calling store_matrix_sync. In
the special case where all threads in the warp will apply an element-wise operation uniformly to all
fragment elements, direct element access can be implemented using the following fragment class
members.
enum fragment<Use, m, n, k, T, Layout>::num_elements;
T fragment<Use, m, n, k, T, Layout>::x[num_elements];
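For example, a uniform element-wise scale of an accumulator fragment could look like the following sketch (the helper name and the 16x16x16 float accumulator shape are assumptions):
#include <mma.h>
using namespace nvcuda;

__device__ void scale_accumulator(
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> &c_frag, float alpha) {
    // Every thread applies the same uniform operation, so the
    // unspecified element-to-thread mapping does not matter here.
    for (int t = 0; t < c_frag.num_elements; t++)
        c_frag.x[t] *= alpha;
}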
For 4 bit precision, the APIs available remain the same, but you must specify experimen-
tal::precision::u4 or experimental::precision::s4 as the fragment data type. Since
the elements of the fragment are packed together, num_storage_elements will be smaller than
num_elements for that fragment. The num_elements variable for a sub-byte fragment therefore returns the number of elements of sub-byte type element_type<T>. This is true for single-bit preci-
sion as well, in which case, the mapping from element_type<T> to storage_element_type<T> is
as follows:
experimental::precision::u4 -> unsigned (8 elements in 1 storage element)
experimental::precision::s4 -> int (8 elements in 1 storage element)
experimental::precision::b1 -> unsigned (32 elements in 1 storage element)
T -> T  // all other types
The allowed layout for sub-byte fragments is always row_major for matrix_a and col_major for matrix_b.
For sub-byte operations the value of ldm in load_matrix_sync should be a multiple of 32 for element
type experimental::precision::u4 and experimental::precision::s4 or a multiple of 128
for element type experimental::precision::b1 (i.e., multiple of 16 bytes in both cases).
Note: Support for the following variants for MMA instructions is deprecated and will be removed in
sm_90:
▶ experimental::precision::u4
▶ experimental::precision::s4
▶ experimental::precision::b1 with bmmaBitOp set to bmmaBitOpXOR
bmma_sync
Waits until all warp lanes have executed bmma_sync, and then performs the warp-synchronous
bit matrix multiply-accumulate operation D = (A op B) + C, where op consists of a logical op-
eration bmmaBitOp followed by the accumulation defined by bmmaAccumulateOp. The available
operations are:
bmmaBitOpXOR, a 128-bit XOR of a row in matrix_a with the 128-bit column of matrix_b
bmmaBitOpAND, a 128-bit AND of a row in matrix_a with the 128-bit column of matrix_b, avail-
able on devices with compute capability 8.0 and higher.
The accumulate op is always bmmaAccumulateOpPOPC which counts the number of set bits.
10.24.5. Restrictions
The special format required by tensor cores may be different for each major and minor device archi-
tecture. This is further complicated by threads holding only a fragment (opaque architecture-specific
ABI data structure) of the overall matrix, with the developer not allowed to make assumptions on how
the individual parameters are mapped to the registers participating in the matrix multiply-accumulate.
Since fragments are architecture-specific, it is unsafe to pass them from function A to function B if
the functions have been compiled for different link-compatible architectures and linked together into
the same device executable. In this case, the size and layout of the fragment will be specific to one
architecture and using WMMA APIs in the other will lead to incorrect results or potentially, corruption.
An example of two link-compatible architectures, where the layout of the fragment differs, is sm_70
and sm_75.
fragA.cu: void foo() { wmma::fragment<...> mat_a; bar(&mat_a); }
fragB.cu: void bar(wmma::fragment<...> *mat_a) { /* operate on mat_a */ }
This undefined behavior might also be undetectable at compilation time and by tools at runtime, so
extra care is needed to make sure the layout of the fragments is consistent. This linking hazard is
most likely to appear when linking with a legacy library that is both built for a different link-compatible
architecture and expecting to be passed a WMMA fragment.
Note that in the case of weak linkages (for example, a CUDA C++ inline function), the linker may choose
any available function definition which may result in implicit passes between compilation units.
To avoid these sorts of problems, the matrix should always be stored out to memory for transit through
external interfaces (e.g. wmma::store_matrix_sync(dst, …);) and then it can be safely passed to
bar() as a pointer type [e.g. float *dst].
Note that since sm_70 can run on sm_75, the above example sm_75 code can be changed to sm_70 and
correctly work on sm_75. However, it is recommended to have sm_75 native code in your application
when linking with other sm_75 separately compiled binaries.
10.24.7. Example
The following code implements a 16x16x16 matrix multiplication in a single warp.
#include <mma.h>
using namespace nvcuda;
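A sketch of such a single-warp 16x16x16 multiply-accumulate kernel, continuing from the includes above and assuming col_major A, row_major B, __half inputs, float accumulation, and leading dimensions of 16:
__global__ void wmma_ker(half *a, half *b, float *c) {
    // Declare the fragments
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::col_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    // Initialize the output to zero
    wmma::fill_fragment(c_frag, 0.0f);

    // Load the inputs
    wmma::load_matrix_sync(a_frag, a, 16);
    wmma::load_matrix_sync(b_frag, b, 16);

    // Perform the matrix multiplication
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);

    // Store the output
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}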
10.25. DPX
DPX is a set of functions that enable finding min and max values, as well as fused addition and min/max, for up to three 16- or 32-bit signed or unsigned integer parameters, with optional ReLU (clamping to zero):
▶ three parameters: __vimax3_s32, __vimax3_s16x2, __vimax3_u32, __vimax3_u16x2, __vimin3_s32, __vimin3_s16x2, __vimin3_u32, __vimin3_u16x2
▶ two parameters, with ReLU: __vimax_s32_relu, __vimax_s16x2_relu, __vimin_s32_relu, __vimin_s16x2_relu
▶ three parameters, with ReLU: __vimax3_s32_relu, __vimax3_s16x2_relu, __vimin3_s32_relu, __vimin3_s16x2_relu
▶ two parameters, also returning which parameter was smaller/larger: __vibmax_s32, __vibmax_u32, __vibmin_s32, __vibmin_u32, __vibmax_s16x2, __vibmax_u16x2, __vibmin_s16x2, __vibmin_u16x2
▶ three parameters, comparing (first + second) with the third: __viaddmax_s32, __viaddmax_s16x2, __viaddmax_u32, __viaddmax_u16x2, __viaddmin_s32, __viaddmin_s16x2, __viaddmin_u32, __viaddmin_u16x2
▶ three parameters, with ReLU, comparing (first + second) with the third and a zero: __viaddmax_s32_relu, __viaddmax_s16x2_relu, __viaddmin_s32_relu, __viaddmin_s16x2_relu
These functions are hardware-accelerated on devices of compute capability 9.0 and higher, and software-emulated on devices of lower compute capability.
The full API can be found in the CUDA Math API documentation.
DPX is exceptionally useful when implementing dynamic programming algorithms, such as Smith-
Waterman or Needleman–Wunsch in genomics and Floyd-Warshall in route optimization.
10.25.1. Examples
Max value of three signed 32-bit integers, with ReLU
const int a = -15;
const int b = 8;
const int c = 5;
int max_value_0 = __vimax3_s32_relu(a, b, c); // max(-15, 8, 5, 0) = 8

const int d = -2;
const int e = -4;
int max_value_1 = __vimax3_s32_relu(a, d, e); // max(-15, -2, -4, 0) = 0
Max value of the sum of two 32-bit signed integers, another 32-bit signed integer and a zero (ReLU)
const int a = -5;
const int b = 6;
const int c = -2;
int max_value_0 = __viaddmax_s32_relu(a, b, c); // max(-5 + 6, -2, 0) = max(1, -2, 0) = 1

const int d = 4;
int max_value_1 = __viaddmax_s32_relu(a, d, c); // max(-5 + 4, -2, 0) = max(-1, -2, 0) = 0
Min value of two unsigned 32-bit integers and determining which value is smaller
const unsigned int a = 9;
const unsigned int b = 6;
bool smaller_value;
unsigned int min_value = __vibmin_u32(a, b, &smaller_value); // min_value is 6, smaller_value is true
Threads are blocked at the synchronization point (block.sync()) until all threads have reached the
synchronization point. In addition, memory updates that happened before the synchronization point
are guaranteed to be visible to all threads in the block after the synchronization point, i.e., equivalent
to atomic_thread_fence(memory_order_seq_cst, thread_scope_block) as well as the sync.
This pattern has three stages:
▶ Code before sync performs memory updates that will be read after the sync.
▶ Synchronization point
▶ Code after sync point with visibility of memory updates that happened before sync point.
if (block.thread_rank() == 0) {
    init(&bar, block.size()); // Initialize the barrier with expected arrival count
}
block.sync();

compute(data, curr_iter);
bar.wait(std::move(token)); /* wait for all threads participating in the barrier to complete bar.arrive() */
In this pattern, the synchronization point (block.sync()) is split into an arrive point (bar.
arrive()) and a wait point (bar.wait(std::move(token))). A thread begins participat-
ing in a cuda::barrier with its first call to bar.arrive(). When a thread calls bar.
wait(std::move(token)) it will be blocked until participating threads have completed bar.
arrive() the expected number of times as specified by the expected arrival count argument passed
to init(). Memory updates that happen before participating threads’ call to bar.arrive() are guar-
anteed to be visible to participating threads after their call to bar.wait(std::move(token)). Note
that the call to bar.arrive() does not block a thread, it can proceed with other work that does not
depend upon memory updates that happen before other participating threads’ call to bar.arrive().
The arrive and then wait pattern has five stages which may be iteratively repeated:
▶ Code before arrive performs memory updates that will be read after the wait.
▶ Arrive point with implicit memory fence (i.e., equivalent to atomic_thread_fence(memory_order_seq_cst,
thread_scope_block)).
▶ Code between arrive and wait.
▶ Wait point.
▶ Code after the wait, with visibility of updates that were performed before the arrive.
if (block.thread_rank() == 0) {
    init(&bar, block.size()); // Single thread initializes the total expected arrival count.
}
block.sync();
}
Before any thread can participate in cuda::barrier, the barrier must be initialized using init()
with an expected arrival count, block.size() in this example. Initialization must happen before any
thread calls bar.arrive(). This poses a bootstrapping challenge in that threads must synchronize
before participating in the cuda::barrier, but threads are creating a cuda::barrier in order to
synchronize. In this example, threads that will participate are part of a cooperative group and use
block.sync() to bootstrap initialization. In this example a whole thread block is participating in
initialization, hence __syncthreads() could also be used.
The second parameter of init() is the expected arrival count, i.e., the number of times bar.
arrive() will be called by participating threads before a participating thread is unblocked from its call
to bar.wait(std::move(token)). In the prior example the cuda::barrier is initialized with the
number of threads in the thread block i.e., cooperative_groups::this_thread_block().size(),
and all threads within the thread block participate in the barrier.
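Putting these pieces together, a minimal sketch of the split arrive/wait pattern (kernel and data names are illustrative, not from the guide):
#include <cuda/barrier>
#include <cooperative_groups.h>

__global__ void split_arrive_wait(float *data, int iters) {
    using barrier = cuda::barrier<cuda::thread_scope_block>;
    #pragma nv_diag_suppress static_var_with_dynamic_init
    __shared__ barrier bar;
    auto block = cooperative_groups::this_thread_block();

    if (block.thread_rank() == 0) {
        init(&bar, block.size());   // expected arrival count == threads in block
    }
    block.sync();                   // bootstrap: make the initialized barrier visible

    for (int i = 0; i < iters; ++i) {
        data[block.thread_rank()] += 1.0f;            // produce (before arrive)
        barrier::arrival_token token = bar.arrive();  // arrive does not block
        /* independent work that does not read data[] can go here */
        bar.wait(std::move(token));                   // wait for all arrivals of this phase
        float neighbor = data[(block.thread_rank() + 1) % block.size()];  // consume
        (void)neighbor;
    }
}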
A cuda::barrier is flexible in specifying how threads participate (split arrive/wait) and which threads participate. In contrast, this_thread_block.sync() from cooperative groups or __syncthreads() applies to a whole thread block, and __syncwarp(mask) applies to a specified subset of a warp. If the intention is to synchronize a full thread block or a full warp, we recommend using __syncthreads() or __syncwarp(mask), respectively, for performance reasons.
▶ A thread’s call to bar.arrive() must occur when the barrier’s counter is non-zero. After bar-
rier initialization, if a thread’s call to bar.arrive() causes the countdown to reach zero then
a call to bar.wait(std::move(token)) must happen before the barrier can be reused for a
subsequent call to bar.arrive().
▶ bar.wait() must only be called using a token object of the current phase or the immediately
preceding phase. For any other values of the token object, the behavior is undefined.
For simple arrive/wait synchronization patterns, compliance with these usage rules is straightforward.
Producer                                     Consumer
wait for buffer to be ready to be filled     signal buffer is ready to be filled
produce data and fill the buffer
signal buffer is filled                      wait for buffer to be filled
                                             consume data in filled buffer
Producer threads wait for consumer threads to signal that the buffer is ready to be filled; how-
ever, consumer threads do not wait for this signal. Consumer threads wait for producer threads to
signal that the buffer is filled; however, producer threads do not wait for this signal. For full pro-
ducer/consumer concurrency this pattern has (at least) double buffering where each buffer requires
two cuda::barriers.
#include <cuda/barrier>
#include <cooperative_groups.h>

__device__ void producer(barrier ready[], barrier filled[], float* buffer, float* in, int N, int buffer_len)
{
    for (int i = 0; i < (N/buffer_len); ++i) {
        ready[i%2].arrive_and_wait(); /* wait for buffer_(i%2) to be ready to be filled */
    }
}

__device__ void consumer(barrier ready[], barrier filled[], float* buffer, float* out, int N, int buffer_len)
}
}

// bar[0] and bar[1] track if buffers buffer_0 and buffer_1 are ready to be filled,
// while bar[2] and bar[3] track if buffers buffer_0 and buffer_1 are filled-in respectively
In this example the first warp is specialized as the producer and the remaining warps are special-
ized as the consumer. All producer and consumer threads participate (call bar.arrive() or bar.
arrive_and_wait()) in each of the four cuda::barriers so the expected arrival counts are equal
to block.size().
A producer thread waits for the consumer threads to signal that the shared memory buffer can be filled. In order to wait on a cuda::barrier, a producer thread must first call ready[i%2].arrive() to get a token and then ready[i%2].wait(token) with that token. For simplicity, ready[i%2].arrive_and_wait() combines these operations.
bar.arrive_and_wait();
/* is equivalent to */
bar.wait(bar.arrive());
Producer threads compute and fill the ready buffer, they then signal that the buffer is filled by arriving
on the filled barrier, filled[i%2].arrive(). A producer thread does not wait at this point, instead
it waits until the next iteration’s buffer (double buffering) is ready to be filled.
A consumer thread begins by signaling that both buffers are ready to be filled. A consumer thread
does not wait at this point, instead it waits for this iteration’s buffer to be filled, filled[i%2].
arrive_and_wait(). After the consumer threads consume the buffer they signal that the buffer
is ready to be filled again, ready[i%2].arrive(), and then wait for the next iteration’s buffer to be
filled.
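A simplified, single-buffer variant of this pattern is sketched below (one producer warp, one consumer warp, two barriers; all names are hypothetical and the kernel assumes a launch of exactly 64 threads with n a multiple of 32). It is not the double-buffered version described above, but it shows the arrive/arrive_and_wait hand-off:
#include <cuda/barrier>
#include <cooperative_groups.h>

__global__ void produce_consume(const float* in, float* out, int n) {
    using barrier = cuda::barrier<cuda::thread_scope_block>;
    #pragma nv_diag_suppress static_var_with_dynamic_init
    __shared__ barrier ready, filled;   // ready: buffer may be (re)filled; filled: buffer holds data
    __shared__ float buffer[32];
    auto block = cooperative_groups::this_thread_block();

    if (block.thread_rank() == 0) {
        init(&ready,  block.size());    // every thread arrives on both barriers each iteration
        init(&filled, block.size());
    }
    block.sync();

    bool is_producer = block.thread_rank() < 32;   // warp 0 produces, warp 1 consumes
    int lane = block.thread_rank() % 32;

    for (int i = 0; i < n / 32; ++i) {
        if (is_producer) {
            ready.arrive_and_wait();               // wait until the buffer may be overwritten
            buffer[lane] = in[i * 32 + lane];      // produce
            filled.arrive();                       // signal: buffer is filled (no wait)
        } else {
            ready.arrive();                        // signal: buffer may be filled (no wait)
            filled.arrive_and_wait();              // wait until the buffer is filled
            out[i * 32 + lane] = buffer[lane];     // consume
        }
    }
}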
if (block.thread_rank() == 0)
    init(&bar, block.size());
block.sync();
This operation arrives on the cuda::barrier to fulfill the participating thread’s obligation to arrive
in the current phase, and then decrements the expected arrival count for the next phase so that this
thread is no longer expected to arrive on the barrier.
#include <cuda∕barrier>
#include <cooperative_groups.h>
#include <functional>
namespace cg = cooperative_groups;
// Barrier storage
// Note: the barrier is not default-constructible because
// completion_fn is not default-constructible due
// to the capture.
using completion_fn_t = decltype(completion_fn);
using barrier_t = cuda::barrier<cuda::thread_scope_block,
completion_fn_t>;
__shared__ std::aligned_storage<sizeof(barrier_t),
alignof(barrier_t)> bar_storage;
// Initialize barrier:
barrier_t* bar = (barrier_t*)&bar_storage;
if (block.thread_rank() == 0) {
    assert(*acc == 0);
    assert(blockDim.y == 1 && blockDim.z == 1);  // one-dimensional thread block assumed
    new (bar) barrier_t{block.size(), completion_fn};
    // equivalent to: init(bar, block.size(), completion_fn);
}
block.sync();
// Main loop
for (int i = 0; i < n; i += block.size()) {
    smem[block.thread_rank()] = data[i] + *acc;
    auto t = bar->arrive();
    // We can do independent computation here
    bar->wait(std::move(t));
    // shared-memory is safe to re-use in the next iteration
    // since all threads are done with it, including the one
    // that did the reduction
}
}
uint32_t __mbarrier_maximum_count();
void __mbarrier_init(__mbarrier_t* bar, uint32_t expected_count);
▶ token must be associated with the immediately preceding phase or current phase of *this.
▶ Returns true if token is associated with the immediately preceding phase of *bar, otherwise
returns false.
// Note: This API has been deprecated in CUDA 11.1
uint32_t __mbarrier_pending_count(__mbarrier_token_t token);
▶ Section Single-Stage Asynchronous Data Copies using cuda::pipeline shows memcpy with a single-stage pipeline
▶ Section Multi-Stage Asynchronous Data Copies using cuda::pipeline shows memcpy with a multi-stage pipeline
block.sync();
}
}}

__global__ void with_barrier(int* global_out, int const* global_in, size_t size, size_t batch_sz) {
block.sync();
}
}
10.27.6.1 Alignment
On devices with compute capability 8.0, the cp.async family of instructions allows copying data from
global to shared memory asynchronously. These instructions support copying 4, 8, and 16 bytes at
a time. If the size provided to memcpy_async is a multiple of 4, 8, or 16, and both pointers passed
to memcpy_async are aligned to a 4, 8, or 16 alignment boundary, then memcpy_async can be imple-
mented using exclusively asynchronous memory operations.
Additionally, to achieve the best performance when using the memcpy_async API, an alignment of 128 bytes for both shared memory and global memory is required.
For pointers to values of types with an alignment requirement of 1 or 2, it is often not possible to prove
that the pointers are always aligned to a higher alignment boundary. Determining whether the cp.
async instructions can or cannot be used must be delayed until run-time. Performing such a runtime
alignment check increases code-size and adds runtime overhead.
The cuda::aligned_size_t<size_t Align>(size_t size) Shape can be used to supply a proof that both pointers passed to memcpy_async are aligned to an Align alignment boundary and that size is a multiple of Align, by passing it as an argument where the memcpy_async APIs expect a Shape:
cuda::memcpy_async(group, dst, src, cuda::aligned_size_t<16>(N * block.size()), pipeline);
On devices with compute capability 8.0, the cp.async family of instructions allows copying data from
global to shared memory asynchronously. If the pointer types passed to memcpy_async do not point
to TriviallyCopyable types, the copy constructor of each output element needs to be invoked, and these
instructions cannot be used to accelerate memcpy_async.
The sequence of memcpy_async batches is shared across the warp. The commit operation is coalesced
such that the sequence is incremented once for all converged threads that invoke the commit opera-
tion. If the warp is fully converged, the sequence is incremented by one; if the warp is fully diverged,
the sequence is incremented by 32.
▶ Let PB be the warp-shared pipeline’s actual sequence of batches.
PB = {BP0, BP1, BP2, …, BPL}
▶ Let TB be a thread’s perceived sequence of batches, as if the sequence were only incremented
by this thread’s invocation of the commit operation.
TB = {BT0, BT1, BT2, …, BTL}
The pipeline::producer_commit() return value is from the thread’s perceived batch se-
quence.
▶ An index in a thread’s perceived sequence always aligns to an equal or larger index in the actual
warp-shared sequence. The sequences are equal only when all commit operations are invoked
from converged threads.
BTn ≡ BPm where n <= m
For example, when a warp is fully diverged:
▶ The warp-shared pipeline’s actual sequence would be: PB = {0, 1, 2, 3, ..., 31} (PL=31).
▶ The perceived sequence for each thread of this warp would be:
▶ Thread 0: TB = {0} (TL=0)
▶ Thread 1: TB = {0} (TL=0)
▶ …
▶ Thread 31: TB = {0} (TL=0)
Warp-divergence affects the number of times an arrive_on(bar) operation updates the barrier. If
the invoking warp is fully converged, then the barrier is updated once. If the invoking warp is fully
diverged, then 32 individual updates are applied to the barrier.
// Collectively acquire the pipeline head stage from all producer threads:
pipeline.producer_acquire();
pipeline.producer_commit();

// Pipelined copy/compute:
for (size_t batch = 1; batch < batch_sz; ++batch) {
    // Stage indices for the compute and copy stages:
    size_t compute_stage_idx = (batch - 1) % 2;
    size_t copy_stage_idx = batch % 2;

    // Collectively acquire the pipeline head stage from all producer threads:
    pipeline.producer_acquire();
    pipeline.consumer_release();
}
A pipeline object is a double-ended queue with a head and a tail, and is used to process work in a first-in
first-out (FIFO) order. Producer threads commit work to the pipeline’s head, while consumer threads
pull work from the pipeline’s tail. In the example above, all threads are both producer and consumer
threads. The threads first commit memcpy_async operations to fetch the next batch while they wait on the previous batch of memcpy_async operations to complete.
▶ Committing work to a pipeline stage involves:
    ▶ Collectively acquiring the pipeline head from a set of producer threads using pipeline.producer_acquire().
    ▶ Submitting memcpy_async operations to the pipeline head.
    ▶ Collectively committing (advancing) the pipeline head using pipeline.producer_commit().
▶ Using a previously committed stage involves:
    ▶ Collectively waiting for the stage's operations to complete, e.g., using pipeline.consumer_wait().
    ▶ Collectively releasing the stage using pipeline.consumer_release().
    __shared__ cuda::pipeline_shared_state<
        cuda::thread_scope::thread_scope_block,
        stages_count
    > shared_state;
    auto pipeline = cuda::make_pipeline(block, &shared_state);

        // This inner loop iterates over the memory transfers, making sure that the pipeline is always full
            pipeline.producer_acquire();
            size_t shared_idx = fetch_batch % stages_count;
            size_t batch_idx = fetch_batch;
            size_t block_batch_idx = block_batch(batch_idx);
            cuda::memcpy_async(block, shared + shared_offset[shared_idx], global_in + block_batch_idx, sizeof(int) * block.size(), pipeline);
            pipeline.producer_commit();
        }

        pipeline.consumer_wait();
        int shared_idx = compute_batch % stages_count;
        int batch_idx = compute_batch;
        compute(global_out + block_batch(batch_idx), shared + shared_offset[shared_idx]);
        pipeline.consumer_release();
    }
The pipeline<thread_scope_block> primitive used above is very flexible, and supports two fea-
tures that our examples above are not using: any arbitrary subset of threads in the block can par-
ticipate in the pipeline, and from the threads that participate, any subsets can be producers, con-
sumers, or both. In the following example, threads with an “even” thread rank are producers, while
other threads are consumers:
__device__ void compute(int* global_out, int shared_in);

    // In this example, threads with "even" thread rank are producers, while threads with "odd" thread rank are consumers:
    const cuda::pipeline_role thread_role
      = block.thread_rank() % 2 == 0 ? cuda::pipeline_role::producer : cuda::pipeline_role::consumer;

    __shared__ cuda::pipeline_shared_state<
        cuda::thread_scope::thread_scope_block,
        stages_count
    > shared_state;
    cuda::pipeline pipeline = cuda::make_pipeline(block, &shared_state, thread_role);

        // This inner loop iterates over the memory transfers, making sure that the pipeline is always full
            pipeline.producer_commit();
        }
    }
    if (thread_role == cuda::pipeline_role::consumer) {
        // Only the consumer threads compute:
        pipeline.consumer_wait();
        size_t shared_idx = compute_batch % stages_count;
        size_t global_batch_idx = block_batch(compute_batch) + thread_idx;
        size_t shared_batch_idx = shared_offset[shared_idx] + thread_idx;
        compute(global_out + global_batch_idx, *(shared + shared_batch_idx));
        pipeline.consumer_release();
    }
  }
}
There are some optimizations that pipeline performs, for example, when all threads are both produc-
ers and consumers, but in general, the cost of supporting all these features cannot be fully eliminated.
For example, pipeline stores and uses a set of barriers in shared memory for synchronization, which
is not really necessary if all threads in the block participate in the pipeline.
For the particular case in which all threads in the block participate in the pipeline, we can do better
than pipeline<thread_scope_block> by using a pipeline<thread_scope_thread> combined
with __syncthreads():
template<size_t stages_count>
__global__ void with_staging_scope_thread(int* global_out, int const* global_in, size_t size, size_t batch_sz) {
    // No pipeline::shared_state needed
    cuda::pipeline<cuda::thread_scope_thread> pipeline = cuda::make_pipeline();
        pipeline.producer_commit();
    }
    pipeline.consumer_wait();
    block.sync(); // __syncthreads: All memcpy_async of all threads in the block for this stage have completed here
    pipeline.consumer_release();
  }
}
If the compute operation only reads shared memory written to by other threads in the same warp as
the current thread, __syncwarp() suffices.
▶ Requirements:
▶ dst_shared must be a pointer to the shared memory destination for the memcpy_async.
▶ src_global must be a pointer to the global memory source for the memcpy_async.
▶ size_and_align must be 4, 8, or 16.
▶ zfill <= size_and_align.
▶ size_and_align must be the alignment of dst_shared and src_global.
▶ It is a race condition for any thread to modify the source memory or observe the destination
memory prior to waiting for the memcpy_async operation to complete. Between submitting a
memcpy_async operation and waiting for its completion, any of the following actions introduces
a race condition:
▶ Loading from dst_shared.
▶ Storing to dst_shared or src_global.
▶ Applying an atomic update to dst_shared or src_global.
void __pipeline_commit();
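A hedged sketch of the pipeline primitives interface in use (kernel and buffers are hypothetical; a 1D block of at most 1024 threads is assumed):
#include <cuda_pipeline.h>

__global__ void primitives_copy(const int *global_in, int *global_out) {
    __shared__ int smem[1024];

    // 4-byte copy per thread; both pointers are 4-byte aligned.
    __pipeline_memcpy_async(&smem[threadIdx.x], &global_in[threadIdx.x], sizeof(int));
    __pipeline_commit();        // commit the batch of async copies issued above
    __pipeline_wait_prior(0);   // wait until all committed batches have completed
    __syncthreads();            // make smem written by other threads visible

    global_out[threadIdx.x] = smem[threadIdx.x] + 1;
}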
they have completed. When the operation reads from global to shared memory, any thread in the
block can wait for the data to be readable in shared memory by waiting on a Shared Memory Bar-
rier. When the bulk-asynchronous operation writes data from shared memory to global or distributed
shared memory, only the initiating thread can wait for the operation to have completed. This is ac-
complished using a bulk async-group based completion mechanism. A table describing the completion
mechanisms can be found below and in the PTX ISA.
#endif // __CUDA_MINIMUM_ARCH__

#ifndef __cccl_lib_experimental_ctk12_cp_async_exposure
static_assert(false, "libcu++ does not have experimental CTK 12 cp_async feature exposure.");
#endif // __cccl_lib_has_experimental_ctk12_cp_async_exposure
5. Wait for shared memory writes to be visible to the subsequent bulk-asynchronous copy, i.e., order
the shared memory writes in the async proxy before the next step.
6. Initiate bulk-asynchronous copy of the buffer in shared memory to global memory.
7. Wait at end of kernel for bulk-asynchronous copy to have finished reading shared memory.
#include <cuda/barrier>
using barrier = cuda::barrier<cuda::thread_scope_block>;
namespace cde = cuda::device::experimental;

// 1. a) Initialize shared memory barrier with the number of threads participating in the barrier.
//    b) Make initialized barrier visible in async proxy.
#pragma nv_diag_suppress static_var_with_dynamic_init
__shared__ barrier bar;
if (threadIdx.x == 0) {
    init(&bar, blockDim.x);               // a)
    cde::fence_proxy_async_shared_cta();  // b)
}
__syncthreads();

// 3a. Arrive on the barrier and tell how many bytes are expected to come in (the transaction count)
Barrier initialization. The barrier is initialized with the number of threads participating in the block. As
a result, the barrier will flip only if all threads have arrived on this barrier. Shared memory barriers are
described in more detail in Asynchronous Data Copies using cuda::barrier. To make the initialized barrier
visible to subsequent bulk-asynchronous copies, the fence.proxy.async.shared::cta instruction
is used. This instruction ensures that subsequent bulk-asynchronous copy operations operate on the
initialized barrier.
TMA read. The bulk-asynchronous copy instruction directs the hardware to copy a large chunk of
data into shared memory, and to update the transaction count of the shared memory barrier after
completing the read. In general, issuing as few bulk copies with as big a size as possible results in
the best performance. Because the copy can be performed asynchronously by the hardware, it is not
necessary to split the copy into smaller chunks.
The thread that initiates the bulk-asynchronous copy operation arrives at the barrier using mbarrier.
arrive.expect_tx. This tells the barrier that the thread has arrived and also how many bytes (tx /
transactions) are expected to arrive. Only a single thread has to update the expected transaction
count. If multiple threads update the transaction count, the expected transaction will be the sum of
the updates. The barrier will only flip once all threads have arrived and all bytes have arrived. Once the
barrier has flipped, the bytes are safe to read from shared memory, both by the threads as well as by
subsequent bulk-asynchronous copies. More information about barrier transaction accounting can be
found in the PTX ISA.
Barrier wait. Waiting for the barrier to flip is done using mbarrier.try_wait. It can either return
true, indicating that the wait is over, or return false, which may mean that the wait timed out. The
while loop waits for completion, and retries on time-out.
SMEM write and sync. The increment of the buffer values reads and writes to shared memory. To make
the writes visible to subsequent bulk-asynchronous copies, the fence.proxy.async.shared::cta
instruction is used. This orders the writes to shared memory before subsequent reads from bulk-
asynchronous copy operations, which read through the async proxy. So each thread first orders the
writes to objects in shared memory in the async proxy via the fence.proxy.async.shared::cta,
and these operations by all threads are ordered before the async operation performed in thread 0
using __syncthreads().
TMA write and sync. The write from shared to global memory is again initiated by a single thread. The
completion of the write is not tracked by a shared memory barrier. Instead, a thread-local mechanism
is used. Multiple writes can be batched into a so-called bulk async-group. Afterwards, the thread can
wait for all operations in this group to have completed reading from shared memory (as in the code
above) or to have completed writing to global memory, making the writes visible to the initiating thread.
For more information, refer to the PTX ISA documentation of cp.async.bulk.wait_group. Note that the
bulk-asynchronous and non-bulk asynchronous copy instructions have different async-groups: there
exist both cp.async.wait_group and cp.async.bulk.wait_group instructions.
The bulk-asynchronous instructions have specific alignment requirements on their source and desti-
nation addresses. More information can be found in the table below.
Below, the PTX instructions are ordered by their use in the one-dimensional code example.
The PTX instruction cp.async.bulk initiates a bulk-asynchronous copy from global to shared memory.
// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk
inline __device__
void cuda::device::experimental::cp_async_bulk_global_to_shared(
    void *dest, const void *src, uint32_t size, cuda::barrier<cuda::thread_scope_block> &bar
);
The PTX instruction fence.proxy.async.shared::cta waits for a thread’s shared memory writes to be-
come visible to the “async proxy”, which includes subsequent bulk-asynchronous copies. Variants of
this instruction exist that also wait for writes to distributed shared memory (in a cluster) and global
memory to become visible.
// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#parallel-synchronization-and-communication-instructions-membar
inline __device__
void cuda::device::experimental::fence_proxy_async_shared_cta();
The PTX instruction cp.async.bulk initiates a bulk-asynchronous copy from shared to global memory.
// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk
inline __device__
void cuda::device::experimental::cp_async_bulk_shared_to_global(
    void *dest, const void *src, uint32_t size
);
inline __device__
void cuda::device::experimental::cp_async_bulk_commit_group();
The PTX instruction cp.async.bulk.wait_group waits for operations in a bulk async-group to have com-
pleted. The parameter N indicates that it should wait until at most N groups are awaiting completion,
i.e., N=0 waits for all groups to have completed. The optional .read modifier indicates that the waiting
has to be done until all the bulk async operations in the specified bulk async-groups have completed
reading from their source locations. Thus, the .read variant can be expected to return earlier than
the normal variant that waits until the writes have become visible in the destination location to the
executing thread.
// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-wait-group

PFN_cuTensorMapEncodeTiled_v12000 get_cuTensorMapEncodeTiled() {
    // Get pointer to cuGetProcAddress
    cudaDriverEntryPointQueryResult driver_status;
    void* cuGetProcAddress_ptr = nullptr;
    CUDA_CHECK(cudaGetDriverEntryPoint("cuGetProcAddress", &cuGetProcAddress_ptr, cudaEnableDefault, &driver_status));
    assert(driver_status == cudaDriverEntryPointSuccess);
    PFN_cuGetProcAddress_v12000 cuGetProcAddress = reinterpret_cast<PFN_cuGetProcAddress_v12000>(cuGetProcAddress_ptr);

    return reinterpret_cast<PFN_cuTensorMapEncodeTiled_v12000>(cuTensorMapEncodeTiled_ptr);
}
Creation. Creating a tensor map requires many parameters. Among them are the base pointer to an array in global memory, the size of the array (in number of elements), the stride from one row to the next (in bytes), and the size of the shared memory buffer (in number of elements). The code below creates
a tensor map to describe a two-dimensional row-major array of size GMEM_HEIGHT x GMEM_WIDTH.
Note the order of the parameters: the fastest moving dimension comes first.
CUtensorMap tensor_map{};
// rank is the number of dimensions of the array.
constexpr uint32_t rank = 2;
uint64_t size[rank] = {GMEM_WIDTH, GMEM_HEIGHT};
// The stride is the number of bytes to traverse from the first element of one row to the next.
Host-to-device transfer. Bulk tensor asynchronous operations require the tensor map to be in
immutable memory. This can be achieved by using constant memory or by passing the tensor map as
a const __grid_constant__ parameter to a kernel. When passing the tensor map as a parameter,
some versions of the GCC C++ compiler issue the warning “the ABI for passing parameters with 64-
byte alignment has changed in GCC 4.6”. This warning can be ignored.
__global__ void kernel(const __grid_constant__ CUtensorMap tensor_map)
{
∕∕ Use tensor_map here.
}
int main() {
CUtensorMap map;
∕∕ [ ..Initialize map.. ]
kernel<<<1, 1>>>(map);
}
As an alternative to the __grid_constant__ kernel parameter, a global constant variable can be used.
The following example copies the tensor map to global device memory. Using a pointer to a tensor map in global device memory is undefined behavior and will lead to silent, difficult-to-track-down bugs.
__device__ CUtensorMap global_tensor_map;
__global__ void kernel(CUtensorMap *tensor_map)
{
    // Do *not* use tensor_map here. Using a global memory pointer is
    // undefined behavior and can fail silently and unreliably.
}
int main() {
CUtensorMap local_tensor_map;
∕∕ [ ..Initialize map.. ]
    cudaMemcpyToSymbol(global_tensor_map, &local_tensor_map, sizeof(CUtensorMap));
kernel<<<1, 1>>>(global_tensor_map);
}
Use. The kernel below loads a 2D tile of size SMEM_HEIGHT x SMEM_WIDTH from a larger 2D array. The
top-left corner of the tile is indicated by the indices x and y. The tile is loaded into shared memory,
modified, and written back to global memory.
#include <cuda.h>         // CUtensorMap
#include <cuda/barrier>
using barrier = cuda::barrier<cuda::thread_scope_block>;
namespace cde = cuda::device::experimental;

  // Initialize shared memory barrier with the number of threads participating in the barrier.
  #pragma nv_diag_suppress static_var_with_dynamic_init
  __shared__ barrier bar;

  if (threadIdx.x == 0) {
    // Initialize barrier. All `blockDim.x` threads in block participate.
    init(&bar, blockDim.x);
    // Make initialized barrier visible in async proxy.
    cde::fence_proxy_async_shared_cta();
  }
  // Syncthreads so initialized barrier is visible to all threads.
  __syncthreads();
  barrier::arrival_token token;
  if (threadIdx.x == 0) {
    // Initiate bulk tensor copy.
    cde::cp_async_bulk_tensor_2d_global_to_shared(&smem_buffer, &tensor_map, x, y, bar);
    // Arrive on the barrier and tell how many bytes are expected to come in.
    token = cuda::device::barrier_arrive_tx(bar, 1, sizeof(smem_buffer));
  } else {
    // Other threads just arrive.
    token = bar.arrive();
  }
  // Wait for the data to have arrived.
  bar.wait(std::move(token));
Negative indices and out of bounds. When part of the tile that is being read from global to shared
memory is out of bounds, the shared memory that corresponds to the out of bounds area is zero-
filled. The top-left corner indices of the tile may also be negative. When writing from shared to global
memory, parts of the tile may be out of bounds, but the top left corner cannot have any negative
indices.
Size and stride. The size of a tensor is the number of elements along one dimension. All sizes must
be greater than one. The stride is the number of bytes between elements of the same dimension. For
instance, a 4 x 4 matrix of integers has sizes 4 and 4. Since it has 4 bytes per element, the strides are 4
and 16 bytes. Due to alignment requirements, a 4 x 3 row-major matrix of integers must have strides of
4 and 16 bytes as well. Each row is padded with 4 extra bytes to ensure that the start of the next row is
aligned to 16 bytes. For more information regarding alignment, refer to Table Alignment requirements
for multi-dimensional bulk tensor asynchronous copy operations in Compute Capability 9.0.
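As an illustration (hypothetical arrays, following the creation code's convention that the innermost stride is implied by the element size), the padded 4 x 3 row-major case above could be described as:
// 4 x 3 row-major int matrix, each row padded to 16 bytes:
uint64_t size[2]   = {3, 4};    // fastest-moving dimension (elements per row) first
uint64_t stride[1] = {16};      // bytes from the start of one row to the next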
Below, the PTX instructions are ordered by their use in the example code above. The functions already
covered in One-dimensional TMA PTX wrappers are not repeated here.
The cp.async.bulk.tensor instructions initiate a bulk tensor asynchronous copy between global and
shared memory. The wrappers below read from global to shared memory and write from shared to
global memory.
// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-tensor
inline __device__
void cuda::device::experimental::cp_async_bulk_tensor_1d_global_to_shared(
    void *dest, const CUtensorMap *tensor_map, int c0, cuda::barrier<cuda::thread_scope_block> &bar
);

// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-tensor
inline __device__
void cuda::device::experimental::cp_async_bulk_tensor_2d_global_to_shared(
    void *dest, const CUtensorMap *tensor_map, int c0, int c1, cuda::barrier<cuda::thread_scope_block> &bar
);

// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-tensor
inline __device__
void cuda::device::experimental::cp_async_bulk_tensor_3d_global_to_shared(
    void *dest, const CUtensorMap *tensor_map, int c0, int c1, int c2, cuda::barrier<cuda::thread_scope_block> &bar
);

// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-tensor
inline __device__
void cuda::device::experimental::cp_async_bulk_tensor_4d_global_to_shared(
    void *dest, const CUtensorMap *tensor_map, int c0, int c1, int c2, int c3, cuda::barrier<cuda::thread_scope_block> &bar
);

// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-tensor
inline __device__
void cuda::device::experimental::cp_async_bulk_tensor_5d_global_to_shared(
    void *dest, const CUtensorMap *tensor_map, int c0, int c1, int c2, int c3, int c4, cuda::barrier<cuda::thread_scope_block> &bar
);

// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-tensor
inline __device__
void cuda::device::experimental::cp_async_bulk_tensor_1d_shared_to_global(
    const CUtensorMap *tensor_map, int c0, const void *src
);

// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-tensor
inline __device__
void cuda::device::experimental::cp_async_bulk_tensor_2d_shared_to_global(
    const CUtensorMap *tensor_map, int c0, int c1, const void *src
);

// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-tensor
inline __device__
void cuda::device::experimental::cp_async_bulk_tensor_3d_shared_to_global(
    const CUtensorMap *tensor_map, int c0, int c1, int c2, const void *src
);

// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-tensor
inline __device__
void cuda::device::experimental::cp_async_bulk_tensor_4d_shared_to_global(
    const CUtensorMap *tensor_map, int c0, int c1, int c2, int c3, const void *src
);

// https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions-cp-async-bulk-tensor
inline __device__
void cuda::device::experimental::cp_async_bulk_tensor_5d_shared_to_global(
    const CUtensorMap *tensor_map, int c0, int c1, int c2, int c3, int c4, const void *src
);
void __prof_trigger(int counter);
increments by one per warp the per-multiprocessor hardware counter of index counter. Counters 8
to 15 are reserved and should not be used by applications.
The value of counters 0, 1, …, 7 can be obtained via nvprof by nvprof --events prof_trigger_0x
where x is 0, 1, …, 7. All counters are reset before each kernel launch (note that when collecting coun-
ters, kernel launches are synchronous as mentioned in Concurrent Execution between Host and De-
vice).
10.31. Assertion
Assertion is only supported by devices of compute capability 2.x and higher.
void assert(int expression);
stops the kernel execution if expression is equal to zero. If the program is run within a debugger,
this triggers a breakpoint and the debugger can be used to inspect the current state of the device.
Otherwise, each thread for which expression is equal to zero prints a message to stderr after syn-
chronization with the host via cudaDeviceSynchronize(), cudaStreamSynchronize(), or cud-
aEventSynchronize(). The format of this message is as follows:
<filename>:<line number>:<function>:
block: [blockIdx.x,blockIdx.y,blockIdx.z],
thread: [threadIdx.x,threadIdx.y,threadIdx.z]
Assertion `<expression>` failed.
Any subsequent host-side synchronization calls made for the same device will return cudaErro-
rAssert. No more commands can be sent to this device until cudaDeviceReset() is called to reini-
tialize the device.
If expression is different from zero, the kernel execution is unaffected.
For example, the following program from source file test.cu
#include <assert.h>

__global__ void testAssert(void)
{
    int is_one = 1;
    int should_be_one = 0;

    // This will have no effect
    assert(is_one);

    // This will halt kernel execution
    assert(should_be_one);
}

int main(int argc, char* argv[])
{
    testAssert<<<1, 1>>>();
    cudaDeviceSynchronize();

    return 0;
}
will output:
test.cu:19: void testAssert(): block: [0,0,0], thread: [0,0,0] Assertion `should_be_
,→one` failed.
Assertions are for debugging purposes. They can affect performance and it is therefore recommended to disable them in production code. They can be disabled at compile time by defining the NDEBUG preprocessor macro before including assert.h. Note that expression should not be an expression with side effects (such as (++i > 0), for example); otherwise disabling the assertion will affect the functionality of the code.
10.32. Trap function
void __trap();
The execution of the kernel is aborted and an interrupt is raised in the host program.
10.34. Formatted Output
Formatted output is only supported by devices of compute capability 2.x and higher.
int printf(const char* format[, arg, ...]);
prints formatted output from a kernel to a host-side output stream.
The in-kernel printf() function behaves in a similar way to the standard C-library printf() function, and the user is referred to the host system's manual pages for a complete description of printf() behavior. In essence, the string passed in as format is output to a stream on the host, with substitutions made from the argument list wherever a format specifier is encountered. Supported format specifiers are listed below.
The printf() command is executed as any other device-side function: per-thread, and in the context
of the calling thread. From a multi-threaded kernel, this means that a straightforward call to printf()
will be executed by every thread, using that thread’s data as specified. Multiple versions of the output
string will then appear at the host stream, once for each thread which encountered the printf().
It is up to the programmer to limit the output to a single thread if only a single output string is desired
(see Examples for an illustrative example).
Unlike the C-standard printf(), which returns the number of characters printed, CUDA’s printf()
returns the number of arguments parsed. If no arguments follow the format string, 0 is returned. If
the format string is NULL, -1 is returned. If an internal error occurs, -2 is returned.
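As a hedged illustration (the kernel name and format strings below are arbitrary, not from this guide), the return value counts parsed arguments rather than printed characters:

// Illustrative sketch of printf() return values in device code.
__global__ void printf_return_values()
{
    int two  = printf("x=%d y=%f\n", 1, 2.0f); // returns 2: two arguments parsed
    int zero = printf("no arguments\n");       // returns 0: no arguments follow the format
    (void)two; (void)zero;                     // silence unused-variable warnings
}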
10.34.2. Limitations
Final formatting of the printf() output takes place on the host system. This means that the format string must be understood by the host system's compiler and C library. Every effort has been made to ensure that the format specifiers supported by CUDA's printf function form a universal subset of those supported by the most common host compilers, but exact behavior will be host-OS-dependent.
As described in Format Specifiers, printf() will accept all combinations of valid flags and types. This
is because it cannot determine what will and will not be valid on the host system where the final output
is formatted. The effect of this is that output may be undefined if the program emits a format string
which contains invalid combinations.
The printf() command can accept at most 32 arguments in addition to the format string. Additional
arguments beyond this will be ignored, and the format specifier output as-is.
Owing to the differing size of the long type on 64-bit Windows platforms (four bytes on 64-bit Win-
dows platforms, eight bytes on other 64-bit platforms), a kernel which is compiled on a non-Windows
64-bit machine but then run on a win64 machine will see corrupted output for all format strings which
include “%ld”. It is recommended that the compilation platform matches the execution platform to
ensure safety.
The output buffer for printf() is set to a fixed size before kernel launch (see Associated Host-Side
API). It is circular and if more output is produced during kernel execution than can fit in the buffer,
older output is overwritten. It is flushed only when one of these actions is performed:
▶ Kernel launch via <<<>>> or cuLaunchKernel() (at the start of the launch, and if the
CUDA_LAUNCH_BLOCKING environment variable is set to 1, at the end of the launch as well),
▶ Synchronization via cudaDeviceSynchronize(), cuCtxSynchronize(), cudaStreamSyn-
chronize(), cuStreamSynchronize(), cudaEventSynchronize(), or cuEventSynchro-
nize(),
▶ Memory copies via any blocking version of cudaMemcpy*() or cuMemcpy*(),
▶ Module loading/unloading via cuModuleLoad() or cuModuleUnload(),
▶ Context destruction via cudaDeviceReset() or cuCtxDestroy(),
▶ Prior to executing a stream callback added by cudaStreamAddCallback() or cuStreamAddCallback().
Note that the buffer is not flushed automatically when the program exits. The user must call cudaDe-
viceReset() or cuCtxDestroy() explicitly, as shown in the examples below.
Internally printf() uses a shared data structure and so it is possible that calling printf() might
change the order of execution of threads. In particular, a thread which calls printf() might take a
longer execution path than one which does not call printf(), and that path length is dependent upon
the parameters of the printf(). Note, however, that CUDA makes no guarantees of thread execution
order except at explicit __syncthreads() barriers, so it is impossible to tell whether execution order
has been modified by printf() or by other scheduling behavior in the hardware.
10.34.4. Examples
The following code sample:
#include <stdio.h>

__global__ void helloCUDA(float f)
{
    printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

int main()
{
    helloCUDA<<<1, 5>>>(1.2345f);
    cudaDeviceSynchronize();
    return 0;
}
will output:
Hello thread 2, f=1.2345
Hello thread 1, f=1.2345
Hello thread 4, f=1.2345
Hello thread 0, f=1.2345
Hello thread 3, f=1.2345
Notice how each thread encounters the printf() command, so there are as many lines of output
as there were threads launched in the grid. As expected, global values (i.e., float f) are common
between all threads, and local values (i.e., threadIdx.x) are distinct per-thread.
The following code sample:
#include <stdio.h>

__global__ void helloCUDA(float f)
{
    if (threadIdx.x == 0)
        printf("Hello thread %d, f=%f\n", threadIdx.x, f);
}

int main()
{
    helloCUDA<<<1, 5>>>(1.2345f);
    cudaDeviceSynchronize();
    return 0;
}
will output:
Hello thread 0, f=1.2345
Self-evidently, the if() statement limits which threads will call printf, so that only a single line of
output is seen.
10.35. Dynamic Global Memory Allocation and Operations
Dynamic global memory allocation and operations are only supported by devices of compute capability 2.x and higher.
__host__ __device__ void* malloc(size_t size);
__device__ void *__nv_aligned_device_malloc(size_t size, size_t align);
__host__ __device__ void free(void* ptr);
allocate and free memory dynamically from a fixed-size heap in global memory.
__host__ __device__ void* memcpy(void* dest, const void* src, size_t size);
copies size bytes from the memory location pointed to by src to the memory location pointed to by dest.
__host__ __device__ void* memset(void* ptr, int value, size_t size);
sets size bytes of the memory block pointed to by ptr to value (interpreted as an unsigned char).
The CUDA in-kernel malloc() function allocates at least size bytes from the device heap and returns
a pointer to the allocated memory or NULL if insufficient memory exists to fulfill the request. The
returned pointer is guaranteed to be aligned to a 16-byte boundary.
The CUDA in-kernel __nv_aligned_device_malloc() function allocates at least size bytes from
the device heap and returns a pointer to the allocated memory or NULL if insufficient memory exists
to fulfill the requested size or alignment. The address of the allocated memory will be a multiple of
align. align must be a non-zero power of 2.
The CUDA in-kernel free() function deallocates the memory pointed to by ptr, which must have been returned by a previous call to malloc() or __nv_aligned_device_malloc(). If ptr is NULL, the call to free() is ignored. Repeated calls to free() with the same ptr have undefined behavior.
The memory allocated by a given CUDA thread via malloc() or __nv_aligned_device_malloc()
remains allocated for the lifetime of the CUDA context, or until it is explicitly released by a call to
free(). It can be used by any other CUDA threads even from subsequent kernel launches. Any CUDA
thread may free memory allocated by another thread, but care should be taken to ensure that the
same pointer is not freed more than once.
10.35.3. Examples
10.35.3.1 Per Thread Allocation
#include <stdlib.h>
#include <stdio.h>

__global__ void mallocTest()
{
    size_t size = 123;
    char* ptr = (char*)malloc(size);
    memset(ptr, 0, size);
    printf("Thread %d got pointer: %p\n", threadIdx.x, ptr);
    free(ptr);
}

int main()
{
    // Set a heap size of 128 megabytes. Note that this must
    // be done before any kernel is launched.
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
    mallocTest<<<1, 5>>>();
    cudaDeviceSynchronize();
    return 0;
}
will output:
Thread 0 got pointer: 00057020
Thread 1 got pointer: 0005708c
Thread 2 got pointer: 000570f8
Thread 3 got pointer: 00057164
Thread 4 got pointer: 000571d0
Notice how each thread encounters the malloc() and memset() commands and so receives and
initializes its own allocation. (Exact pointer values will vary: these are illustrative.)
10.35.3.2 Per Thread Block Allocation
#include <stdlib.h>

__global__ void mallocTest()
{
    __shared__ int* data;

    // The first thread in the block does the allocation and then
    // shares the pointer with all other threads through shared memory,
    // so that access can easily be coalesced.
    // 64 bytes per thread are allocated.
    if (threadIdx.x == 0) {
        size_t size = blockDim.x * 64;
        data = (int*)malloc(size);
    }
    __syncthreads();

    // Check for failure
    if (data == NULL)
        return;

    // Threads index into the memory, ensuring coalescence
    int* ptr = data;
    for (int i = 0; i < 64; ++i)
        ptr[i * blockDim.x + threadIdx.x] = threadIdx.x;

    // Ensure all threads complete before freeing
    __syncthreads();

    // Only one thread may free the memory!
    if (threadIdx.x == 0)
        free(data);
}
int main()
{
cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
mallocTest<<<10, 128>>>();
cudaDeviceSynchronize();
return 0;
}
10.35.3.3 Allocation Persisting Between Kernel Launches
#include <stdlib.h>
#include <stdio.h>

#define NUM_BLOCKS 20

__device__ int* dataptr[NUM_BLOCKS]; // Per-block pointer

__global__ void allocmem()
{
    // Only the first thread in the block does the allocation,
    // since we want only one allocation per block.
    if (threadIdx.x == 0)
        dataptr[blockIdx.x] = (int*)malloc(blockDim.x * 4);
    __syncthreads();

    // Check for failure
    if (dataptr[blockIdx.x] == NULL)
        return;

    // Zero the data with all threads in parallel
    dataptr[blockIdx.x][threadIdx.x] = 0;
}

// Simple example: store thread ID into each element
__global__ void usemem()
{
    int* ptr = dataptr[blockIdx.x];
    if (ptr != NULL)
        ptr[threadIdx.x] += threadIdx.x;
}

// Print the content of the buffer before freeing it
__global__ void freemem()
{
    int* ptr = dataptr[blockIdx.x];
    if (ptr != NULL)
        printf("Block %d, Thread %d: final value = %d\n",
               blockIdx.x, threadIdx.x, ptr[threadIdx.x]);

    // Only free from one thread!
    if (threadIdx.x == 0)
        free(ptr);
}
int main()
{
cudaDeviceSetLimit(cudaLimitMallocHeapSize, 128*1024*1024);
∕∕ Allocate memory
allocmem<<< NUM_BLOCKS, 10 >>>();
∕∕ Use memory
usemem<<< NUM_BLOCKS, 10 >>>();
usemem<<< NUM_BLOCKS, 10 >>>();
usemem<<< NUM_BLOCKS, 10 >>>();
∕∕ Free memory
freemem<<< NUM_BLOCKS, 10 >>>();
cudaDeviceSynchronize();
return 0;
}
The arguments to the execution configuration (the grid dimension Dg, block dimension Db, dynamically allocated shared memory Ns, and stream S of the <<<Dg, Db, Ns, S>>> syntax) are evaluated before the actual function arguments. The function call will fail if Dg or Db are greater than the maximum sizes allowed for the device as specified in Compute Capabilities, or if Ns is greater than the maximum amount of shared memory available on the device, minus the amount of shared memory required for static allocation.
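As a minimal, hedged sketch (the kernel name myKernel and the chosen sizes are illustrative assumptions, not from this guide), a host program can check an intended configuration against the device limits before launching:

// Illustrative sketch: validate an execution configuration against device limits.
__global__ void myKernel(int *out);

void launchChecked(int *out)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    dim3 block(1024);               // Db
    dim3 grid(256);                 // Dg
    size_t dynamicSmem = 32 * 1024; // Ns, in bytes

    if (block.x * block.y * block.z <= (unsigned)prop.maxThreadsPerBlock &&
        dynamicSmem <= prop.sharedMemPerBlock)
        myKernel<<<grid, block, dynamicSmem>>>(out);
}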
Compute capability 9.0 and above allows users to specify compile-time thread block cluster dimensions, so that the kernel can use the cluster hierarchy in CUDA. Compile-time cluster dimensions can be specified using __cluster_dims__([x, [y, [z]]]). The example below shows a compile-time cluster size of 2 in the X dimension and 1 in the Y and Z dimensions.
__global__ void __cluster_dims__(2, 1, 1) Func(float* parameter);
Thread block cluster dimensions can also be specified at runtime, and a kernel using clusters can be launched with the cudaLaunchKernelEx API. The API takes a configuration argument of type cudaLaunchConfig_t, the kernel function pointer, and the kernel arguments. The runtime kernel configuration is shown in the example below.
__global__ void Func(float* parameter);

// Kernel invocation with runtime cluster size
{
    cudaLaunchConfig_t config = {0};
    // The grid dimension is not affected by cluster launch, and is still
    // enumerated using the number of blocks.
    // The grid dimension must be a multiple of the cluster size.
    config.gridDim = numBlocks;
    config.blockDim = threadsPerBlock;

    cudaLaunchAttribute attribute[1];
    attribute[0].id = cudaLaunchAttributeClusterDimension;
    attribute[0].val.clusterDim.x = 2; // Cluster size in X-dimension
    attribute[0].val.clusterDim.y = 1;
    attribute[0].val.clusterDim.z = 1;
    config.attrs = attribute;
    config.numAttrs = 1;

    float* parameter;
    cudaLaunchKernelEx(&config, Func, parameter);
}
▶ maxThreadsPerBlock specifies the maximum number of threads per block with which the application will ever launch MyKernel(); it compiles to the .maxntid PTX directive.
▶ minBlocksPerMultiprocessor is optional and specifies the desired minimum number of resident blocks per multiprocessor; it compiles to the .minnctapersm PTX directive.
▶ maxBlocksPerCluster is optional and specifies the desired maximum number of thread blocks per cluster with which the application will ever launch MyKernel(); it compiles to the .maxclusterrank PTX directive.
If launch bounds are specified, the compiler first derives from them the upper limit L on the number
of registers the kernel should use to ensure that minBlocksPerMultiprocessor blocks (or a sin-
gle block if minBlocksPerMultiprocessor is not specified) of maxThreadsPerBlock threads can
reside on the multiprocessor (see Hardware Multithreading for the relationship between the number
of registers used by a kernel and the number of registers allocated per block). The compiler then
optimizes register usage in the following way:
▶ If the initial register usage is higher than L, the compiler reduces it further until it becomes less than or equal to L, usually at the expense of more local memory usage and/or a higher number of instructions;
For example, launch bounds that depend on the architecture can be expressed using the __CUDA_ARCH__ macro introduced in Application Compatibility:

#define THREADS_PER_BLOCK          256
#if __CUDA_ARCH__ >= 200
    #define MY_KERNEL_MAX_THREADS  (2 * THREADS_PER_BLOCK)
    #define MY_KERNEL_MIN_BLOCKS   3
#else
    #define MY_KERNEL_MAX_THREADS  THREADS_PER_BLOCK
    #define MY_KERNEL_MIN_BLOCKS   2
#endif

// Device code
__global__ void
__launch_bounds__(MY_KERNEL_MAX_THREADS, MY_KERNEL_MIN_BLOCKS)
MyKernel(...)
{
...
}
In the common case where MyKernel is invoked with the maximum number of threads per
block (specified as the first parameter of __launch_bounds__()), it is tempting to use
MY_KERNEL_MAX_THREADS as the number of threads per block in the execution configuration:
∕∕ Host code
MyKernel<<<blocksPerGrid, MY_KERNEL_MAX_THREADS>>>(...);
This will not work however, since __CUDA_ARCH__ is undefined in host code as mentioned in Application Compatibility, so MyKernel will launch with 256 threads per block even when __CUDA_ARCH__ is greater than or equal to 200. Instead the number of threads per block should be determined:
▶ Either at compile time using a macro that does not depend on __CUDA_ARCH__, for example

// Host code
MyKernel<<<blocksPerGrid, THREADS_PER_BLOCK>>>(...);

▶ Or at runtime based on the compute capability

// Host code
cudaGetDeviceProperties(&deviceProp, device);
int threadsPerBlock =
        (deviceProp.major >= 2 ?
                 2 * THREADS_PER_BLOCK : THREADS_PER_BLOCK);
MyKernel<<<blocksPerGrid, threadsPerBlock>>>(...);
Register usage is reported by the --ptxas-options=-v compiler option. The number of resident blocks can be derived from the occupancy reported by the CUDA profiler (see Device Memory Accesses for a definition of occupancy).
The __launch_bounds__() and __maxnreg__() qualifiers cannot be applied to the same kernel.
Register usage can also be controlled for all __global__ functions in a file using the maxrregcount
compiler option. The value of maxrregcount is ignored for functions with launch bounds.
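As a hedged illustration (the kernel name and the register count are arbitrary assumptions, not from this guide), a per-kernel cap can alternatively be expressed directly in the source with the __maxnreg__() qualifier:

// Illustrative sketch: cap this kernel at 32 registers per thread.
__global__ void __maxnreg__(32) cappedKernel(float* out)
{
    out[threadIdx.x] = threadIdx.x * 2.0f;
}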
struct S1_t { static const int value = 4; };

template <int X, typename T2>
__device__ void foo(int *p1, int *p2) {

// no argument specified, loop will be completely unrolled
#pragma unroll
for (int i = 0; i < 12; ++i)
    p1[i] += p2[i]*2;

// unroll value = 8
#pragma unroll (X+1)
for (int i = 0; i < 12; ++i)
    p1[i] += p2[i]*4;

// unroll value = 1, loop unrolling disabled
#pragma unroll 1
for (int i = 0; i < 12; ++i)
    p1[i] += p2[i]*8;

// unroll value = 4
#pragma unroll (T2::value)
for (int i = 0; i < 12; ++i)
    p1[i] += p2[i]*16;
}

__global__ void bar(int *p1, int *p2) {
    foo<7, S1_t>(p1, p2);
}
▶ vabsdiff2, vabsdiff4
▶ vmin2, vmin4
▶ vmax2, vmax4
▶ vset2, vset4
PTX instructions, such as the SIMD video instructions, can be included in CUDA programs by way of
the assembler, asm(), statement.
The basic syntax of an asm() statement is:
asm("template-string" : "constraint"(output) : "constraint"(input));
This is an example of using the vabsdiff4 PTX instruction:

asm("vabsdiff4.u32.u32.u32.add" " %0, %1, %2, %3;": "=r" (result):"r" (A), "r" (B), "r" (C));

This uses the vabsdiff4 instruction to compute an integer quad byte SIMD sum of absolute differ-
ences. The absolute difference value is computed for each byte of the unsigned integers A and B in
SIMD fashion. The optional accumulate operation (.add) is specified to sum these differences.
Refer to the document “Using Inline PTX Assembly in CUDA” for details on using the assembly state-
ment in your code. Refer to the PTX ISA documentation (“Parallel Thread Execution ISA Version 3.0”
for example) for details on the PTX instructions for the version of PTX that you are using.
The diagnostic affected is specified using an error number shown in a warning message. Any diag-
nostic may be overridden to be an error, but only warnings may have their severity suppressed or be
restored to a warning after being promoted to an error. The nv_diag_default pragma is used to
return the severity of a diagnostic to the one that was in effect before any pragmas were issued (i.e.,
the normal severity of the message as modified by any command-line options). The following example
suppresses the "declared but never referenced" warning on the declaration of foo:
#pragma nv_diag_suppress 177
void foo()
{
int i=0;
}
#pragma nv_diag_default 177
void bar()
{
    int i=0;
}
The following pragmas may be used to save and restore the current diagnostic pragma state:
#pragma nv_diagnostic push
#pragma nv_diagnostic pop
Examples:
#pragma nv_diagnostic push
#pragma nv_diag_suppress 177
void foo()
{
int i=0;
}
#pragma nv_diagnostic pop
void bar()
{
int i=0;
}
Note that the pragmas only affect the nvcc CUDA frontend compiler; they have no effect on the host
compiler.
Removal Notice: Support for diagnostic pragmas without the nv_ prefix has been removed in CUDA 12.0. If such pragmas appear inside device code, the warning unrecognized #pragma in device code will be emitted; otherwise they will be passed to the host compiler. If they are intended for CUDA code, use the pragmas with the nv_ prefix instead.
11.1. Introduction
Cooperative Groups is an extension to the CUDA programming model, introduced in CUDA 9, for orga-
nizing groups of communicating threads. Cooperative Groups allows developers to express the gran-
ularity at which threads are communicating, helping them to express richer, more efficient parallel
decompositions.
Historically, the CUDA programming model has provided a single, simple construct for synchronizing
cooperating threads: a barrier across all threads of a thread block, as implemented with the __sync-
threads() intrinsic function. However, programmers would like to define and synchronize groups
of threads at other granularities to enable greater performance, design flexibility, and software reuse
in the form of “collective” group-wide function interfaces. In an effort to express broader patterns
of parallel interaction, many performance-oriented programmers have resorted to writing their own
ad hoc and unsafe primitives for synchronizing threads within a single warp, or across sets of thread
blocks running on a single GPU. Whilst the performance improvements achieved have often been valu-
able, this has resulted in an ever-growing collection of brittle code that is expensive to write, tune, and
maintain over time and across GPU generations. Cooperative Groups addresses this by providing a
safe and future-proof mechanism to enable performant code.
#include <cooperative_groups.h>

// Optionally include for memcpy_async() collective
#include <cooperative_groups/memcpy_async.h>

// Optionally include for reduce() collective
#include <cooperative_groups/reduce.h>

// Optionally include for inclusive_scan() and exclusive_scan() collectives
#include <cooperative_groups/scan.h>

namespace cg = cooperative_groups;
The code can be compiled in a normal way using nvcc, however if you wish to use memcpy_async,
reduce or scan functionality and your host compiler’s default dialect is not C++11 or higher, then you
must add --std=c++11 to the command line.
All threads in the thread block must arrive at the __syncthreads() barrier, however, this constraint
is hidden from the developer who might want to use sum(…). With Cooperative Groups, a better way
of writing this would be:
__device__ int sum(const thread_block& g, int *x, int n) {
    // ...
    g.sync();
    return total;
}
Any CUDA programmer is already familiar with a certain group of threads: the thread block. The Co-
operative Groups extension introduces a new datatype, thread_block, to explicitly represent this
concept within the kernel.
class thread_block;
Constructed via:
thread_block g = this_thread_block();
Example:
/// Loading an integer from global into shared memory
__global__ void kernel(int *globalInput) {
    __shared__ int x;
    thread_block g = this_thread_block();
    // Choose a leader in the thread block
    if (g.thread_rank() == 0) {
        // load from global into shared for all threads to work with
        x = (*globalInput);
    }
    // After loading data into shared memory, you want to synchronize
    // if all threads in your thread block need to see it
    g.sync(); // equivalent to __syncthreads();
}
Note that all threads in the group must participate in collective operations, or the behavior is undefined.
Related: The thread_block datatype is derived from the more generic thread_group datatype,
which can be used to represent a wider class of groups.
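As a minimal hedged sketch (the function name is illustrative, not from this guide), a device function can accept any group type through the generic thread_group handle:

// Illustrative sketch: works for a thread_block, a tile, or a coalesced group.
__device__ int rank_of(const cooperative_groups::thread_group& g)
{
    g.sync();               // collective: every thread in g must call this
    return g.thread_rank(); // rank of the calling thread within g
}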
This group object represents all the threads launched in a single cluster. Refer to Thread Block Clusters.
The APIs are available on all hardware with Compute Capability 9.0+. When a non-cluster grid is launched, the APIs assume a 1x1x1 cluster.
class cluster_group;
Constructed via:
cluster_group g = this_cluster();
static T* map_shared_rank(T *addr, int rank): Obtain the address of a shared memory
variable of another block in the cluster
Legacy member functions (aliases):
static unsigned int size(): Total number of threads in the group (alias of num_threads())
This group object represents all the threads launched in a single grid. APIs other than sync() are
available at all times, but to be able to synchronize across the grid, you need to use the cooperative
launch API.
class grid_group;
Constructed via:
grid_group g = this_grid();
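A minimal hedged sketch of grid-wide synchronization follows; the kernel must be launched with the cooperative launch API described later, and the kernel name and buffer are illustrative assumptions:

// Illustrative sketch: all blocks produce data, synchronize, then consume it.
__global__ void two_phase(int *data, int n)
{
    namespace cg = cooperative_groups;
    cg::grid_group grid = cg::this_grid();

    for (unsigned long long i = grid.thread_rank(); i < (unsigned long long)n; i += grid.num_threads())
        data[i] = (int)i;   // phase 1: produce

    grid.sync();            // every thread of every block reaches this point

    // phase 2: any thread may now safely read what any other block wrote
}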
This group object represents all the threads launched across all devices of a multi-device cooperative launch. Unlike grid_group, all the APIs require that you have used the appropriate launch API.
class multi_grid_group;
Constructed via:
∕∕ Kernel must be launched with the cooperative multi-device API
multi_grid_group g = this_multi_grid();
A templated version of a tiled group, where a template parameter is used to specify the size of the tile
- with this known at compile time there is the potential for more optimal execution.
template <unsigned int Size, typename ParentT = void>
class thread_block_tile;
Constructed via:
template <unsigned int Size, typename ParentT>
_CG_QUALIFIER thread_block_tile<Size, ParentT> tiled_partition(const ParentT& g)
Size must be a power of 2 and less than or equal to 1024. The Notes section below describes the extra steps needed to create tiles of size larger than 32 on hardware with Compute Capability 7.5 or lower.
ParentT is the parent-type from which this group was partitioned. It is automatically inferred, but a
value of void will store this information in the group handle rather than in the type.
Public Member Functions:
void sync() const: Synchronize the threads named in the group
unsigned long long num_threads() const: Total number of threads in the group
unsigned long long thread_rank() const: Rank of the calling thread within [0, num_threads)
unsigned long long meta_group_size() const: Returns the number of groups created when
the parent group was partitioned.
unsigned long long meta_group_rank() const: Linear rank of the group within the set of tiles
partitioned from a parent group (bounded by meta_group_size)
T shfl(T var, unsigned int src_rank) const: Refer to Warp Shuffle Functions, Note: For sizes
larger than 32 all threads in the group have to specify the same src_rank, otherwise the behavior is
undefined.
T shfl_up(T var, int delta) const: Refer to Warp Shuffle Functions, available only for sizes
lower or equal to 32.
T shfl_down(T var, int delta) const: Refer to Warp Shuffle Functions, available only for sizes
lower or equal to 32.
T shfl_xor(T var, int delta) const: Refer to Warp Shuffle Functions, available only for sizes
lower or equal to 32.
T any(int predicate) const: Refer to Warp Vote Functions
T all(int predicate) const: Refer to Warp Vote Functions
T ballot(int predicate) const: Refer to Warp Vote Functions, available only for sizes lower or
equal to 32.
unsigned int match_any(T val) const: Refer to Warp Match Functions, available only for sizes
lower or equal to 32.
unsigned int match_all(T val, int &pred) const: Refer to Warp Match Functions, available
only for sizes lower or equal to 32.
Legacy member functions (aliases):
unsigned long long size() const: Total number of threads in the group (alias of
num_threads())
Notes:
▶ When the thread_block_tile templated data structure is used, the size of the group is passed to the tiled_partition call as a template parameter rather than as an argument.
▶ shfl, shfl_up, shfl_down, and shfl_xor functions accept objects of any type when
compiled with C++11 or later. This means it’s possible to shuffle non-integral types as long as
they satisfy the below constraints:
▶ Qualifies as trivially copyable i.e., is_trivially_copyable<T>::value == true
▶ sizeof(T) <= 32 for tile sizes lower or equal 32, sizeof(T) <= 8 for larger tiles
▶ On hardware with Compute Capability 7.5 or lower, tiles of size larger than 32 need a small amount of memory reserved for them. This can be done using the cooperative_groups::block_tile_memory struct template, which has to reside in either shared or global memory.
template <unsigned int MaxBlockSize = 1024>
struct block_tile_memory;
MaxBlockSize Specifies the maximal number of threads in the current thread block. This pa-
rameter can be used to minimize the shared memory usage of block_tile_memory in kernels
launched only with smaller thread counts.
/// The latter has the provenance encoded in the type, while the first stores it in the handle
/// The following code will create tiles of size 128 on all Compute Capabilities.
/// block_tile_memory can be omitted on Compute Capability 8.0 or higher.
__global__ void kernel(...) {
    // reserve shared memory for thread_block_tile usage,
    // specify that block size will be at most 256 threads.
    __shared__ block_tile_memory<256> shared;
    thread_block thb = this_thread_block(shared);

    // ...
}
Developers might have warp-synchronous code in which they made implicit assumptions about the warp size and coded around that number. Now this needs to be specified explicitly.

__global__ void cooperative_kernel(...) {
    // obtain default "current thread block" group
    thread_block my_block = this_thread_block();

    // subdivide into 32-thread, tiled subgroups
    // Tiled subgroups evenly partition a parent group into
    // adjacent sets of threads - in this case each one warp in size
    auto my_tile = tiled_partition<32>(my_block);

    // This operation will be performed by only the
    // first 32-thread tile of each block
    if (my_block.thread_rank() < 32) {
        // ...
        my_tile.sync();
    }
}
A group representing the current thread can be obtained from the this_thread function:
thread_block_tile<1> this_thread();
The following memcpy_async API uses a thread_group to copy an int element from source to destination:

#include <cooperative_groups.h>
#include <cooperative_groups/memcpy_async.h>

cooperative_groups::memcpy_async(cooperative_groups::this_thread(), dest, src, sizeof(int));
More detailed examples of using this_thread to perform asynchronous copies can be found in
the Single-Stage Asynchronous Data Copies using cuda::pipeline and Multi-Stage Asynchronous Data
Copies using cuda::pipeline sections.
In CUDA’s SIMT architecture, at the hardware level the multiprocessor executes threads in groups of
32 called warps. If there exists a data-dependent conditional branch in the application code such that
threads within a warp diverge, then the warp serially executes each branch disabling threads not on
that path. The threads that remain active on the path are referred to as coalesced. Cooperative Groups
has functionality to discover, and create, a group containing all coalesced threads.
Constructing the group handle via coalesced_threads() is opportunistic. It returns the set of active
threads at that point in time, and makes no guarantee about which threads are returned (as long
as they are active) or that they will stay coalesced throughout execution (they will be brought back
together for the execution of a collective but can diverge again afterwards).
class coalesced_group;
Constructed via:
coalesced_group active = coalesced_threads();
Commonly developers need to work with the current active set of threads. No assumption is made
about the threads that are present, and instead developers work with the threads that happen to be
there. This is seen in the following “aggregating atomic increment across threads in a warp” example
(written using the correct CUDA 9.0 set of intrinsics):
__device__ int atomicAggInc(int *p) {
unsigned int writemask = __activemask();
unsigned int total = __popc(writemask);
unsigned int prefix = __popc(writemask & __lanemask_lt());
    // Find the lowest-numbered active lane
int elected_lane = __ffs(writemask) - 1;
int base_offset = 0;
if (prefix == 0) {
base_offset = atomicAdd(p, total);
}
    base_offset = __shfl_sync(writemask, base_offset, elected_lane);
    int thread_offset = prefix + base_offset;
    return thread_offset;
}
11.5.1. tiled_partition
template <unsigned int Size, typename ParentT>
thread_block_tile<Size, ParentT> tiled_partition(const ParentT& g);
The tiled_partition method is a collective operation that partitions the parent group into a one-
dimensional, row-major tiling of subgroups. A total of (size(parent)/Size) subgroups will be created,
therefore the parent group size must be evenly divisible by Size. The allowed parent groups are
thread_block or thread_block_tile.
The implementation may cause the calling thread to wait until all the members of the parent group have
invoked the operation before resuming execution. Functionality is limited to native hardware sizes,
1/2/4/8/16/32 and the cg::size(parent) must be greater than the Size parameter. The templated
version of tiled_partition supports 64/128/256/512 sizes as well, but some additional steps are
required on Compute Capability 7.5 or lower, refer to Thread Block Tile for details.
Codegen Requirements: Compute Capability 5.0 minimum, C++11 for sizes larger than 32
Example:
/// The following code will create a 32-thread tile
thread_block block = this_thread_block();
thread_block_tile<32> tile32 = tiled_partition<32>(block);

We can partition each of these groups into even smaller groups, each of size 4 threads:

auto tile4 = tiled_partition<4>(tile32);
// or using a general group
// thread_group tile4 = tiled_partition(tile32, 4);
If, for instance, we were to then include the following line of code:

if (tile4.thread_rank()==0) printf("Hello from tile4 rank 0\n");

then the statement would be printed by every fourth thread in the block: the threads of rank 0 in each
tile4 group, which correspond to those threads with ranks 0,4,8,12,etc. in the block group.
11.5.2. labeled_partition
template <typename Label>
coalesced_group labeled_partition(const coalesced_group& g, Label label);
The labeled_partition method is a collective operation that partitions the parent group into one-
dimensional subgroups within which the threads are coalesced. The implementation will evaluate a
condition label and assign threads that have the same value for label into the same group.
Label can be any integral type.
The implementation may cause the calling thread to wait until all the members of the parent group
have invoked the operation before resuming execution.
Note: This functionality is still being evaluated and may slightly change in the future.
Codegen Requirements: Compute Capability 7.0 minimum, C++11
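For illustration only (the label computation below is an assumption, not from this guide), threads can be bucketed by an integer label so that each resulting coalesced_group contains exactly the threads that computed the same value:

// Illustrative sketch: bucket the active threads by (value % 4).
__device__ void partition_by_label(const int *values)
{
    namespace cg = cooperative_groups;
    cg::coalesced_group active = cg::coalesced_threads();
    int label = values[threadIdx.x] % 4;
    cg::coalesced_group bucket = cg::labeled_partition(active, label);
    // All threads in `bucket` computed the same label value.
}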
11.5.3. binary_partition
coalesced_group binary_partition(const coalesced_group& g, bool pred);
The binary_partition() method is a collective operation that partitions the parent group into one-
dimensional subgroups within which the threads are coalesced. The implementation will evaluate a
predicate and assign threads that have the same value into the same group. This is a specialized form
of labeled_partition(), where the label can only be 0 or 1.
The implementation may cause the calling thread to wait until all the members of the parent group
have invoked the operation before resuming execution.
Note: This functionality is still being evaluated and may slightly change in the future.
Codegen Requirements: Compute Capability 7.0 minimum, C++11
Example:
/// This example divides a 32-sized tile into a group with odd
/// numbers and a group with even numbers
__global__ void oddEven(int *inputArr) {
    auto block = cg::this_thread_block();
    auto tile32 = cg::tiled_partition<32>(block);

    // inputArr contains random integers
    int elem = inputArr[block.thread_rank()];
    // after this, tile32 is split into 2 groups:
    // a subtile where elem&1 is true and one where it is false
    auto subtile = cg::binary_partition(tile32, (elem & 1));
}
11.6.1. Synchronization
11.6.1.1 barrier_arrive and barrier_wait
T::arrival_token T::barrier_arrive();
void T::barrier_wait(T::arrival_token&&);
auto token = cluster.barrier_arrive(); // Let other blocks know this block is running and data was initialized

// Map data in shared memory from the next block in the cluster
int *dsmem = cluster.map_shared_rank(&array[0], (cluster.block_rank() + 1) % cluster.num_blocks());

// Make sure all other blocks in the cluster are running and initialized shared data before accessing dsmem
cluster.barrier_wait(std::move(token));
11.6.1.2 sync
sync synchronizes the threads named in the group. Group type T can be any of the existing group types, as all of them support synchronization. It is available as a member function of every group type, or as a free function taking a group as a parameter. If the group is a grid_group or a multi_grid_group, the kernel must have been launched using the appropriate cooperative launch APIs. Equivalent to T.barrier_wait(T.barrier_arrive()).
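As a small hedged sketch, the member-function and free-function forms synchronize the same group:

// Illustrative sketch: the two forms below are equivalent.
__device__ void sync_both_ways()
{
    namespace cg = cooperative_groups;
    cg::thread_block block = cg::this_thread_block();
    block.sync();    // member function
    cg::sync(block); // free function taking the group as a parameter
}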
memcpy_async is a group-wide collective memcpy that utilizes hardware accelerated support for non-
blocking memory transactions from global to shared memory. Given a set of threads named in the
group, memcpy_async will move a specified amount of bytes or elements of the input type through a
single pipeline stage. Additionally for achieving best performance when using the memcpy_async API,
an alignment of 16 bytes for both shared memory and global memory is required. It is important to
note that while this is a memcpy in the general case, it is only asynchronous if the source is global
memory and the destination is shared memory and both can be addressed with 16, 8, or 4 byte align-
ments. Asynchronously copied data should only be read following a call to wait or wait_prior which
signals that the corresponding stage has completed moving data to shared memory.
Having to wait on all outstanding requests can lose some flexibility (but gain simplicity). In order to efficiently overlap data transfer and execution, it is important to be able to kick off an N+1 memcpy_async request while waiting on and operating on request N. To do so, use memcpy_async and wait on it using the collective stage-based wait_prior API. See wait and wait_prior for more details.
Usage 1
template <typename TyGroup, typename TyElem, typename TyShape>
void memcpy_async(
const TyGroup &group,
TyElem *__restrict__ _dst,
const TyElem *__restrict__ _src,
const TyShape &shape
);
Usage 2

template <typename TyGroup, typename TyElem, typename TyDstLayout, typename TySrcLayout>
void memcpy_async(
const TyGroup &group,
TyElem *__restrict__ dst,
const TyDstLayout &dstLayout,
const TyElem *__restrict__ src,
const TySrcLayout &srcLayout
);
namespace cg = cooperative_groups;
size_t copy_count;
size_t index = 0;
while (index < elementsPerThreadBlock) {
    cg::memcpy_async(tb, local_smem, elementsInShared, global_data + index, elementsPerThreadBlock - index);
wait and wait_prior collectives allow waiting for memcpy_async copies to complete. wait blocks the calling threads until all previous copies are done. wait_prior allows the latest NumStages copies to still be in flight and waits for all the previous requests. So with N total copies requested, it waits until the first N-NumStages are done, while the last NumStages might still be in progress. Both wait and wait_prior will synchronize the named group.
Codegen Requirements: Compute Capability 5.0 minimum, Compute Capability 8.0 for asynchronicity,
C++11
cooperative_groups/memcpy_async.h header needs to be included.
Example:
/// This example streams elementsPerThreadBlock worth of data from global memory
/// into a limited sized shared memory (elementsInShared) block to operate on in
/// multiple (two) stages. As stage N is kicked off, we can wait on and operate on stage N-1.
#include <cooperative_groups.h>
#include <cooperative_groups/memcpy_async.h>

namespace cg = cooperative_groups;

        // Calculate the amount of data that was actually copied, for the next iteration.
        copy_count = min(elementsInShared, elementsPerThreadBlock - index);
        index += copy_count;
reduce performs a reduction operation on the data provided by each thread named in the group passed in. This takes advantage of hardware acceleration (on devices of compute capability 8.0 and higher) for the arithmetic add, min, or max operations and the logical AND, OR, or XOR, as well as providing a software fallback on older generation hardware. Only 4B types are accelerated by hardware.
group: Valid group types are coalesced_group and thread_block_tile.
val: Any type that satisfies the below requirements:
▶ Qualifies as trivially copyable i.e. is_trivially_copyable<TyArg>::value == true
▶ sizeof(T) <= 32 for coalesced_group and tiles of size lower or equal 32, sizeof(T) <= 8
for larger tiles
▶ Has suitable arithmetic or comparative operators for the given function object.
Note: Different threads in the group can pass different values for this argument.
op: Valid function objects that will provide hardware acceleration with integral types are plus(),
less(), greater(), bit_and(), bit_xor(), bit_or(). These must be constructed, hence
the TyVal template argument is required, i.e. plus<int>(). Reduce also supports lambdas and other
function objects that can be invoked using operator()
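A minimal hedged sketch (the per-thread value is illustrative, not from this guide): computing a tile-wide maximum with cg::reduce and the cg::greater function object:

// Illustrative sketch: every thread of the tile receives the tile-wide maximum.
__device__ int tile_max_of_rank()
{
    namespace cg = cooperative_groups;
    auto tile = cg::tiled_partition<32>(cg::this_thread_block());
    int my_val = (int)tile.thread_rank();
    return cg::reduce(tile, my_val, cg::greater<int>()); // 31 for a full tile
}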
Asynchronous reduce
The *_async variants of the API calculate the result asynchronously and have one of the participating threads either store it to or atomically update the specified destination, instead of returning it to each thread. To observe the effect of these asynchronous calls, the calling group of threads, or a larger group containing it, needs to be synchronized.
▶ In case of the atomic store or update variant, atomic argument can be either of cuda::atomic
or cuda::atomic_ref available in CUDA C++ Standard Library. This variant of the API is available
only on platforms and devices, where these types are supported by the CUDA C++ Standard
Library. Result of the reduction is used to atomically update the atomic according to the specified
op, e.g. the result is atomically added to the atomic in case of cg::plus(). Type held by the
atomic must match the type of TyArg. Scope of the atomic must include all the threads in
the group and if multiple groups are using the same atomic concurrently, scope must include all
threads in all groups using it. Atomic update is performed with relaxed memory ordering.
▶ In case of the pointer store variant, result of the reduction will be weakly stored into the dst
pointer.
Codegen Requirements: Compute Capability 5.0 minimum, Compute Capability 8.0 for HW accelera-
tion, C++11.
cooperative_groups∕reduce.h header needs to be included.
Example of approximate standard deviation for integer vector:
#include <cooperative_groups.h>
#include <cooperative_groups∕reduce.h>
namespace cg = cooperative_groups;
/// Calculate approximate standard deviation of integers in vec
__device__ int std_dev(const cg::thread_block_tile<32>& tile, int *vec, int length) {
    int thread_sum = 0;

    // calculate the average first
    for (int i = tile.thread_rank(); i < length; i += tile.num_threads()) {
        thread_sum += vec[i];
    }
    // cg::plus<int> allows cg::reduce() to know it can use hardware acceleration for addition
    int avg = cg::reduce(tile, thread_sum, cg::plus<int>()) / length;

    int thread_diffs_sum = 0;
    for (int i = tile.thread_rank(); i < length; i += tile.num_threads()) {
        int diff = vec[i] - avg;
        thread_diffs_sum += diff * diff;
    }

    // reduce the squared differences and take the square root
    float diff_sum = static_cast<float>(cg::reduce(tile, thread_diffs_sum, cg::plus<int>())) / length;

    return static_cast<int>(sqrtf(diff_sum));
}
/// The following example accepts input in *A and outputs a result into *sum
/// It spreads the data equally within the block
__device__ void block_reduce(const int* A, int count, cuda::atomic<int, cuda::thread_scope_block>& total_sum) {
    auto block = cg::this_thread_block();
    auto tile = cg::tiled_partition<32>(block);
    int thread_sum = 0;

    // Stride loop over all values, each thread accumulates its part of the array.
    for (int i = block.thread_rank(); i < count; i += block.size()) {
        thread_sum += A[i];
    }

    // reduce thread sums across the tile, add the result to the atomic
    // cg::plus<int> allows cg::reduce() to know it can use hardware acceleration for addition
    cg::reduce_update_async(tile, total_sum, thread_sum, cg::plus<int>());

    // synchronize the block, to make sure all asynchronous reductions are complete
    block.sync();
}
Below are the prototypes of function objects for some of the basic operations that can be done with
reduce
namespace cooperative_groups {
    template <typename Ty>
    struct cg::plus;

    template <typename Ty>
    struct cg::less;

    template <typename Ty>
    struct cg::greater;

    template <typename Ty>
    struct cg::bit_and;

    template <typename Ty>
    struct cg::bit_xor;

    template <typename Ty>
    struct cg::bit_or;
}
Reduce is limited to the information available to the implementation at compile time. Thus in order to
make use of intrinsics introduced in CC 8.0, the cg:: namespace exposes several functional objects
that mirror the hardware. These objects appear similar to those presented in the C++ STL, with the
exception of less∕greater. The reason for any difference from the STL is that these function objects
are designed to actually mirror the operation of the hardware intrinsics.
Functional description:
▶ cg::plus: Accepts two values and returns the sum of both using operator+.
▶ cg::less: Accepts two values and returns the lesser using operator<. This differs in that the
lower value is returned rather than a Boolean.
▶ cg::greater: Accepts two values and returns the greater using operator<. This differs in that
the greater value is returned rather than a Boolean.
▶ cg::bit_and: Accepts two values and returns the result of operator&.
▶ cg::bit_xor: Accepts two values and returns the result of operator^.
▶ cg::bit_or: Accepts two values and returns the result of operator|.
Example:
{
    // cg::plus<int> is specialized within cg::reduce and calls __reduce_add_sync(...) on CC 8.0+
    cg::reduce(tile, (int)val, cg::plus<int>());

    // While individual components of a vector are supported, reduce will not use hardware intrinsics for the following
    // It will also be necessary to define a corresponding operator for vector and any custom types that may be used
    // Finally lambdas and other function objects cannot be inspected for dispatch
    // and will instead perform shuffle based reductions using the provided function object.
inclusive_scan and exclusive_scan perform a scan operation on the data provided by each thread named in the group passed in. In the case of exclusive_scan, the result for each thread is a reduction of the data from threads with a lower thread_rank than that thread. The inclusive_scan result also includes the calling thread's data in the reduction.
group: Valid group types are coalesced_group and thread_block_tile.
val: Any type that satisfies the below requirements:
▶ Qualifies as trivially copyable i.e. is_trivially_copyable<TyArg>::value == true
▶ sizeof(T) <= 32 for coalesced_group and tiles of size lower or equal 32, sizeof(T) <= 8
for larger tiles
▶ Has suitable arithmetic or comparative operators for the given function object.
Note: Different threads in the group can pass different values for this argument.
op: Function objects defined for convenience are plus(), less(), greater(), bit_and(), bit_xor(), bit_or(), described in Reduce Operators. These must be constructed, hence the TyVal template argument is required, i.e. plus<int>(). inclusive_scan and exclusive_scan also support lambdas and other function objects that can be invoked using operator(). Overloads without this argument use cg::plus<TyVal>().
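A minimal hedged sketch (the contribution value is illustrative, not from this guide): a tile-wide inclusive prefix sum where every thread contributes 1, so each thread receives thread_rank() + 1:

// Illustrative sketch: per-tile inclusive prefix sum with the default cg::plus.
__device__ unsigned int tile_prefix_count()
{
    namespace cg = cooperative_groups;
    auto tile = cg::tiled_partition<32>(cg::this_thread_block());
    unsigned int contribution = 1;
    return cg::inclusive_scan(tile, contribution); // thread_rank() + 1
}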
Scan update
template <typename TyGroup, typename TyAtomic, typename TyVal, typename TyFn>
auto inclusive_scan_update(const TyGroup& group, TyAtomic& atomic, TyVal&& val, TyFn&& op) -> decltype(op(val, val));
*_scan_update collectives take an additional argument atomic that can be either of cuda::atomic
or cuda::atomic_ref available in CUDA C++ Standard Library. These variants of the API are available
only on platforms and devices, where these types are supported by the CUDA C++ Standard Library.
These variants will perform an update to the atomic according to op with value of the sum of input
values of all threads in the group. Previous value of the atomic will be combined with the result of
scan by each thread and returned. Type held by the atomic must match the type of TyVal. Scope of
the atomic must include all the threads in the group and if multiple groups are using the same atomic
concurrently, scope must include all threads in all groups using it. Atomic update is performed with
relaxed memory ordering.
Following pseudocode illustrates how the update variant of scan works:
/*
inclusive_scan_update behaves as the following block,
except both reduce and inclusive_scan are calculated simultaneously.

auto total = reduce(group, val, op);
TyVal old;
if (group.thread_rank() == selected_thread) {
    atomically {
        old = atomic.load();
        atomic.store(op(old, total));
    }
}
old = group.shfl(old, selected_thread);
return op(inclusive_scan(group, val, op), old);
*/
// put data from input into output only if it passes test_fn predicate

    // scan over the needs of each thread, result for each thread is an offset
    // of that thread's part of the buffer. buffer_used is atomically updated with
    // the sum of all threads' inputs, to correctly offset other tiles' allocations
    int buf_offset =
        cg::exclusive_scan_update(tile, buffer_used, buf_needed);

    // each thread fills its own part of the buffer with thread specific data
    for (int i = 0 ; i < buf_needed ; ++i) {
        buffer[buf_offset + i] = my_thread_data(i);
    }

    block.sync();
    // buffer_used now holds total amount of memory allocated
    // buffer is {0, 0, 1, 0, 0, 1 ...};
}
invoke_one selects a single arbitrary thread from the calling group and uses that thread to call the
supplied invocable fn with the supplied arguments args. In case of invoke_one_broadcast the
result of the call is also distributed to all threads in the group and returned from this collective.
The calling group may be synchronized with the selected thread before and/or after it calls the supplied invocable. This means that communication within the calling group is not allowed inside the body of the supplied invocable; otherwise forward progress is not guaranteed. Communication with threads outside of the calling group is allowed in the body of the supplied invocable. The thread selection mechanism is not guaranteed to be deterministic.
On devices with Compute Capability 9.0 or higher hardware acceleration might be used to select the
thread when called with explicit group types.
group: All group types are valid for invoke_one, coalesced_group and thread_block_tile are
valid for invoke_one_broadcast.
fn: Function or object that can be invoked using operator().
args: Parameter pack of types matching types of parameters of the supplied invocable fn.
In case of invoke_one_broadcast the return type of the supplied invocable fn must satisfy the
below requirements:
▶ Qualifies as trivially copyable i.e. is_trivially_copyable<T>::value == true
▶ sizeof(T) <= 32 for coalesced_group and tiles of size lower or equal 32, sizeof(T) <= 8
for larger tiles
Codegen Requirements: Compute Capability 5.0 minimum, Compute Capability 9.0 for hardware ac-
celeration, C++11.
Aggregated atomic example from Discovery pattern section re-written to use in-
voke_one_broadcast:
#include <cooperative_groups.h>
#include <cuda/atomic>
namespace cg = cooperative_groups;
template<cuda::thread_scope Scope>
__device__ unsigned int atomicAddOneRelaxed(cuda::atomic<unsigned int, Scope>& atomic) {
auto g = cg::coalesced_threads();
auto prev = cg::invoke_one_broadcast(g, [&] () {
return atomic.fetch_add(g.num_threads(), cuda::memory_order_relaxed);
});
return prev + g.thread_rank();
}
And when launching the kernel it is necessary to use, instead of the <<<...>>> execution configu-
ration syntax, the cudaLaunchCooperativeKernel CUDA runtime launch API or the CUDA driver
equivalent.
Example:
To guarantee co-residency of the thread blocks on the GPU, the number of blocks launched needs to
be carefully considered. For example, as many blocks as there are SMs can be launched as follows:
int dev = 0;
cudaDeviceProp deviceProp;
cudaGetDeviceProperties(&deviceProp, dev);
// initialize, then launch
cudaLaunchCooperativeKernel((void*)my_kernel, deviceProp.multiProcessorCount, numThreads, args);
Alternatively, you can maximize the exposed parallelism by calculating how many blocks can fit simul-
taneously per-SM using the occupancy calculator as follows:
/// This will launch a grid that can maximally fill the GPU, on the default stream with kernel arguments
int numBlocksPerSm = 0;
// Number of threads my_kernel will be launched with
int numThreads = 128;
cudaDeviceProp deviceProp;
cudaGetDeviceProperties(&deviceProp, dev);
cudaOccupancyMaxActiveBlocksPerMultiprocessor(&numBlocksPerSm, my_kernel, numThreads, 0);
// launch
void *kernelArgs[] = { /* add kernel args */ };
dim3 dimBlock(numThreads, 1, 1);
dim3 dimGrid(deviceProp.multiProcessorCount*numBlocksPerSm, 1, 1);
cudaLaunchCooperativeKernel((void*)my_kernel, dimGrid, dimBlock, kernelArgs);
It is good practice to first ensure the device supports cooperative launches by querying the device
attribute cudaDevAttrCooperativeLaunch:
int dev = 0;
int supportsCoopLaunch = 0;
cudaDeviceGetAttribute(&supportsCoopLaunch, cudaDevAttrCooperativeLaunch, dev);
which will set supportsCoopLaunch to 1 if the property is supported on device 0. Only devices with
compute capability of 6.0 and higher are supported. In addition, you need to be running on either of
these:
▶ The Linux platform without MPS
▶ The Linux platform with MPS and on a device with compute capability 7.0 or higher
▶ The latest Windows platform
▶ All devices being targeted by this launch must be of the same compute capability - major and
minor versions.
▶ The block size, grid size and amount of shared memory per grid must be the same across all
devices. Note that this means the maximum number of blocks that can be launched per device
will be limited by the device with the least number of SMs.
▶ Any user defined __device__, __constant__ or __managed__ device global variables present
in the module that owns the CUfunction being launched are independently instantiated on every
device. The user is responsible for initializing such device global variables appropriately.
Deprecation Notice: cudaLaunchCooperativeKernelMultiDevice has been deprecated in CUDA
11.3 for all devices. Example of an alternative approach can be found in the multi device conjugate
gradient sample.
Optimal performance in multi-device synchronization is achieved by enabling peer access via cuCtx-
EnablePeerAccess or cudaDeviceEnablePeerAccess for all participating devices.
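A hedged sketch of enabling peer access among all participating devices is shown below; error checking is omitted, and each pair's support is verified with cudaDeviceCanAccessPeer:

// Illustrative sketch: enable bidirectional peer access between every device pair.
int numGpus = 0;
cudaGetDeviceCount(&numGpus);
for (int i = 0; i < numGpus; ++i) {
    cudaSetDevice(i);
    for (int j = 0; j < numGpus; ++j) {
        if (i == j) continue;
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, i, j);
        if (canAccess)
            cudaDeviceEnablePeerAccess(j, 0); // device i may now access device j's memory
    }
}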
The launch parameters should be defined using an array of structs (one per device), and launched with
cudaLaunchCooperativeKernelMultiDevice
Example:
cudaDeviceProp deviceProp;
cudaGetDeviceCount(&numGpus);

    cudaStreamCreate(&streams[i]);

    // Loop over other devices and cudaDeviceEnablePeerAccess to get a faster barrier implementation
}

// Since all devices must be of the same compute capability and have the same launch configuration
Also, as with grid-wide synchronization, the resulting device code looks very similar:
multi_grid_group multi_grid = this_multi_grid();
multi_grid.sync();
However, the code needs to be compiled in separate compilation by passing -rdc=true to nvcc.
It is good practice to first ensure the device supports multi-device cooperative launches by querying
the device attribute cudaDevAttrCooperativeMultiDeviceLaunch:
int dev = 0;
int supportsMdCoopLaunch = 0;
cudaDeviceGetAttribute(&supportsMdCoopLaunch, cudaDevAttrCooperativeMultiDeviceLaunch,
,→ dev);
which will set supportsMdCoopLaunch to 1 if the property is supported on device 0. Only devices
with compute capability of 6.0 and higher are supported. In addition, you need to be running on the
Linux platform (without MPS) or on current versions of Windows with the device in TCC mode.
See the cudaLaunchCooperativeKernelMultiDevice API documentation for more information.
12.1. Introduction
12.1.1. Overview
Dynamic Parallelism is an extension to the CUDA programming model enabling a CUDA kernel to cre-
ate and synchronize with new work directly on the GPU. The creation of parallelism dynamically at
whichever point in a program that it is needed offers exciting capabilities.
The ability to create work directly from the GPU can reduce the need to transfer execution control
and data between host and device, as launch configuration decisions can now be made at runtime by
threads executing on the device. Additionally, data-dependent parallel work can be generated inline
within a kernel at run-time, taking advantage of the GPU’s hardware schedulers and load balancers
dynamically and adapting in response to data-driven decisions or workloads. Algorithms and pro-
gramming patterns that had previously required modifications to eliminate recursion, irregular loop structure, or other constructs that do not fit a flat, single level of parallelism may be expressed more transparently.
This document describes the extended capabilities of CUDA which enable Dynamic Parallelism, includ-
ing the modifications and additions to the CUDA programming model necessary to take advantage of
these, as well as guidelines and best practices for exploiting this added capacity.
Dynamic Parallelism is only supported by devices of compute capability 3.5 and higher.
12.1.2. Glossary
Definitions for terms used in this guide.
Grid
A Grid is a collection of Threads. Threads in a Grid execute a Kernel Function and are divided into
Thread Blocks.
Thread Block
A Thread Block is a group of threads which execute on the same multiprocessor (SM). Threads
within a Thread Block have access to shared memory and can be explicitly synchronized.
Kernel Function
A Kernel Function is an implicitly parallel subroutine that executes under the CUDA execution and
memory model for every Thread in a Grid.
Host
The Host refers to the execution environment that initially invoked CUDA. Typically the thread
running on a system’s CPU processor.
Parent
A Parent Thread, Thread Block, or Grid is one that has launched new grid(s), the Child Grid(s). The
Parent is not considered completed until all of its launched Child Grids have also completed.
Child
A Child thread, block, or grid is one that has been launched by a Parent grid. A Child grid must complete before the Parent Thread, Thread Block, or Grid is considered complete.
Thread Block Scope
Objects with Thread Block Scope have the lifetime of a single Thread Block. They only have de-
fined behavior when operated on by Threads in the Thread Block that created the object and are
destroyed when the Thread Block that created them is complete.
Device Runtime
The Device Runtime refers to the runtime system and APIs available to enable Kernel Functions
to use Dynamic Parallelism.
A device thread that configures and launches a new grid belongs to the parent grid, and the grid
created by the invocation is a child grid.
The invocation and completion of child grids is properly nested, meaning that the parent grid is not
considered complete until all child grids created by its threads have completed, and the runtime guar-
antees an implicit synchronization between the parent and child.
On both host and device, the CUDA runtime offers an API for launching kernels and for tracking depen-
dencies between launches via streams and events. On the host system, the state of launches and the
CUDA primitives referencing streams and events are shared by all threads within a process; however
processes execute independently and may not share CUDA objects.
On the device, launched kernels and CUDA objects are visible to all threads in a grid. This means, for
example, that a stream may be created by one thread and used by any other thread in the grid.
12.2.1.3 Synchronization
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cud-
aDeviceSynchronize() in device code) is deprecated in CUDA 11.6 and removed for com-
pute_90+ compilation. For compute capability < 9.0, compile-time opt-in by specifying
-DCUDA_FORCE_CDP1_IF_SUPPORTED is required to continue using cudaDeviceSynchronize()
in device code. Note that this is slated for full removal in a future CUDA release.
CUDA runtime operations from any thread, including kernel launches, are visible across all the threads
in a grid. This means that an invoking thread in the parent grid may perform synchronization to control
the launch order of grids launched by any thread in the grid on streams created by any thread in the
grid. Execution of a grid is not considered complete until all launches by all threads in the grid have
completed. If all threads in a grid exit before all child launches have completed, an implicit synchro-
nization operation will automatically be triggered.
CUDA Streams and Events allow control over dependencies between grid launches: grids launched into
the same stream execute in-order, and events may be used to create dependencies between streams.
Streams and events created on the device serve this exact same purpose.
Streams and events created within a grid exist within grid scope, but have undefined behavior when
used outside of the grid where they were created. As described above, all work launched by a grid
is implicitly synchronized when the grid exits; work launched into streams is included in this, with all
dependencies resolved appropriately. The behavior of operations on a stream that has been modified
outside of grid scope is undefined.
Streams and events created on the host have undefined behavior when used within any kernel, just as
streams and events created by a parent grid have undefined behavior if used within a child grid.
The ordering of kernel launches from the device runtime follows CUDA Stream ordering semantics.
Within a grid, all kernel launches into the same stream (with the exception of the fire-and-forget
stream discussed later) are executed in-order. With multiple threads in the same grid launching into
the same stream, the ordering within the stream is dependent on the thread scheduling within the
grid, which may be controlled with synchronization primitives such as __syncthreads().
Note that while named streams are shared by all threads within a grid, the implicit NULL stream is only
shared by all threads within a thread block. If multiple threads in a thread block launch into the implicit
stream, then these launches will be executed in-order. If multiple threads in different thread blocks
launch into the implicit stream, then these launches may be executed concurrently. If concurrency is
desired for launches by multiple threads within a thread block, explicit named streams should be used.
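A hedged sketch (the kernel names are illustrative assumptions): launching child work into an explicitly created, non-blocking device-side stream so that launches from different threads are not serialized through the implicit NULL stream:

__global__ void child_work(int *data);

// Illustrative sketch: each parent thread launches into its own named stream.
__global__ void parent(int *data)
{
    cudaStream_t s;
    // Device-side streams must be created with the non-blocking flag.
    cudaStreamCreateWithFlags(&s, cudaStreamNonBlocking);
    child_work<<<1, 128, 0, s>>>(data);
    cudaStreamDestroy(s); // the child still completes before the parent grid is considered done
}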
Dynamic Parallelism enables concurrency to be expressed more easily within a program; however, the
device runtime introduces no new concurrency guarantees within the CUDA execution model. There
is no guarantee of concurrent execution between any number of different thread blocks on a device.
The lack of concurrency guarantee extends to a parent grid and their child grids. When a parent grid
launches a child grid, the child may start to execute once stream dependencies are satisfied and hard-
ware resources are available to host the child, but is not guaranteed to begin execution until the parent
grid reaches an implicit synchronization point.
While concurrency will often easily be achieved, it may vary as a function of device configuration, ap-
plication workload, and runtime scheduling. It is therefore unsafe to depend upon any concurrency
between different thread blocks.
There is no multi-GPU support from the device runtime; the device runtime is only capable of operating
on the device upon which it is currently executing. It is permitted, however, to query properties for any
CUDA capable device in the system.
Parent and child grids have coherent access to global memory, with weak consistency guarantees
between child and parent. There is only one point of time in the execution of a child grid when its view
of memory is fully consistent with the parent thread: at the point when the child grid is invoked by the
parent.
All global memory operations in the parent thread prior to the child grid’s invocation are visible to the
child grid. With the removal of cudaDeviceSynchronize(), it is no longer possible to access the
modifications made by the threads in the child grid from the parent grid. The only way to access the
modifications made by the threads in the child grid before the parent grid exits is via a kernel launched
into the cudaStreamTailLaunch stream.
In the following example, the child grid executing child_launch is only guaranteed to see the modi-
fications to data made before the child grid was launched. Since thread 0 of the parent is performing
the launch, the child will be consistent with the memory seen by thread 0 of the parent. Due to the
first __syncthreads() call, the child will see data[0]=0, data[1]=1, …, data[255]=255 (without
the __syncthreads() call, only data[0]=0 would be guaranteed to be seen by the child). The child
grid is only guaranteed to return at an implicit synchronization. This means that the modifications
made by the threads in the child grid are never guaranteed to become available to the parent grid.
To access modifications made by child_launch, a tail_launch kernel is launched into the cudaS-
treamTailLaunch stream.
__global__ void tail_launch(int *data) {
    data[threadIdx.x] = data[threadIdx.x]+1;
}

__global__ void child_launch(int *data) {
    data[threadIdx.x] = data[threadIdx.x]+1;
}

__global__ void parent_launch(int *data) {
    data[threadIdx.x] = threadIdx.x;

    __syncthreads();

    if (threadIdx.x == 0) {
        child_launch<<< 1, 256 >>>(data);
        tail_launch<<< 1, 256, 0, cudaStreamTailLaunch >>>(data);
    }
}
Zero-copy system memory has identical coherence and consistency guarantees to global memory, and
follows the semantics detailed above. A kernel may not allocate or free zero-copy memory, but may
use pointers to zero-copy passed in from the host program.
Constants are immutable and may not be modified from the device, even between parent and child
launches. That is to say, the value of all __constant__ variables must be set from the host prior to
launch. Constant memory is inherited automatically by all child kernels from their respective parents.
Taking the address of a constant memory object from within a kernel thread has the same semantics
as for all CUDA programs, and passing that pointer from parent to child or from a child to parent is
naturally supported.
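As a brief sketch (the names here are illustrative, not from the original text), a parent may pass the address of a __constant__ object to a child, which then reads through the pointer as read-only data:
__constant__ int coeffs[4];
__device__ int out[32];

__global__ void child(const int *c) {
    out[threadIdx.x] = c[threadIdx.x % 4];   // read-only access through a pointer into constant space
}

__global__ void parent() {
    if (threadIdx.x == 0)
        child<<< 1, 32 >>>(coeffs);          // the array decays to a pointer usable by the child
}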
Shared and Local memory is private to a thread block or thread, respectively, and is not visible or
coherent between parent and child. Behavior is undefined when an object in one of these locations is
referenced outside of the scope within which it belongs, and may cause an error.
The NVIDIA compiler will attempt to warn if it can detect that a pointer to local or shared memory
is being passed as an argument to a kernel launch. At runtime, the programmer may use the __is-
Global() intrinsic to determine whether a pointer references global memory and so may safely be
passed to a child launch.
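A hedged sketch of that runtime check (the helper and kernel names are hypothetical):
__global__ void child(int *data) {
    data[threadIdx.x] += 1;
}

__device__ void launch_if_global(int *ptr) {
    if (__isGlobal(ptr)) {
        child<<< 1, 32 >>>(ptr);   // safe: the pointer refers to global memory
    } else {
        // shared or local pointer: do not pass it to a child launch
    }
}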
Note that calls to cudaMemcpy*Async() or cudaMemset*Async() may invoke new child kernels on
the device in order to preserve stream semantics. As such, passing shared or local memory pointers
to these APIs is illegal and will return an error.
Local memory is private storage for an executing thread, and is not visible outside of that thread. It
is illegal to pass a pointer to local memory as a launch argument when launching a child kernel. The
result of dereferencing such a local memory pointer from a child will be undefined.
For example, the following is illegal, with undefined behavior if x_array is accessed by child_launch:
int x_array[10];       // Creates x_array in parent's local memory
child_launch<<< 1, 1 >>>(x_array);
It is sometimes difficult for a programmer to be aware of when a variable is placed into local memory by
the compiler. As a general rule, all storage passed to a child kernel should be allocated explicitly from
the global-memory heap, either with cudaMalloc(), new() or by declaring __device__ storage at
global scope. For example:
// Correct - "value" is global storage
__device__ int value;
__device__ void x() {
    value = 5;
    child<<< 1, 1 >>>(&value);
}
Writes to the global memory region over which a texture is mapped are incoherent with respect to
texture accesses. Coherence for texture memory is enforced at the invocation of a child grid and when
a child grid completes. This means that writes to memory prior to a child kernel launch are reflected
in texture memory accesses of the child. Similarly to Global Memory above, writes to memory by a
child are never guaranteed to be reflected in the texture memory accesses by a parent. The only way
to access the modifications made by the threads in the child grid before the parent grid exits is via a
kernel launched into the cudaStreamTailLaunch stream. Concurrent accesses by parent and child
may result in inconsistent data.
Kernels may be launched from the device using the standard CUDA <<< >>> syntax:
kernel_name<<< Dg, Db, Ns, S >>>([kernel arguments]);
▶ Dg is of type dim3 and specifies the dimensions and size of the grid
▶ Db is of type dim3 and specifies the dimensions and size of each thread block
▶ Ns is of type size_t and specifies the number of bytes of shared memory that is dynamically
allocated per thread block for this call in addition to statically allocated memory. Ns is an optional
argument that defaults to 0.
▶ S is of type cudaStream_t and specifies the stream associated with this call. The stream must
have been allocated in the same grid where the call is being made. S is an optional argument that
defaults to the NULL stream.
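For instance, a minimal sketch of a device-side launch that uses dynamic shared memory and a grid-created stream (the kernel names and sizes are illustrative, not from the original text):
__global__ void child(float *buf, int n) {
    extern __shared__ float tile[];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? buf[i] : 0.f;
    __syncthreads();
    if (i < n) buf[i] = tile[threadIdx.x] * 2.f;
}

__global__ void parent(float *buf, int n) {
    if (threadIdx.x == 0) {
        cudaStream_t s;
        cudaStreamCreateWithFlags(&s, cudaStreamNonBlocking);
        dim3 grid((n + 255) / 256), block(256);
        size_t smem = 256 * sizeof(float);           // Ns: dynamic shared memory per block
        child<<< grid, block, smem, s >>>(buf, n);   // S: a stream created within this grid
        cudaStreamDestroy(s);
    }
}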
Identical to host-side launches, all device-side kernel launches are asynchronous with respect to the
launching thread. That is to say, the <<<>>> launch command will return immediately and the launch-
ing thread will continue to execute until it hits an implicit launch-synchronization point (such as at a
kernel launched into the cudaStreamTailLaunch stream).
The child grid launch is posted to the device and will execute independently of the parent thread. The
child grid may begin execution at any time after launch, but is not guaranteed to begin execution until
the launching thread reaches an implicit launch-synchronization point.
All global device configuration settings (for example, shared memory and L1 cache size as returned
from cudaDeviceGetCacheConfig(), and device limits returned from cudaDeviceGetLimit()) will
be inherited from the parent. Likewise, device limits such as stack size will remain as-configured.
For host-launched kernels, per-kernel configurations set from the host will take precedence over the
global setting. These configurations will be used when the kernel is launched from the device as well.
It is not possible to reconfigure a kernel’s environment from the device.
12.3.1.2 Streams
Both named and unnamed (NULL) streams are available from the device runtime. Named streams
may be used by any thread within a grid, but stream handles may not be passed to other child/parent
kernels. In other words, a stream should be treated as private to the grid in which it is created.
Similar to host-side launch, work launched into separate streams may run concurrently, but actual
concurrency is not guaranteed. Programs that depend upon concurrency between child kernels are
not supported by the CUDA programming model and will have undefined behavior.
The host-side NULL stream’s cross-stream barrier semantic is not supported on the device (see below
for details). In order to retain semantic compatibility with the host runtime, all device streams must
be created using the cudaStreamCreateWithFlags() API, passing the cudaStreamNonBlocking
flag. The cudaStreamCreate() call is a host-runtime-only API and will fail to compile for the device.
As cudaStreamSynchronize() and cudaStreamQuery() are unsupported by the device runtime, a
kernel launched into the cudaStreamTailLaunch stream should be used instead when the applica-
tion needs to know that stream-launched child kernels have completed.
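A hedged sketch of that pattern (kernel names are illustrative): work is launched into a grid-created stream, and a kernel launched into the tail launch stream observes its completion because the tail launch waits for all other work launched by the grid:
__global__ void worker(int *data) {
    data[threadIdx.x] += 1;
}

__global__ void after_workers(int *data) {
    data[threadIdx.x] *= 2;   // guaranteed to see the worker's writes
}

__global__ void parent(int *data) {
    if (threadIdx.x == 0) {
        cudaStream_t s;
        cudaStreamCreateWithFlags(&s, cudaStreamNonBlocking);
        worker<<< 1, 256, 0, s >>>(data);
        cudaStreamDestroy(s);
        // No device-side cudaStreamSynchronize(): a tail launch takes its place.
        after_workers<<< 1, 256, 0, cudaStreamTailLaunch >>>(data);
    }
}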
Within a host program, the unnamed (NULL) stream has additional barrier synchronization semantics
with other streams (see Default Stream for details). The device runtime offers a single implicit, un-
named stream shared between all threads in a thread block, but as all named streams must be created
with the cudaStreamNonBlocking flag, work launched into the NULL stream will not insert an implicit
dependency on pending work in any other streams (including NULL streams of other thread blocks).
The fire-and-forget named stream (cudaStreamFireAndForget) allows the user to launch fire-and-
forget work with less boilerplate and without stream tracking overhead. It is functionally identical to,
but faster than, creating a new stream per launch, and launching into that stream.
Fire-and-forget launches are immediately scheduled for launch without any dependency on the com-
pletion of previously launched grids. No other grid launches can depend on the completion of a fire-
and-forget launch, except through the implicit synchronization at the end of the parent grid. Consequently, a tail launch, or the next grid in the parent grid's stream, will not launch before the parent grid's fire-and-forget work has completed.
// In this example, C2's launch will not wait for C1's completion
__global__ void P( ... ) {
    C1<<< ... , cudaStreamFireAndForget >>>( ... );
    C2<<< ... , cudaStreamFireAndForget >>>( ... );
}
The fire-and-forget stream cannot be used to record or wait on events. Attempting to do so re-
sults in cudaErrorInvalidValue. The fire-and-forget stream is not supported when compiled with
CUDA_FORCE_CDP1_IF_SUPPORTED defined. Fire-and-forget stream usage requires compilation to be
in 64-bit mode.
The tail launch named stream (cudaStreamTailLaunch) allows a grid to schedule a new grid for
launch after its completion. In most cases, a tail launch can be used to achieve the same functionality as a cudaDeviceSynchronize().
Each grid has its own tail launch stream. All non-tail-launch work launched by a grid is implicitly synchronized before the tail stream is kicked off. That is, a parent grid's tail launch does not launch until the parent grid, and all work launched by the parent grid into ordinary, per-thread, or fire-and-forget streams, has completed. If two grids are launched into the same grid's tail launch stream, the later grid does not launch until the earlier grid and all of its descendant work have completed.
// In this example, C2 will only launch after C1 completes.
__global__ void P( ... ) {
    C1<<< ... , cudaStreamTailLaunch >>>( ... );
    C2<<< ... , cudaStreamTailLaunch >>>( ... );
}
Grids launched into the tail launch stream will not launch until the completion of all work by the parent
grid, including all other grids (and their descendants) launched by the parent in all non-tail launched
streams, including work executed or launched after the tail launch.
The next grid in the parent grid’s stream will not be launched before a parent grid’s tail launch work
has completed. In other words, the tail launch stream behaves as if it were inserted between its parent
grid and the next grid in its parent grid’s stream.
// In this example, P2 will only launch after C completes.
__global__ void P1( ... ) {
    C<<< ... , cudaStreamTailLaunch >>>( ... );
}
__global__ void P2( ... ) { ... }
// On the host, P1 and P2 are launched into the same stream, in that order.
Each grid gets only one tail launch stream. To tail launch multiple concurrent grids, a kernel that performs the fire-and-forget launches can itself be tail launched, as in the example below.
// In this example, C1 and C2 will launch concurrently after P's completion
__global__ void T( ... ) {
    C1<<< ... , cudaStreamFireAndForget >>>( ... );
    C2<<< ... , cudaStreamFireAndForget >>>( ... );
}

__global__ void P( ... ) {
    T<<< ... , cudaStreamTailLaunch >>>( ... );
}
The tail launch stream cannot be used to record or wait on events. Attempting to do so re-
sults in cudaErrorInvalidValue. The tail launch stream is not supported when compiled with
CUDA_FORCE_CDP1_IF_SUPPORTED defined. Tail launch stream usage requires compilation to be in
64-bit mode.
12.3.1.3 Events
Only the inter-stream synchronization capabilities of CUDA events are supported. This means that cu-
daStreamWaitEvent() is supported, but cudaEventSynchronize(), cudaEventElapsedTime(),
and cudaEventQuery() are not. As cudaEventElapsedTime() is not supported, cudaEvents must
be created via cudaEventCreateWithFlags(), passing the cudaEventDisableTiming flag.
As with named streams, event objects may be shared between all threads within the grid which cre-
ated them but are local to that grid and may not be passed to other kernels. Event handles are not
guaranteed to be unique between grids, so using an event handle within a grid that did not create it
will result in undefined behavior.
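A hedged sketch of device-side event usage (kernel, stream, and event names are illustrative): an event records the state of one grid-created stream and a second stream waits on it:
__global__ void producer(int *d) { d[threadIdx.x] = threadIdx.x; }
__global__ void consumer(int *d) { d[threadIdx.x] += 1; }

__global__ void parent(int *d) {
    if (threadIdx.x == 0) {
        cudaStream_t s1, s2;
        cudaEvent_t ev;
        cudaStreamCreateWithFlags(&s1, cudaStreamNonBlocking);
        cudaStreamCreateWithFlags(&s2, cudaStreamNonBlocking);
        cudaEventCreateWithFlags(&ev, cudaEventDisableTiming);   // timing must be disabled

        producer<<< 1, 256, 0, s1 >>>(d);
        cudaEventRecord(ev, s1);
        cudaStreamWaitEvent(s2, ev, 0);          // work in s2 waits for the producer
        consumer<<< 1, 256, 0, s2 >>>(d);

        cudaEventDestroy(ev);
        cudaStreamDestroy(s1);
        cudaStreamDestroy(s2);
    }
}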
12.3.1.4 Synchronization
It is up to the program to perform sufficient inter-thread synchronization, for example via a CUDA
Event, if the calling thread is intended to synchronize with child grids invoked from other threads.
As it is not possible to explicitly synchronize child work from a parent thread, there is no way to guar-
antee that changes occuring in child grids are visible to threads within the parent grid.
Only the device on which a kernel is running will be controllable from that kernel. This means that
device APIs such as cudaSetDevice() are not supported by the device runtime. The active device
as seen from the GPU (returned from cudaGetDevice()) will have the same device number as seen
from the host system. The cudaDeviceGetAttribute() call may request information about another
device as this API allows specification of a device ID as a parameter of the call. Note that the catch-all
cudaGetDeviceProperties() API is not offered by the device runtime - properties must be queried
individually.
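For example, the following hedged sketch queries an attribute of another device from device code (the peer device index is an assumption for illustration):
__global__ void query_peer_sm_count(int *out) {
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        int peer = 1;        // assumed: a second CUDA capable device is present
        int smCount = 0;
        cudaDeviceGetAttribute(&smCount, cudaDevAttrMultiProcessorCount, peer);
        *out = smCount;
    }
}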
Memory declared at file scope with __device__ or __constant__ memory space specifiers behaves
identically when using the device runtime. All kernels may read or write device variables, whether the
kernel was initially launched by the host or device runtime. Equivalently, all kernels will have the same
view of __constant__s as declared at the module scope.
CUDA supports dynamically created texture and surface objects19 , where a texture reference may be
created on the host, passed to a kernel, used by that kernel, and then destroyed from the host. The
device runtime does not allow creation or destruction of texture or surface objects from within device
code, but texture and surface objects created from the host may be used and passed around freely on
the device. Regardless of where they are created, dynamically created texture objects are always valid
and may be passed to child kernels from a parent.
Note: The device runtime does not support legacy module-scope (i.e., Fermi-style) textures and sur-
faces within a kernel launched from the device. Module-scope (legacy) textures may be created from
the host and used in device code as for any kernel, but may only be used by a top-level kernel (i.e., the
one which is launched from the host).
19 Dynamically created texture and surface objects are an addition to the CUDA memory model introduced with CUDA 5.0.
In CUDA C++ shared memory can be declared either as a statically sized file-scope or function-scoped
variable, or as an extern variable with the size determined at runtime by the kernel’s caller via a launch
configuration argument. Both types of declarations are valid under the device runtime.
__global__ void permute(int n, int *data) {
    extern __shared__ int smem[];
    if (n <= 1)
        return;

    smem[threadIdx.x] = data[threadIdx.x];
    __syncthreads();

    permute_data(smem, n);
    __syncthreads();

    if (threadIdx.x == 0) {
        permute<<< 1, 256, n/2*sizeof(int) >>>(n/2, data);
        permute<<< 1, 256, n/2*sizeof(int) >>>(n/2, data+n/2);
    }
}
Device-side symbols (i.e., those marked __device__) may be referenced from within a kernel simply
via the & operator, as all global-scope device variables are in the kernel’s visible address space. This
also applies to __constant__ symbols, although in this case the pointer will reference read-only data.
Given that device-side symbols can be referenced directly, those CUDA runtime APIs which reference
symbols (e.g., cudaMemcpyToSymbol() or cudaGetSymbolAddress()) are redundant and hence not
supported by the device runtime. Note this implies that constant data cannot be altered from within
a running kernel, even ahead of a child kernel launch, as references to __constant__ space are read-
only.
As usual for the CUDA runtime, any function may return an error code. The last error code returned
is recorded and may be retrieved via the cudaGetLastError() call. Errors are recorded per-thread,
so that each thread can identify the most recent error that it has generated. The error code is of type
cudaError_t.
Similar to a host-side launch, device-side launches may fail for many reasons (invalid arguments, etc).
The user must call cudaGetLastError() to determine if a launch generated an error, however lack
of an error after launch does not imply the child kernel completed successfully.
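A hedged sketch of per-thread error checking after a device-side launch (kernel names are illustrative):
#include <cstdio>

__global__ void child(int *data) {
    data[threadIdx.x] = threadIdx.x;
}

__global__ void parent(int *data) {
    if (threadIdx.x == 0) {
        child<<< 1, 256 >>>(data);
        cudaError_t err = cudaGetLastError();
        if (err != cudaSuccess) {
            // The launch itself failed (for example, out of launch slots); a successful
            // launch still says nothing about whether the child completed successfully.
            printf("child launch failed: %d\n", (int)err);
        }
    }
}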
For device-side exceptions, e.g., access to an invalid address, an error in a child grid will be returned to
the host.
Kernel launch is a system-level mechanism exposed through the device runtime library, and as such
is available directly from PTX via the underlying cudaGetParameterBuffer() and cudaLaunchDe-
vice() APIs. It is permitted for a CUDA application to call these APIs itself, with the same require-
ments as for PTX. In both cases, the user is then responsible for correctly populating all necessary data
structures in the correct format according to specification. Backwards compatibility is guaranteed in
these data structures.
As with host-side launch, the device-side operator <<<>>> maps to underlying kernel launch APIs. This
is so that users targeting PTX will be able to enact a launch, and so that the compiler front-end can
translate <<<>>> into these calls.
The APIs for these launch functions are different to those of the CUDA Runtime API, and are defined
as follows:
extern __device__ cudaError_t cudaGetParameterBuffer(void **params);
extern __device__ cudaError_t cudaLaunchDevice(void *kernel,
                                               void *params, dim3 gridDim,
                                               dim3 blockDim,
                                               unsigned int sharedMemSize = 0,
                                               cudaStream_t stream = 0);
The portions of the CUDA Runtime API supported in the device runtime are detailed here. Host and de-
vice runtime APIs have identical syntax; semantics are the same except where indicated. The following
table provides an overview of the API relative to the version available from the host.
Device-side kernel launches can be implemented using the following two APIs accessible from PTX:
cudaLaunchDevice() and cudaGetParameterBuffer(). cudaLaunchDevice() launches the
specified kernel with the parameter buffer that is obtained by calling cudaGetParameterBuffer()
and filled with the parameters to the launched kernel. The parameter buffer can be NULL, i.e., no need
to invoke cudaGetParameterBuffer(), if the launched kernel does not take any parameters.
12.3.2.1.1 cudaLaunchDevice
At the PTX level, cudaLaunchDevice() needs to be declared in one of the two forms shown below
before it is used.
// PTX-level Declaration of cudaLaunchDevice() when .address_size is 64
.extern .func(.param .b32 func_retval0) cudaLaunchDevice
(
.param .b64 func,
.param .b64 parameterBuffer,
.param .align 4 .b8 gridDimension[12],
.param .align 4 .b8 blockDimension[12],
.param .b32 sharedMemSize,
.param .b64 stream
)
;
The CUDA-level declaration below is mapped to one of the aforementioned PTX-level declarations and
is found in the system header file cuda_device_runtime_api.h. The function is defined in the
cudadevrt system library, which must be linked with a program in order to use device-side kernel
launch functionality.
// CUDA-level declaration of cudaLaunchDevice()
extern "C" __device__
cudaError_t cudaLaunchDevice(void *func, void *parameterBuffer,
                             dim3 gridDimension, dim3 blockDimension,
                             unsigned int sharedMemSize,
                             cudaStream_t stream);
The first parameter is a pointer to the kernel to be launched, and the second parameter is the parameter buffer that holds the actual parameters to the launched kernel. The layout of the parameter buffer is explained in Parameter Buffer Layout, below. Other parameters specify the launch configuration, that is, the grid dimension, block dimension, shared memory size, and the stream associated with the launch (please refer to Execution Configuration for a detailed description of the launch configuration).
12.3.2.1.2 cudaGetParameterBuffer
cudaGetParameterBuffer() needs to be declared at the PTX level before it’s used. The PTX-level
declaration must be in one of the two forms given below, depending on address size:
// PTX-level Declaration of cudaGetParameterBuffer() when .address_size is 64
.extern .func(.param .b64 func_retval0) cudaGetParameterBuffer
(
.param .b64 alignment,
.param .b64 size
)
;
The first parameter specifies the alignment requirement of the parameter buffer and the second parameter the size requirement in bytes. In the current implementation, the parameter buffer returned by cudaGetParameterBuffer() is always guaranteed to be 64-byte aligned, and the alignment requirement parameter is ignored. However, it is recommended to pass the correct alignment requirement value, which is the largest alignment of any parameter to be placed in the parameter buffer, to cudaGetParameterBuffer() to ensure portability in the future.
Parameter reordering in the parameter buffer is prohibited, and each individual parameter placed in
the parameter buffer is required to be aligned. That is, each parameter must be placed at the nth byte
in the parameter buffer, where n is the smallest multiple of the parameter size that is greater than the
offset of the last byte taken by the preceding parameter. The maximum size of the parameter buffer
is 4KB.
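As a brief worked example (a hypothetical parameter list, not from the original text), a child kernel taking (int, char, float*, double) on a 64-bit platform would have its parameters placed as follows under this rule:
// Hypothetical layout for child(int a, char b, float *c, double d):
//   int    a : size 4 -> offset  0
//   char   b : size 1 -> offset  4   (first multiple of 1 past byte 3)
//   float *c : size 8 -> offset  8   (first multiple of 8 past byte 4)
//   double d : size 8 -> offset 16   (first multiple of 8 past byte 15)
// Total space used: 24 bytes, well under the 4KB maximum.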
For a more detailed description of PTX code generated by the CUDA compiler, please refer to the PTX-
3.5 specification.
Similar to the host-side runtime API, prototypes for the CUDA device runtime API are included automatically during program compilation. There is no need to include cuda_device_runtime_api.h explicitly.
When compiling and linking CUDA programs using dynamic parallelism with nvcc, the program will
automatically link against the static device runtime library libcudadevrt.
The device runtime is offered as a static library (cudadevrt.lib on Windows, libcudadevrt.a under
Linux), against which a GPU application that uses the device runtime must be linked. Linking of device
libraries can be accomplished through nvcc and/or nvlink. Two simple examples are shown below.
A device runtime program may be compiled and linked in a single step, if all required source files can
be specified from the command line:
$ nvcc -arch=sm_75 -rdc=true hello_world.cu -o hello -lcudadevrt
It is also possible to compile CUDA .cu source files first to object files, and then link these together in
a two-stage process:
$ nvcc -arch=sm_75 -dc hello_world.cu -o hello_world.o
$ nvcc -arch=sm_75 -rdc=true hello_world.o -o hello -lcudadevrt
Please see the Using Separate Compilation section of the CUDA Compiler Driver NVCC guide for more details.
12.4.1. Basics
The device runtime is a functional subset of the host runtime. API level device management, kernel
launching, device memcpy, stream management, and event management are exposed from the device
runtime.
Programming for the device runtime should be familiar to someone who already has experience with
CUDA. Device runtime syntax and semantics are largely the same as that of the host API, with any
exceptions detailed earlier in this document.
The following example shows a simple Hello World program incorporating dynamic parallelism:
#include <stdio.h>

__global__ void tailKernel() {
    printf("World!\n");
}

__global__ void parentKernel() {
    printf("Hello ");
    // Tail launch: runs only after the parent grid has completed.
    tailKernel<<< 1, 1, 0, cudaStreamTailLaunch >>>();
}

int main() {
    parentKernel<<< 1, 1 >>>();
    cudaDeviceSynchronize();   // host-side wait for the whole launch tree
    return 0;
}
This program may be built in a single step from the command line as follows:
$ nvcc -arch=sm_75 -rdc=true hello_world.cu -o hello -lcudadevrt
12.4.2. Performance
12.4.2.1 Dynamic-parallelism-enabled Kernel Overhead
System software which is active when controlling dynamic launches may impose an overhead on any
kernel which is running at the time, whether or not it invokes kernel launches of its own. This over-
head arises from the device runtime’s execution tracking and management software and may result
in decreased performance. This overhead is, in general, incurred for applications that link against the
device runtime library.
12.4.3.1 Runtime
The device runtime system software reserves memory for various management purposes, in particular
a reservation for tracking pending grid launches. Configuration controls are available to reduce the size
of this reservation in exchange for certain launch limitations. See Configuration Options, below, for
details.
When a kernel is launched, all associated configuration and parameter data is tracked until the kernel
completes. This data is stored within a system-managed launch pool.
The size of the fixed-size launch pool is configurable by calling cudaDeviceSetLimit() from the host
and specifying cudaLimitDevRuntimePendingLaunchCount.
Resource allocation for the device runtime system software is controlled via the cudaDevice-
SetLimit() API from the host program. Limits must be set before any kernel is launched, and may
not be changed while the GPU is actively running programs.
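For example, a hedged sketch of raising the pending launch count from host code before any kernel has been launched (the value shown is illustrative):
// Host code, before the first kernel launch of the application.
cudaError_t err = cudaDeviceSetLimit(cudaLimitDevRuntimePendingLaunchCount, 32768);
if (err != cudaSuccess) {
    // Handle the error; the limit cannot be changed while the GPU is running kernels.
}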
The following named limits may be set:
▶ cudaLimitDevRuntimePendingLaunchCount: Controls the amount of memory set aside for buffering kernel launches and events which have not yet begun to execute, due either to unresolved dependencies or lack of execution resources. When the buffer is full, an attempt to allocate a launch slot during a device-side kernel launch will fail and return cudaErrorLaunchOutOfResources, while an attempt to allocate an event slot will fail and return cudaErrorMemoryAllocation. The default number of launch slots is 2048. Applications may increase the number of launch and/or event slots by setting cudaLimitDevRuntimePendingLaunchCount. The number of event slots allocated is twice the value of that limit.
▶ cudaLimitStackSize: Controls the stack size in bytes of each GPU thread. The CUDA driver automatically increases the per-thread stack size for each kernel launch as needed. This size is not reset back to the original value after each launch. To set the per-thread stack size to a different value, cudaDeviceSetLimit() can be called to set this limit. The stack will be immediately resized, and if necessary, the device will block until all preceding requested tasks are complete. cudaDeviceGetLimit() can be called to get the current per-thread stack size.
cudaMalloc() and cudaFree() have distinct semantics between the host and device environments.
When invoked from the host, cudaMalloc() allocates a new region from unused device memory.
When invoked from the device runtime these functions map to device-side malloc() and free().
This implies that within the device environment the total allocatable memory is limited to the device
malloc() heap size, which may be smaller than the available unused device memory. Also, it is an error
to invoke cudaFree() from the host program on a pointer which was allocated by cudaMalloc() on
the device or vice-versa.
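A hedged sketch of the device-side behavior (names and sizes are illustrative): the host enlarges the device malloc() heap before launching, and the kernel then allocates and frees within that heap:
__global__ void scratch_user() {
    int *scratch = NULL;
    if (cudaMalloc((void **)&scratch, 256 * sizeof(int)) == cudaSuccess) {
        scratch[0] = 42;       // memory comes from the device malloc() heap
        cudaFree(scratch);     // must also be freed from device code
    }
}

void host_setup_and_launch() {
    // Enlarge the device heap before any allocating kernel runs (size is illustrative).
    cudaDeviceSetLimit(cudaLimitMallocHeapSize, 64 * 1024 * 1024);
    scratch_user<<< 1, 1 >>>();
    cudaDeviceSynchronize();
}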
Note that in PTX %smid and %warpid are defined as volatile values. The device runtime may reschedule
thread blocks onto different SMs in order to more efficiently manage resources. As such, it is unsafe
to rely upon %smid or %warpid remaining unchanged across the lifetime of a thread or thread block.
No notification of ECC errors is available to code within a CUDA kernel. ECC errors are reported at the
host side once the entire launch tree has completed. Any ECC errors which arise during execution of
a nested program will either generate an exception or continue execution (depending upon error and
configuration).
Functions using CDP1 and CDP2 may be loaded and run simultaneously in the same context. The
CDP1 functions are able to use CDP1-specific features (e.g. cudaDeviceSynchronize) and CDP2
functions are able to use CDP2-specific features (e.g. tail launch and fire-and-forget launch).
A function using CDP1 cannot launch a function using CDP2, and vice versa. If a function that would
use CDP1 contains in its call graph a function that would use CDP2, or vice versa, cudaErrorCdpVer-
sionMismatch would result during function load.
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDeviceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ compilation, and is slated for full removal in a future CUDA release.
See Parent and Child Grids, above, for CDP2 version of document.
A device thread that configures and launches a new grid belongs to the parent grid, and the grid
created by the invocation is a child grid.
The invocation and completion of child grids is properly nested, meaning that the parent grid is not
considered complete until all child grids created by its threads have completed. Even if the invoking
threads do not explicitly synchronize on the child grids launched, the runtime guarantees an implicit
synchronization between the parent and child.
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
CUDA runtime operations from any thread, including kernel launches, are visible across a thread block.
This means that an invoking thread in the parent grid may perform synchronization on the grids
launched by that thread, by other threads in the thread block, or on streams created within the same
thread block. Execution of a thread block is not considered complete until all launches by all threads
in the block have completed. If all threads in a block exit before all child launches have completed, a
synchronization operation will automatically be triggered.
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
While concurrency will often easily be achieved, it may vary as a function of device configuration, application workload, and runtime scheduling. It is therefore unsafe to depend upon any concurrency
between different thread blocks.
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
All global memory operations in the parent thread prior to the child grid’s invocation are visible to
the child grid. All memory operations of the child grid are visible to the parent after the parent has
synchronized on the child grid’s completion.
In the following example, the child grid executing child_launch is only guaranteed to see the modi-
fications to data made before the child grid was launched. Since thread 0 of the parent is performing
the launch, the child will be consistent with the memory seen by thread 0 of the parent. Due to the
first __syncthreads() call, the child will see data[0]=0, data[1]=1, …, data[255]=255 (without
the __syncthreads() call, only data[0] would be guaranteed to be seen by the child). When the
child grid returns, thread 0 is guaranteed to see modifications made by the threads in its child grid.
Those modifications become available to the other threads of the parent grid only after the second
__syncthreads() call:
__global__ void child_launch(int *data) {
    data[threadIdx.x] = data[threadIdx.x]+1;
}

__global__ void parent_launch(int *data) {
    data[threadIdx.x] = threadIdx.x;

    __syncthreads();

    if (threadIdx.x == 0) {
        child_launch<<< 1, 256 >>>(data);
        cudaDeviceSynchronize();
    }

    __syncthreads();
}
See Shared and Local Memory, above, for CDP2 version of document.
Shared and Local memory is private to a thread block or thread, respectively, and is not visible or
coherent between parent and child. Behavior is undefined when an object in one of these locations is
referenced outside of the scope within which it belongs, and may cause an error.
The NVIDIA compiler will attempt to warn if it can detect that a pointer to local or shared memory
is being passed as an argument to a kernel launch. At runtime, the programmer may use the __is-
Global() intrinsic to determine whether a pointer references global memory and so may safely be
passed to a child launch.
Note that calls to cudaMemcpy*Async() or cudaMemset*Async() may invoke new child kernels on
the device in order to preserve stream semantics. As such, passing shared or local memory pointers
to these APIs is illegal and will return an error.
It is sometimes difficult for a programmer to be aware of when a variable is placed into local memory by
the compiler. As a general rule, all storage passed to a child kernel should be allocated explicitly from
the global-memory heap, either with cudaMalloc(), new() or by declaring __device__ storage at
global scope. For example:
∕∕ Correct - "value" is global storage
__device__ int value;
__device__ void x() {
value = 5;
child<<< 1, 1 >>>(&value);
}
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
▶ Dg is of type dim3 and specifies the dimensions and size of the grid
▶ Db is of type dim3 and specifies the dimensions and size of each thread block
▶ Ns is of type size_t and specifies the number of bytes of shared memory that is dynamically allocated per thread block for this call in addition to statically allocated memory. Ns is an optional argument that defaults to 0.
▶ S is of type cudaStream_t and specifies the stream associated with this call. The stream must
have been allocated in the same thread block where the call is being made. S is an optional
argument that defaults to 0.
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
The grid launch is posted to the device and will execute independently of the parent thread. The child
grid may begin execution at any time after launch, but is not guaranteed to begin execution until the
launching thread reaches an explicit launch-synchronization point.
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
See The Implicit (NULL) Stream, above, for CDP2 version of document.
Within a host program, the unnamed (NULL) stream has additional barrier synchronization semantics
with other streams (see Default Stream for details). The device runtime offers a single implicit, un-
named stream shared between all threads in a block, but as all named streams must be created with
the cudaStreamNonBlocking flag, work launched into the NULL stream will not insert an implicit
dependency on pending work in any other streams (including NULL streams of other thread blocks).
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
The cudaDeviceSynchronize() function will synchronize on all work launched by any thread in
the thread-block up to the point where cudaDeviceSynchronize() was called. Note that cud-
aDeviceSynchronize() may be called from within divergent code (see Block Wide Synchronization
(CDP1)).
It is up to the program to perform sufficient additional inter-thread synchronization, for example via
a call to __syncthreads(), if the calling thread is intended to synchronize with child grids invoked
from other threads.
Because the implementation is permitted to synchronize on launches from any thread in the block, it
is quite possible that simultaneous calls to cudaDeviceSynchronize() by multiple threads will drain
all work in the first call and then have no effect for the later calls.
See Device and Constant Memory, above, for CDP2 version of document.
Memory declared at file scope with __device__ or __constant__ memory space specifiers behaves
identically when using the device runtime. All kernels may read or write device variables, whether the
kernel was initially launched by the host or device runtime. Equivalently, all kernels will have the same
view of __constant__s as declared at the module scope.
Note: The device runtime does not support legacy module-scope (i.e., Fermi-style) textures and sur-
faces within a kernel launched from the device. Module-scope (legacy) textures may be created from
the host and used in device code as for any kernel, but may only be used by a top-level kernel (i.e., the
one which is launched from the host).
See Shared Memory Variable Declarations, above, for CDP2 version of document.
In CUDA C++ shared memory can be declared either as a statically sized file-scope or function-scoped
variable, or as an extern variable with the size determined at runtime by the kernel’s caller via a launch
configuration argument. Both types of declarations are valid under the device runtime.
__global__ void permute(int n, int *data) {
extern __shared__ int smem[];
if (n <= 1)
return;
smem[threadIdx.x] = data[threadIdx.x];
__syncthreads();
permute_data(smem, n);
__syncthreads();
if (threadIdx.x == 0) {
permute<<< 1, 256, n∕2*sizeof(int) >>>(n∕2, data);
permute<<< 1, 256, n∕2*sizeof(int) >>>(n∕2, data+n∕2);
}
}
See API Errors and Launch Failures, above, for CDP2 version of document.
As usual for the CUDA runtime, any function may return an error code. The last error code returned
is recorded and may be retrieved via the cudaGetLastError() call. Errors are recorded per-thread,
so that each thread can identify the most recent error that it has generated. The error code is of type
cudaError_t.
Similar to a host-side launch, device-side launches may fail for many reasons (invalid arguments, etc).
The user must call cudaGetLastError() to determine if a launch generated an error, however lack
of an error after launch does not imply the child kernel completed successfully.
For device-side exceptions, e.g., access to an invalid address, an error in a child grid will be returned to
the host instead of being returned by the parent’s call to cudaDeviceSynchronize().
The APIs for these launch functions are different to those of the CUDA Runtime API, and are defined
as follows:
extern __device__ cudaError_t cudaGetParameterBuffer(void **params);
extern __device__ cudaError_t cudaLaunchDevice(void *kernel,
                                               void *params, dim3 gridDim,
                                               dim3 blockDim,
                                               unsigned int sharedMemSize = 0,
                                               cudaStream_t stream = 0);
See Device-side Launch from PTX, above, for CDP2 version of document.
This section is for the programming language and compiler implementers who target Parallel Thread
Execution (PTX) and plan to support Dynamic Parallelism in their language. It provides the low-level
details related to supporting kernel launches at the PTX level.
The CUDA-level declaration below is mapped to one of the aforementioned PTX-level declarations and
is found in the system header file cuda_device_runtime_api.h. The function is defined in the
cudadevrt system library, which must be linked with a program in order to use device-side kernel
launch functionality.
The first parameter is a pointer to the kernel to be launched, and the second parameter is the parameter buffer that holds the actual parameters to the launched kernel. The layout of the parameter buffer is explained in Parameter Buffer Layout (CDP1), below. Other parameters specify the launch configuration, that is, the grid dimension, block dimension, shared memory size, and the stream associated with the launch (please refer to Execution Configuration for a detailed description of the launch configuration).
The first parameter specifies the alignment requirement of the parameter buffer and the second parameter the size requirement in bytes. In the current implementation, the parameter buffer returned by cudaGetParameterBuffer() is always guaranteed to be 64-byte aligned, and the alignment requirement parameter is ignored. However, it is recommended to pass the correct alignment requirement value, which is the largest alignment of any parameter to be placed in the parameter buffer, to cudaGetParameterBuffer() to ensure portability in the future.
See Toolkit Support for Dynamic Parallelism, above, for CDP2 version of document.
See Including Device Runtime API in CUDA Code, above, for CDP2 version of document.
Similar to the host-side runtime API, prototypes for the CUDA device runtime API are included automatically during program compilation. There is no need to include cuda_device_runtime_api.h explicitly.
It is also possible to compile CUDA .cu source files first to object files, and then link these together in
a two-stage process:
$ nvcc -arch=sm_75 -dc hello_world.cu -o hello_world.o
$ nvcc -arch=sm_75 -rdc=true hello_world.o -o hello -lcudadevrt
Please see the Using Separate Compilation section of the CUDA Compiler Driver NVCC guide for more details.
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
The following example shows a simple Hello World program incorporating dynamic parallelism:
#include <stdio.h>

__global__ void childKernel() {
    printf("Hello ");
}

__global__ void parentKernel() {
    childKernel<<< 1, 1 >>>();
    cudaDeviceSynchronize();   // CDP1 only: wait for the child before continuing
    printf("World!\n");
}

int main() {
    parentKernel<<< 1, 1 >>>();
    cudaDeviceSynchronize();
    return 0;
}
This program may be built in a single step from the command line as follows:
$ nvcc -arch=sm_75 -rdc=true hello_world.cu -o hello -lcudadevrt
Warning: Explicit synchronization with child kernels from a parent block (such as using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
Synchronization by one thread may impact the performance of other threads in the same Thread Block,
even when those other threads do not call cudaDeviceSynchronize() themselves. This impact will
depend upon the underlying implementation. In general the implicit synchronization of child kernels
done when a thread block ends is more efficient compared to calling cudaDeviceSynchronize()
explicitly. It is therefore recommended to only call cudaDeviceSynchronize() if it is needed to
synchronize with a child kernel before a thread block ends.
See Implementation Restrictions and Limitations, above, for CDP2 version of document.
Dynamic Parallelism guarantees all semantics described in this document, however, certain hardware
and software resources are implementation-dependent and limit the scale, performance and other
properties of a program which uses the device runtime.
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
The overall maximum nesting depth is limited to 24, but practically speaking the real limit will be the
amount of memory required by the system for each new level (see Memory Footprint (CDP1) above).
Any launch which would result in a kernel at a deeper level than the maximum will fail. Note that this
may also apply to cudaMemcpyAsync(), which might itself generate a kernel launch. See Configura-
tion Options (CDP1) for details.
By default, sufficient storage is reserved for two levels of synchronization. This maximum synchronization depth (and hence reserved storage) may be controlled by calling cudaDeviceSetLimit() and specifying cudaLimitDevRuntimeSyncDepth.
Warning: Explicit synchronization with child kernels from a parent block (i.e. using cudaDe-
viceSynchronize() in device code) is deprecated in CUDA 11.6, removed for compute_90+ com-
pilation, and is slated for full removal in a future CUDA release.
▶ cudaLimitDevRuntimeSyncDepth: Sets the maximum depth at which cudaDeviceSynchronize() may be called. Launches may be performed deeper than this, but explicit synchronization deeper than this limit will return cudaErrorLaunchMaxDepthExceeded. The default maximum sync depth is 2.
▶ cudaLimitDevRuntimePendingLaunchCount: Controls the amount of memory set aside for buffering kernel launches which have not yet begun to execute, due either to unresolved dependencies or lack of execution resources. When the buffer is full, the device runtime system software will attempt to track new pending launches in a lower performance virtualized buffer. If the virtualized buffer is also full, i.e. when all available heap space is consumed, launches will not occur, and the thread's last error will be set to cudaErrorLaunchPendingCountExceeded. The default pending launch count is 2048 launches.
▶ cudaLimitStackSize: Controls the stack size in bytes of each GPU thread. The CUDA driver automatically increases the per-thread stack size for each kernel launch as needed. This size is not reset back to the original value after each launch. To set the per-thread stack size to a different value, cudaDeviceSetLimit() can be called to set this limit. The stack will be immediately resized, and if necessary, the device will block until all preceding requested tasks are complete. cudaDeviceGetLimit() can be called to get the current per-thread stack size.
See Memory Allocation and Lifetime, above, for CDP2 version of document.
cudaMalloc() and cudaFree() have distinct semantics between the host and device environments.
When invoked from the host, cudaMalloc() allocates a new region from unused device memory.
When invoked from the device runtime these functions map to device-side malloc() and free().
This implies that within the device environment the total allocatable memory is limited to the device
malloc() heap size, which may be smaller than the available unused device memory. Also, it is an error
to invoke cudaFree() from the host program on a pointer which was allocated by cudaMalloc() on
the device or vice-versa.
13.1. Introduction
The Virtual Memory Management APIs provide a way for the application to directly manage the unified
virtual address space that CUDA provides to map physical memory to virtual addresses accessible by
the GPU. Introduced in CUDA 10.2, these APIs additionally provide a new way to interop with other
processes and graphics APIs like OpenGL and Vulkan, as well as provide newer memory attributes that
a user can tune to fit their applications.
Historically, memory allocation calls (such as cudaMalloc()) in the CUDA programming model have
returned a memory address that points to the GPU memory. The address thus obtained could be used
with any CUDA API or inside a device kernel. However, the memory allocated could not be resized de-
pending on the user’s memory needs. In order to increase an allocation’s size, the user had to explicitly
allocate a larger buffer, copy data from the initial allocation, free it and then continue to keep track
of the newer allocation’s address. This often leads to lower performance and higher peak memory
utilization for applications. Essentially, users had a malloc-like interface for allocating GPU memory,
but did not have a corresponding realloc to complement it. The Virtual Memory Management APIs de-
couple the idea of an address and memory and allow the application to handle them separately. The
APIs allow applications to map and unmap memory from a virtual address range as they see fit.
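A hedged sketch of that basic workflow using the driver API (error handling omitted; a current CUDA context is assumed, and the helper name, size handling, and device index are illustrative):
#include <cuda.h>

// Allocate physical memory, reserve a VA range, map it, and make it accessible.
CUdeviceptr reserveAndMap(size_t size, int device) {
    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = device;

    size_t granularity = 0;
    cuMemGetAllocationGranularity(&granularity, &prop, CU_MEM_ALLOC_GRANULARITY_MINIMUM);
    size_t padded = ((size + granularity - 1) / granularity) * granularity;

    CUmemGenericAllocationHandle handle;
    cuMemCreate(&handle, padded, &prop, 0);            // physical memory

    CUdeviceptr ptr;
    cuMemAddressReserve(&ptr, padded, 0, 0, 0);        // VA range
    cuMemMap(ptr, padded, 0, handle, 0);               // map physical memory into the range

    CUmemAccessDesc access = {};
    access.location = prop.location;
    access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    cuMemSetAccess(ptr, padded, &access, 1);           // make the mapping accessible

    return ptr;   // later: cuMemUnmap, cuMemAddressFree, cuMemRelease(handle)
}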
When peer device access to memory allocations is enabled using cudaEnablePeerAccess, all past and future user allocations are mapped to the target peer device. This led to users unwittingly paying the runtime cost of mapping all cudaMalloc allocations to peer devices. However, in most situations applications communicate by sharing only a few allocations with another device, and not all allocations need to be mapped to all the devices. With Virtual Memory Management, applications can specifically choose certain allocations to be accessible from target devices.
The CUDA Virtual Memory Management APIs expose fine-grained control to the user for managing the GPU memory in applications. These APIs let users:
▶ Place memory allocated on different devices into a contiguous VA range.
▶ Perform interprocess communication for memory sharing using platform-specific mechanisms.
▶ Opt into newer memory types on the devices that support them.
In order to allocate memory, the Virtual Memory Management programming model exposes the fol-
lowing functionality:
▶ Allocating physical memory.
▶ Reserving a VA range.
Support for Virtual Memory Management, and for the handle types used for interprocess sharing, can be queried with cuDeviceGetAttribute before these APIs are used:

int deviceSupportsVmm = 0;
cuDeviceGetAttribute(&deviceSupportsVmm,
                     CU_DEVICE_ATTRIBUTE_VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED, device);
if (deviceSupportsVmm != 0) {
    // `device` supports Virtual Memory Management
}

int deviceSupportsIpcHandle = 0;
#if defined(__linux__)
cuDeviceGetAttribute(&deviceSupportsIpcHandle,
                     CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR_SUPPORTED, device);
#else
cuDeviceGetAttribute(&deviceSupportsIpcHandle,
                     CU_DEVICE_ATTRIBUTE_HANDLE_TYPE_WIN32_HANDLE_SUPPORTED, device);
#endif
The memMapIpcDrv sample can be used as an example for using IPC with Virtual Memory Manage-
ment allocations.
Compressible memory can be used to accelerate accesses to data with unstructured spar-
sity and other compressible data patterns. Compression can save DRAM bandwidth, L2
read bandwidth and L2 capacity depending on the data being operated on. Applications
that want to allocate compressible memory on devices that support Compute Data Com-
pression can do so by setting CUmemAllocationProp::allocFlags::compressionType to
CU_MEM_ALLOCATION_COMP_GENERIC. Users must query whether the device supports Compute Data Compression using the CU_DEVICE_ATTRIBUTE_GENERIC_COMPRESSION_SUPPORTED attribute. The following code snippet illustrates querying compressible memory support via cuDeviceGetAttribute:
int compressionSupported = 0;
cuDeviceGetAttribute(&compressionSupported,
                     CU_DEVICE_ATTRIBUTE_GENERIC_COMPRESSION_SUPPORTED, device);
On devices that support Compute Data Compression, users must opt in at allocation time as shown
below:
prop.allocFlags.compressionType = CU_MEM_ALLOCATION_COMP_GENERIC;
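A hedged sketch of the full opt-in (the surrounding property values and the padded size are illustrative):
CUmemAllocationProp prop = {};
prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
prop.location.id = device;
prop.allocFlags.compressionType = CU_MEM_ALLOCATION_COMP_GENERIC;

CUmemGenericAllocationHandle allocationHandle;
// paddedSize is assumed to already be rounded up to the allocation granularity.
cuMemCreate(&allocationHandle, paddedSize, &prop, 0);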
For various reasons, such as limited hardware resources, the allocation may not actually get the compression attribute; the user is expected to query back the properties of the allocated memory using cuMemGetAllocationPropertiesFromHandle and check for the compression attribute.
CUmemAllocationProp allocationProp = {};
cuMemGetAllocationPropertiesFromHandle(&allocationProp, allocationHandle);

if (allocationProp.allocFlags.compressionType == CU_MEM_ALLOCATION_COMP_GENERIC)
{
    // Obtained compressible memory allocation
}
Virtual address ranges are reserved with cuMemAddressReserve; a reserved range can later be mapped to memory belonging to different devices. Applications are expected to return the virtual address range back to CUDA using cuMemAddressFree. Users must ensure that the entire VA range is unmapped before calling cuMemAddressFree. These functions are conceptually similar to the mmap/munmap (on Linux) or VirtualAlloc/VirtualFree (on Windows) functions. The following code snippet illustrates the usage for the function:
CUdeviceptr ptr;
// `ptr` holds the returned start of virtual address range reserved.
CUresult result = cuMemAddressReserve(&ptr, size, 0, 0, 0); // alignment = 0 for default alignment
The following is defined behavior, assuming these two kernels are ordered monotonically (by streams
or events).
__global__ void foo1(char *A) {
    *A = 0x1;
}

__global__ void foo2(char *A) {
    *A = 0x2;
}
The access control mechanism exposed with Virtual Memory Management allows users to be explicit about which allocations they want to share with other peer devices on the system. As specified earlier, cudaEnablePeerAccess forces all prior and future cudaMalloc'd allocations to be mapped to the target peer device. This can be convenient in many cases, as the user doesn't have to worry about tracking the mapping state of every allocation to every device in the system, but it has performance implications for users concerned with the performance of their applications. With access control at allocation granularity, Virtual Memory Management exposes a mechanism to have peer mappings with minimal overhead.
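A hedged sketch of granting a single peer device read/write access to one mapped range (the pointer, size, and device index are illustrative):
CUmemAccessDesc peerAccess = {};
peerAccess.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
peerAccess.location.id = peerDevice;                   // assumed: index of the accessing peer device
peerAccess.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
cuMemSetAccess(ptr, size, &peerAccess, 1);             // only this range becomes peer accessible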
The vectorAddMMAP sample can be used as an example for using the Virtual Memory Management
APIs.
14.1. Introduction
Managing memory allocations using cudaMalloc and cudaFree causes the GPU to synchronize across all executing CUDA streams. The Stream Ordered Memory Allocator enables applications to order memory allocation and deallocation with other work launched into a CUDA stream such as kernel launches and
allocation and deallocation with other work launched into a CUDA stream such as kernel launches and
asynchronous copies. This improves application memory use by taking advantage of stream-ordering
semantics to reuse memory allocations. The allocator also allows applications to control the allocator’s
memory caching behavior. When set up with an appropriate release threshold, the caching behavior
allows the allocator to avoid expensive calls into the OS when the application indicates it is willing
to accept a bigger memory footprint. The allocator also supports the easy and secure sharing of
allocations between processes.
For many applications, the Stream Ordered Memory Allocator reduces the need for custom memory
management abstractions, and makes it easier to create high-performance custom memory manage-
ment for applications that need it. For applications and libraries that already have custom memory
allocators, adopting the Stream Ordered Memory Allocator enables multiple libraries to share a com-
mon pool of memory managed by the driver, thus reducing excess memory consumption. Additionally,
the driver can perform optimizations based on its awareness of the allocator and other stream man-
agement APIs. Finally, Nsight Compute and the Next-Gen CUDA debugger are aware of the allocator as part of their CUDA 11.3 toolkit support.
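As a hedged sketch, the release threshold of a device's default memory pool can be raised so freed memory stays cached in the pool (the threshold value is illustrative):
int device = 0;
cudaMemPool_t mempool;
cudaDeviceGetDefaultMemPool(&mempool, device);

// Keep freed memory cached in the pool instead of returning it to the OS eagerly.
uint64_t threshold = UINT64_MAX;
cudaMemPoolSetAttribute(mempool, cudaMemPoolAttrReleaseThreshold, &threshold);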
Performing the driver version check before the query avoids hitting a cudaErrorInvalidValue error
on drivers where the attribute was not yet defined. One can use cudaGetLastError to clear the error
instead of avoiding it.
When using an allocation in a stream other than the allocating stream, the user must guarantee that
the access will happen after the allocation operation, otherwise the behavior is undefined. The user
may make this guarantee either by synchronizing the allocating stream, or by using CUDA events to
synchronize the producing and consuming streams.
cudaFreeAsync() inserts a free operation into the stream. The user must guarantee that the free
operation happens after the allocation operation and any use of the allocation. Also, any use of the
allocation after the free operation starts results in undefined behavior. Events and/or stream syn-
chronizing operations should be used to guarantee any access to the allocation on other streams is
complete before the freeing stream begins the free operation.
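A hedged sketch of both guarantees using events (the stream, event, and kernel names follow the placeholder style of the surrounding snippets):
cudaMallocAsync(&ptr, size, streamA);       // the allocation becomes valid in streamA's order
cudaEventRecord(allocReady, streamA);

cudaStreamWaitEvent(streamB, allocReady);   // streamB may only touch ptr after this point
kernel<<<..., streamB>>>(ptr, ...);
cudaEventRecord(useDone, streamB);

cudaStreamWaitEvent(streamA, useDone);      // the free must come after all uses of ptr
cudaFreeAsync(ptr, streamA);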
The user can free allocations allocated with cudaMalloc() with cudaFreeAsync(). The user must
make the same guarantees about accesses being complete before the free operation begins.
cudaMalloc(&ptr, size);
kernel<<<..., stream>>>(ptr, ...);
cudaFreeAsync(ptr, stream);
The user can free memory allocated with cudaMallocAsync with cudaFree(). When freeing such
allocations through the cudaFree() API, the driver assumes that all accesses to the allocation are
complete and performs no further synchronization. The user can use cudaStreamQuery / cudaS-
treamSynchronize / cudaEventQuery / cudaEventSynchronize / cudaDeviceSynchronize to
guarantee that the appropriate asynchronous work is complete and that the GPU will not try to ac-
cess the allocation.
cudaMallocAsync(&ptr, size, stream);
kernel<<<..., stream>>>(ptr, ...);
// synchronize is needed to avoid prematurely freeing the memory
cudaStreamSynchronize(stream);
cudaFree(ptr);
Note: The memory pool current to a device is local to that device, so allocating without specifying a
memory pool always yields an allocation local to the stream’s device.
cudaMemPoolCreate(&memPool, &poolProps);
The following code snippet illustrates an example of creating an IPC capable memory pool on a valid
CPU NUMA node.
// create a pool resident on a CPU NUMA node that is capable of IPC sharing
// (via a file descriptor).
int cpu_numa_id = 0;
cudaMemPoolProps poolProps = { };
poolProps.allocType = cudaMemAllocationTypePinned;
poolProps.location.id = cpu_numa_id;
poolProps.location.type = cudaMemLocationTypeHostNuma;
poolProps.handleTypes = cudaMemHandleTypePosixFileDescriptor;
cudaMemPoolCreate(&ipcMemPool, &poolProps);
// application phase needing a lot of memory from the stream ordered allocator
for (i = 0; i < 10; i++) {
    for (j = 0; j < 10; j++) {
        cudaMallocAsync(&ptrs[j], size[j], stream);
    }
    kernel<<<..., stream>>>(ptrs, ...);
    for (j = 0; j < 10; j++) {
        cudaFreeAsync(ptrs[j], stream);
    }
}
∕∕ Process does not need as much memory for the next phase.
∕∕ Synchronize so that the trim operation will know that the allocations are no
∕∕ longer in use.
cudaStreamSynchronize(stream);
cudaMemPoolTrimTo(mempool, 0);
∕∕ Some other process∕allocation mechanism can now use the physical memory
∕∕ released by the trimming operation.
∕∕ resetting the watermarks will make them take on the current value.
void resetStatistics(cudaMemPool_t memPool)
{
cuuint64_t value = 0;
cudaMemPoolSetAttribute(memPool, cudaMemPoolAttrReservedMemHigh, &value);
cudaMemPoolSetAttribute(memPool, cudaMemPoolAttrUsedMemHigh, &value);
}
Note that the driver may change, enhance, augment, and/or reorder the reuse policies.
14.9.1. cudaMemPoolReuseFollowEventDependencies
Before allocating more physical GPU memory, the allocator examines dependency information estab-
lished by CUDA events and tries to allocate from memory freed in another stream.
cudaMallocAsync(&ptr, size, originalStream);
kernel<<<..., originalStream>>>(ptr, ...);
cudaFreeAsync(ptr, originalStream);
cudaEventRecord(event,originalStream);
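The snippet above presumably continues with a second stream that waits on the recorded event; the
otherStream and ptr2 names below are illustrative:
// Because otherStream is ordered after the free through the event dependency,
// the allocator may satisfy this request with the memory freed in originalStream.
cudaStreamWaitEvent(otherStream, event);
cudaMallocAsync(&ptr2, size, otherStream);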
14.9.2. cudaMemPoolReuseAllowOpportunistic
According to the cudaMemPoolReuseAllowOpportunistic policy, the allocator examines freed al-
locations to see if the free’s stream order semantics have been met (that is, the stream has passed the
point of execution indicated by the free). When this policy is disabled, the allocator will still reuse memory
made available when a stream is synchronized with the CPU. Disabling this policy does not stop the
cudaMemPoolReuseFollowEventDependencies from applying.
cudaMallocAsync(&ptr, size, originalStream);
kernel<<<..., originalStream>>>(ptr, ...);
cudaFreeAsync(ptr, originalStream);
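A hedged sketch of how the opportunistic reuse might be exercised (otherStream is an assumed
second stream, not part of the original snippet):
// If, by the time this call is made, originalStream has already executed past the
// cudaFreeAsync above, the allocator may reuse ptr’s memory for ptr2 even though
// no explicit dependency links the two streams.
cudaMallocAsync(&ptr2, size, otherStream);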
14.9.3. cudaMemPoolReuseAllowInternalDependencies
If the driver fails to allocate and map more physical memory from the OS, it will look for memory whose
availability depends on another stream’s pending progress. If such memory is found, the driver will
insert the required dependency into the allocating stream and reuse the memory.
cudaMallocAsync(&ptr, size, originalStream);
kernel<<<..., originalStream>>>(ptr, ...);
cudaFreeAsync(ptr, originalStream);
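An illustrative continuation (otherStream is assumed): when the driver cannot obtain new physical
memory, it may reuse ptr’s memory by inserting the needed dependency itself.
// With cudaMemPoolReuseAllowInternalDependencies enabled, the driver may satisfy this
// allocation with ptr’s memory by making otherStream wait internally on the free
// that was enqueued in originalStream.
cudaMallocAsync(&ptr2, size, otherStream);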
int canAccess = 0;
cudaError_t error = cudaDeviceCanAccessPeer(&canAccess, accessingDevice,
residentDevice);
if (error != cudaSuccess) {
    return error;
}
∕∕ Setting handleTypes to a non zero value will make the pool exportable (IPC capable)
poolProps.handleTypes = CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR;
cudaMemPoolCreate(&memPool, &poolProps);
∕∕ The handle must be sent to the importing process with the appropriate
∕∕ OS specific APIs.
∕∕ in importing process
int fdHandle;
∕∕ The handle needs to be retrieved from the exporting process with the
∕∕ appropriate OS specific APIs.
∕∕ Create an imported pool from the shareable handle.
∕∕ Note that the handle is passed by value here.
cudaMemPoolImportFromShareableHandle(&importedMemPool,
(void*)fdHandle,
CU_MEM_HANDLE_TYPE_POSIX_FILE_DESCRIPTOR,
0);
cudaEventCreate(&readyIpcEvent, cudaEventDisableTiming | cudaEventInterprocess);
∕∕ Share IPC event and pointer export data with the importing process using
∕∕ any mechanism. Here we copy the data into shared memory
shmem->ptrData = exportData;
shmem->readyIpcEventHandle = readyIpcEventHandle;
∕∕ signal consumers data is ready
∕∕ Importing an allocation
cudaMemPoolPtrExportData *importData = &shmem->ptrData;
cudaEvent_t readyIpcEvent;
cudaIpcEventHandle_t *readyIpcEventHandle = &shmem->readyIpcEventHandle;
∕∕ Need to retrieve the ipc event handle and the export data from the
∕∕ exporting process using any mechanism. Here we are using shmem and just
∕∕ need synchronization to make sure the shared memory is filled in.
cudaIpcOpenEventHandle(&readyIpcEvent, readyIpcEventHandle);
∕∕ import the allocation. The operation does not block on the allocation being ready.
cudaMemPoolImportPointer(&ptr, importedMemPool, importData);
∕∕ Wait for the prior stream operations in the allocating stream to complete before
∕∕ using the allocation in the importing process.
cudaStreamWaitEvent(stream, readyIpcEvent);
kernel<<<..., stream>>>(ptr, ...);
When freeing the allocation, the allocation needs to be freed in the importing process before it is
freed in the exporting process. The following code snippet demonstrates the use of CUDA IPC events
to provide the required synchronization between the cudaFreeAsync operations in both processes.
Access to the allocation in the importing process is, of course, only valid before the free operation on
the importing-process side. Note that cudaFree can be used to free the allocation in both processes
and that other stream synchronization APIs may be used instead of CUDA IPC events.
// The free must happen in the importing process before the free in the exporting process.
kernel<<<..., stream>>>(ptr, ...);
∕∕ Exporting process
// The exporting process needs to coordinate its free with the stream order
// of the free in the importing process.
∕∕ The free in the importing process doesn’t stop the exporting process
∕∕ from using the allocation.
cudaFreeAsync(ptrInExportingProcess, stream);
14.13. Addendums
14.13.3. cuGraphAddMemsetNode
cuGraphAddMemsetNode does not work with memory allocated via the stream ordered allocator.
However, memsets of the allocations can be stream captured.
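A minimal sketch of the stream-capture workaround (the stream, size, and graph names are illustrative):
cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
cudaMallocAsync(&dptr, size, stream);
cudaMemsetAsync(dptr, 0, size, stream);   // captured into the graph instead of using cuGraphAddMemsetNode
cudaFreeAsync(dptr, stream);
cudaStreamEndCapture(stream, &graph);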
15.1. Introduction
Graph memory nodes allow graphs to create and own memory allocations. Graph memory nodes have
GPU ordered lifetime semantics, which dictate when memory is allowed to be accessed on the device.
These GPU ordered lifetime semantics enable driver-managed memory reuse, and match those of the
stream ordered allocation APIs cudaMallocAsync and cudaFreeAsync, which may be captured when
creating a graph.
Graph allocations have fixed addresses over the life of a graph including repeated instantiations and
launches. This allows the memory to be directly referenced by other operations within the graph with-
out the need of a graph update, even when CUDA changes the backing physical memory. Within a
graph, allocations whose graph ordered lifetimes do not overlap may use the same underlying physical
memory.
CUDA may reuse the same physical memory for allocations across multiple graphs, aliasing virtual ad-
dress mappings according to the GPU ordered lifetime semantics. For example when different graphs
are launched into the same stream, CUDA may virtually alias the same physical memory to satisfy the
needs of allocations which have single-graph lifetimes.
}
deviceSupportsMemoryNodes = (driverVersion >= 11040) && (deviceSupportsMemoryPools != 0);
Doing the attribute query inside the driver version check avoids an invalid value return code on 11.0
and 11.1 drivers. Be aware that the compute sanitizer emits warnings when it detects CUDA returning
error codes, and a version check before reading the attribute will avoid this. Graph memory nodes are
only supported on driver versions 11.4 and newer.
Note: Graph destruction does not automatically free any live graph-allocated memory, even though it
ends the lifetime of the allocation node. The allocation must subsequently be freed in another graph,
or using cudaFreeAsync()/cudaFree().
Just like other graph nodes, graph memory nodes are ordered within a graph by dependency edges. A
program must guarantee that operations accessing graph memory:
▶ are ordered after the allocation node
▶ are ordered before the operation freeing the memory
Graph allocation lifetimes begin and usually end according to GPU execution (as opposed to API invo-
cation). GPU ordering is the order that work runs on the GPU as opposed to the order that the work
is enqueued or described. Thus, graph allocations are considered ‘GPU ordered.’
dependencies[0] = b;
dependencies[1] = c;
cudaGraphAddMemFreeNode(&freeNode, graph, dependencies, 2, params.dptr);
// The free node does not depend on kernel node d, so kernel node d must not access the
// freed graph allocation.
// Kernel node e does not depend on the allocation node, so it must not access the
// allocation. This would be true even if the freeNode depended on kernel node e.
cudaGraphAddKernelNode(&e, graph, NULL, 0, &nodeParams);
Note: Because graph allocations may share underlying physical memory with each other, the Virtual
Aliasing Support rules relating to consistency and coherency must be considered. Simply put, the free
operation must be ordered after the full device operation (for example, compute kernel / memcpy)
completes. Specifically, out of band synchronization - for example a handshake through memory as
part of a compute kernel that accesses the graph-allocated memory - is not sufficient for providing
ordering guarantees between the memory writes to graph memory and the free operation of that
graph memory.
The following code snippets demonstrate accessing graph allocations outside of the allocating graph
with ordering properly established by: using a single stream, using events between streams, and using
events baked into the allocating and freeing graph.
Ordering established by using a single stream:
void *dptr;
cudaGraphAddMemAllocNode(&allocNode, allocGraph, NULL, 0, &params);
dptr = params.dptr;
cudaGraphLaunch(allocGraphExec, stream);
kernel<<< …, stream >>>(dptr, …);
cudaFreeAsync(dptr, stream);
Ordering established by recording and waiting on CUDA events between the streams:
void *dptr;
cudaGraphLaunch(allocGraphExec, allocStream);
cudaEventRecord(allocEvent, allocStream);
cudaStreamWaitEvent(stream2, allocEvent);
kernel<<< …, stream2 >>>(dptr, …);

// establish the dependency between stream3 and the allocation use
cudaEventRecord(streamUseDoneEvent, stream2);
cudaStreamWaitEvent(stream3, streamUseDoneEvent);
∕∕ it is now safe to launch the freeing graph, which may also access the memory
cudaGraphLaunch(freeGraphExec, stream3);
Ordering established by event record and wait nodes baked into the allocating and freeing graphs:
nodeParams->kernelParams[0] = params.dptr;

// The allocReadyEventNode provides ordering with the alloc node for use in a consuming graph.
cudaGraphLaunch(allocGraphExec, allocStream);
// Establishing the dependency of stream2 on the event node satisfies the ordering requirement.
cudaStreamWaitEvent(stream2, allocEvent);
kernel<<< …, stream2 >>> (dptr, …);
cudaEventRecord(streamUseDoneEvent, stream2);

// The event wait node in waitAndFreeGraphExec establishes the dependency on the
// “readyForFreeEvent” that is needed to prevent the kernel running in stream2 from
// accessing the allocation after the free node in the graph executes.
cudaGraphLaunch(waitAndFreeGraphExec, stream3);
15.3.4. cudaGraphInstantiateFlagAutoFreeOnLaunch
Under normal circumstances, CUDA will prevent a graph from being relaunched if it has unfreed
memory allocations because multiple allocations at the same address will leak memory. Instantiat-
ing a graph with the cudaGraphInstantiateFlagAutoFreeOnLaunch flag allows the graph to be
relaunched while it still has unfreed allocations. In this case, the launch automatically inserts an asyn-
chronous free of the unfreed allocations.
Auto free on launch is useful for single-producer multiple-consumer algorithms. At each iteration, a
producer graph creates several allocations, and, depending on runtime conditions, a varying set of con-
sumers accesses those allocations. This type of variable execution sequence means that consumers
cannot free the allocations because a subsequent consumer may require access. Auto free on launch
means that the launch loop does not need to track the producer’s allocations - instead, that informa-
tion remains isolated to the producer’s creation and destruction logic. In general, auto free on launch
simplifies an algorithm which would otherwise need to free all the allocations owned by a graph before
each relaunch.
∕∕ Create producer graph which allocates memory and populates it with data
cudaStreamBeginCapture(cudaStreamPerThread, cudaStreamCaptureModeGlobal);
cudaMallocAsync(&data1, blocks * threads, cudaStreamPerThread);
cudaMallocAsync(&data2, blocks * threads, cudaStreamPerThread);
produce<<<blocks, threads, 0, cudaStreamPerThread>>>(data1, data2);
...
cudaStreamEndCapture(cudaStreamPerThread, &graph);
cudaGraphInstantiateWithFlags(&producer, graph,
                              cudaGraphInstantiateFlagAutoFreeOnLaunch);
∕∕ Launch in a loop
bool launchConsumer2 = false;
do {
cudaGraphLaunch(producer, myStream);
cudaGraphLaunch(consumer1, myStream);
if (launchConsumer2) {
cudaGraphLaunch(consumer2, myStream);
}
} while (determineAction(&launchConsumer2));
cudaFreeAsync(data1, myStream);
cudaFreeAsync(data2, myStream);
cudaGraphExecDestroy(producer);
cudaGraphExecDestroy(consumer1);
cudaGraphExecDestroy(consumer2);
Because graph allocations may share physical memory in this way, if a program accesses a pointer
outside of an allocation’s lifetime, the erroneous access may silently read or write live data owned by
another allocation (even if the virtual address of the allocation is unique). The compute sanitizer tools
can catch this error.
The following figure shows graphs sequentially launched in the same stream. In this example, each
graph frees all the memory it allocates. Since the graphs in the same stream never run concurrently,
CUDA can and should use the same physical memory to satisfy all the allocations.
CUDA allows applications to query their graph memory footprint through the cudaDeviceGetGraphMemAt-
tribute API. Querying the attribute cudaGraphMemAttrReservedMemCurrent returns the amount
of physical memory reserved by the driver for graph allocations in the current process. Querying cud-
aGraphMemAttrUsedMemCurrent returns the amount of physical memory currently mapped by at
least one graph. Either of these attributes can be used to track when new physical memory is ac-
quired by CUDA for the sake of an allocating graph. Both of these attributes are useful for examining
how much memory is saved by the sharing mechanism.
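A brief sketch of such a query for device 0 (the variable names are illustrative; the attribute values are
returned as 64-bit counters):
cuuint64_t reservedBytes = 0, usedBytes = 0;
cudaDeviceGetGraphMemAttribute(0, cudaGraphMemAttrReservedMemCurrent, &reservedBytes);
cudaDeviceGetGraphMemAttribute(0, cudaGraphMemAttrUsedMemCurrent, &usedBytes);
// reservedBytes - usedBytes approximates memory held by the driver for graph
// allocations but not currently mapped by any graph.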
cudaMemAccessDesc accessDescs[2];
// boilerplate for the access descs (only ReadWrite and Device access supported by
// the add node api)
accessDescs[0].flags = cudaMemAccessFlagsProtReadWrite;
accessDescs[0].location.type = cudaMemLocationTypeDevice;
accessDescs[1].flags = cudaMemAccessFlagsProtReadWrite;
accessDescs[1].location.type = cudaMemLocationTypeDevice;
// access being requested for device 0 & 2. Device 1 access requirement left implicit.
accessDescs[0].location.id = 0;
accessDescs[1].location.id = 2;
accessDesc.flags = cudaMemAccessFlagsProtReadWrite;
accessDesc.location.type = cudaMemLocationTypeDevice;
accessDesc.location.id = 1;
cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
cudaMallocAsync(&dptr1, size, memPool, stream);
cudaStreamEndCapture(stream, &graph1);

cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
cudaMallocAsync(&dptr2, size, memPool, stream);
cudaStreamEndCapture(stream, &graph2);
// The graph node allocating dptr1 would only have the device 0 accessibility even
// though memPool now has device 1 accessibility.
// The graph node allocating dptr2 will have device 0 and device 1 accessibility,
// since that was the pool accessibility at the time of the cudaMallocAsync call.
The reference manual lists, along with their descriptions, all the C/C++ standard library mathematical
functions that are supported in device code, as well as all intrinsic functions (which are only supported
in device code).
This section provides accuracy information for some of these functions when applicable. It uses ULP
for quantification. For further information on the definition of the Unit in the Last Place (ULP), please
see Jean-Michel Muller’s paper On the definition of ulp(x), RR-5504, LIP RR-2005-09, INRIA, LIP. 2005,
pp.16 at https://hal.inria.fr/inria-00070503/document.
Mathematical functions supported in device code do not set the global errno variable, nor report
any floating-point exceptions to indicate errors; thus, if error diagnostic mechanisms are required,
the user should implement additional screening for inputs and outputs of the functions. The user is
responsible for the validity of pointer arguments. The user must not pass uninitialized parameters to
the Mathematical functions as this may result in undefined behavior: functions are inlined in the user
program and thus are subject to compiler optimizations.
As described in Compilation with NVCC, CUDA source files compiled with nvcc can include a mix of
host code and device code. The CUDA front-end compiler aims to emulate the host compiler behavior
with respect to C++ input code. The input source code is processed according to the C++ ISO/IEC
14882:2003, C++ ISO/IEC 14882:2011, C++ ISO/IEC 14882:2014 or C++ ISO/IEC 14882:2017 specifi-
cations, and the CUDA front-end compiler aims to emulate any host compiler divergences from the
ISO specification. In addition, the supported language is extended with CUDA-specific constructs de-
scribed in this document, and is subject to the restrictions described below.
C++11 Language Features, C++14 Language Features, C++17 Language Features and C++20 Language
Features provide support matrices for the C++11, C++14, C++17 and C++20 features, respectively.
Restrictions lists the language restrictions. Polymorphic Function Wrappers and Extended Lambdas
describe additional features. Code Samples gives code samples.
17.5. Restrictions
1. The type signature of the following entities shall not depend on whether __CUDA_ARCH__ is de-
fined or not, or on a particular value of __CUDA_ARCH__:
▶ __global__ functions and function templates
▶ __device__ and __constant__ variables
▶ textures and surfaces
Example:
#if !defined(__CUDA_ARCH__)
typedef int mytype;
#else
typedef double mytype;
#endif

__device__ mytype xxx;   // error: xxx’s type depends on whether __CUDA_ARCH__ is defined
2. If a __global__ function template is instantiated and launched from the host, then the func-
tion template must be instantiated with the same template arguments irrespective of whether
__CUDA_ARCH__ is defined and regardless of the value of __CUDA_ARCH__.
Example:
__device__ int result;
template <typename T>
__global__ void kern(T in)
{
result = in;
}

__host__ __device__ void foo(void)
{
#if !defined(__CUDA_ARCH__)
    kern<<<1,1>>>(1);            // instantiates kern<int> during host compilation
#else
    kern<<<1,1>>>((double)1.0);  // error: would instantiate kern<double> instead;
#endif                           // the instantiation must not depend on __CUDA_ARCH__
}

int main(void)
{
{
foo();
cudaDeviceSynchronize();
return 0;
}
4. In separate compilation, __CUDA_ARCH__ must not be used in headers such that different ob-
jects could contain different behavior. Or, it must be guaranteed that all objects will compile for
the same compute_arch. If a weak function or template function is defined in a header and its
behavior depends on __CUDA_ARCH__, then the instances of that function in the objects could
conflict if the objects are compiled for different compute arch.
For example, if an a.h contains:
template<typename T>
__device__ T* getptr(void)
{
#if __CUDA_ARCH__ == 700
return NULL; ∕* no address *∕
#else
__shared__ T arr[256];
return arr;
#endif
}
Then if a.cu and b.cu both include a.h and instantiate getptr for the same type, b.cu expects a
non-NULL address, and the two files are compiled for different compute architectures, then at link
time only one version of getptr is used, so the behavior would depend on which
version is chosen. To avoid this, either a.cu and b.cu must be compiled for the same compute
arch, or __CUDA_ARCH__ should not be used in the shared header function.
The compiler does not guarantee that a diagnostic will be generated for the unsupported uses of
__CUDA_ARCH__ described above.
17.5.3. Qualifiers
17.5.3.1 Device Memory Space Specifiers
The __device__, __shared__, __managed__ and __constant__ memory space specifiers are not
allowed on:
▶ class, struct, and union data members,
▶ formal parameters,
▶ non-extern variable declarations within a function that executes on the host.
The __device__, __constant__ and __managed__ memory space specifiers are not allowed on vari-
able declarations that are neither extern nor static within a function that executes on the device.
A __device__, __constant__, __managed__ or __shared__ variable definition cannot have a class
type with a non-empty constructor or a non-empty destructor. A constructor for a class type is con-
sidered empty at a point in the translation unit, if it is either a trivial constructor or it satisfies all of
the following conditions:
▶ The constructor function has been defined.
▶ The constructor function has no parameters, the initializer list is empty and the function body is
an empty compound statement.
▶ Its class has no virtual functions, no virtual base classes and no non-static data member initial-
izers.
▶ The default constructors of all base classes of its class can be considered empty.
▶ For all the nonstatic data members of its class that are of class type (or array thereof), the default
constructors can be considered empty.
A destructor for a class is considered empty at a point in the translation unit, if it is either a trivial
destructor or it satisfies all of the following conditions:
▶ The destructor function has been defined.
▶ The destructor function body is an empty compound statement.
▶ Its class has no virtual functions and no virtual base classes.
▶ The destructors of all base classes of its class can be considered empty.
▶ For all the nonstatic data members of its class that are of class type (or array thereof), the de-
structor can be considered empty.
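A short illustrative sketch of these rules (the type names are made up for this example):
struct EmptyCtor {
    __device__ EmptyCtor() { }       // empty: no parameters, empty body, no virtuals
};

struct NonEmptyCtor {
    int x;
    __device__ NonEmptyCtor() { x = 7; }   // non-empty constructor
};

__device__ EmptyCtor    d_ok;    // OK
__device__ NonEmptyCtor d_bad;   // error: class type has a non-empty constructor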
When compiling in the whole program compilation mode (see the nvcc user manual for a description of
this mode), __device__, __shared__, __managed__ and __constant__ variables cannot be defined
as external using the extern keyword. The only exception is for dynamically allocated __shared__
variables as described in __shared__.
When compiling in the separate compilation mode (see the nvcc user manual for a description of this
mode), __device__, __shared__, __managed__ and __constant__ variables can be defined as ex-
ternal using the extern keyword. nvlink will generate an error when it cannot find a definition for
an external variable (unless it is a dynamically allocated __shared__ variable).
Variables marked with the __managed__ memory space specifier (“managed” variables) have the fol-
lowing restrictions:
▶ The address of a managed variable is not a constant expression.
▶ A managed variable shall not have a const qualified type.
▶ A managed variable shall not have a reference type.
▶ The address or value of a managed variable shall not be used when the CUDA runtime may not
be in a valid state, including the following cases:
▶ In static/dynamic initialization or destruction of an object with static or thread local storage
duration.
▶ In code that executes after exit() has been called (for example, a function marked with gcc’s
“__attribute__((destructor))”).
▶ In code that executes when CUDA runtime may not be initialized (for example, a function
marked with gcc’s “__attribute__((constructor))”).
▶ A managed variable cannot be used as an unparenthesized id-expression argument to a de-
cltype() expression.
▶ Managed variables have the same coherence and consistency behavior as specified for dynami-
cally allocated managed memory.
▶ When a CUDA program containing managed variables is run on an execution platform with mul-
tiple GPUs, the variables are allocated only once, and not per GPU.
▶ A managed variable declaration without the extern linkage is not allowed within a function that
executes on the host.
▶ A managed variable declaration without the extern or static linkage is not allowed within a func-
tion that executes on the device.
Here are examples of legal and illegal uses of managed variables:
__device__ __managed__ int xxx = 10; ∕∕ OK
__device__ __managed__ const int yyy = 10; ∕∕ error: const qualified type
The compiler is free to optimize reads and writes to global or shared memory (for example, by caching
global reads into registers or L1 cache) as long as it respects the memory ordering semantics of mem-
ory fence functions (Memory Fence Functions) and memory visibility semantics of synchronization
functions (Synchronization Functions).
These optimizations can be disabled using the volatile keyword: If a variable located in global or
shared memory is declared as volatile, the compiler assumes that its value can be changed or used at
any time by another thread and therefore any reference to this variable compiles to an actual memory
read or write instruction.
17.5.4. Pointers
Dereferencing a pointer either to global or shared memory in code that is executed on the host, or to
host memory in code that is executed on the device results in an undefined behavior, most often in a
segmentation fault and application termination.
The address obtained by taking the address of a __device__, __shared__ or __constant__ variable
can only be used in device code. The address of a __device__ or __constant__ variable obtained
through cudaGetSymbolAddress() as described in Device Memory can only be used in host code.
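An illustrative sketch of that host-side usage (the variable names are made up):
__device__ float d_data[256];

void host_side(void) {
    void *devPtr = nullptr;
    // The returned address is a device address: it may be passed to runtime API
    // calls or to kernels, but it must not be dereferenced on the host.
    cudaGetSymbolAddress(&devPtr, d_data);
    cudaMemset(devPtr, 0, sizeof(d_data));
}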
17.5.5. Operators
17.5.5.1 Assignment Operator
__constant__ variables can only be assigned from the host code through runtime functions (Device
Memory); they cannot be assigned from the device code.
__shared__ variables cannot have an initialization as part of their declaration.
It is not allowed to assign values to any of the built-in variables defined in Built-in Variables.
It is not allowed to take the address of any of the built-in variables defined in Built-in Variables.
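As noted above, a __constant__ variable is assigned from host code through the runtime; a brief
illustrative sketch (the names are made up):
__constant__ float coeffs[4];

void set_coeffs(void) {
    const float h_coeffs[4] = {1.0f, 0.5f, 0.25f, 0.125f};
    // Assign the __constant__ variable from host code through the runtime.
    cudaMemcpyToSymbol(coeffs, h_coeffs, sizeof(h_coeffs));
}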
namespace cuda{
namespace utils{
∕∕ Bad: function definition added to namespace nested within cuda
cudaStream_t make_stream(){
cudaStream_t s;
cudaStreamCreate(&s);
return s;
}
} ∕∕ namespace utils
} ∕∕ namespace cuda
namespace utils{
namespace cuda{
∕∕ Okay: namespace cuda may be used nested within a non-reserved namespace
cudaStream_t make_stream(){
cudaStream_t s;
cudaStreamCreate(&s);
return s;
}
} ∕∕ namespace cuda
} ∕∕ namespace utils
17.5.10. Functions
17.5.10.1 External Linkage
A call within some device code of a function declared with the extern qualifier is only allowed if the
function is defined within the same compilation unit as the device code, i.e., a single file or several files
linked together with relocatable device code and nvlink.
Let F denote a function that is either implicitly-declared or is explicitly-defaulted on its first declara-
tion. The execution space specifiers (__host__, __device__) for F are the union of the execution
space specifiers of all the functions that invoke it (note that a __global__ caller will be treated as a
__device__ caller for this analysis). For example:
class Base {
int x;
public:
__host__ __device__ Base(void) : x(10) {}
};
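The example is presumably followed by code that forces the execution space of an implicitly-declared
member to be inferred; a hedged sketch of the idea:
class Derived : public Base {
    int y;
};

__global__ void kern(void) {
    // Derived’s implicitly-declared default constructor invokes Base::Base().
    // Because it is needed here in device code, it is treated as a __device__
    // function for this program.
    Derived d;
}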
__global__ function parameters are passed to the device via constant memory and are limited to
32,764 bytes starting with Volta, and 4 KB on older architectures.
__global__ functions cannot have a variable number of arguments.
__global__ function parameters cannot be pass-by-reference.
In separate compilation mode, if a __device__ or __global__ function is ODR-used in a particu-
lar translation unit, then the parameter and return types of the function must be complete in that
translation unit.
Example:
∕∕first.cu:
struct S;
__device__ void foo(S); ∕∕ error: type 'S' is incomplete
__device__ auto *ptr = foo;
int main() { }
∕∕second.cu:
struct S { int x; };
__device__ void foo(S) { }
∕∕compiler invocation
$nvcc -std=c++14 -rdc=true first.cu second.cu -o first
nvlink error : Prototype doesn't match for '_Z3foo1S' in '∕tmp∕tmpxft_00005c8c_
,→00000000-18_second.o', first defined in '∕tmp∕tmpxft_00005c8c_00000000-18_second.o'
When a __global__ function is launched from device code, each argument must be trivially copyable
and trivially destructible.
When a __global__ function is launched from host code, each argument type is allowed to be non-
trivially copyable or non-trivially-destructible, but the processing for such types does not follow the
standard C++ model, as described below. User code must ensure that this workflow does not affect
program correctness. The workflow diverges from standard C++ in two areas:
1. Memcpy instead of copy constructor invocation
When lowering a __global__ function launch from host code, the compiler generates stub func-
tions that copy the parameters one or more times by value, before eventually using memcpy to
copy the arguments to the __global__ function’s parameter memory on the device. This oc-
curs even if an argument was non-trivially-copyable, and therefore may break programs where
the copy constructor has side effects.
Example:
#include <cassert>
struct S {
int x;
int *ptr;
__host__ __device__ S() { }
__host__ __device__ S(const S &) { ptr = &x; }
};

__global__ void foo(S in) {
    // this assert may fail, because the compiler-generated stub functions
    // copy the argument by value using memcpy, so "ptr" may no longer point
    // to the "x" field of the parameter received by the kernel.
    assert(in.ptr == &in.x);
}

int main() {
S tmp;
foo<<<1,1>>>(tmp);
cudaDeviceSynchronize();
}
Example:
#include <cassert>
int main() {
S1 V;
foo<<<1,1>>>(V);
cudaDeviceSynchronize();
}
argument has a non-trivial destructor, the destructor may execute in host code even before the
__global__ function has finished execution. This may break programs where the destructor
has side effects.
Example:
struct S {
int *ptr;
S() : ptr(nullptr) { }
S(const S &) { cudaMallocManaged(&ptr, sizeof(int)); }
~S() { cudaFree(ptr); }
};

__global__ void foo(S in) {
    // error: This store may write to memory that has already been
    //        freed (see below).
    *(in.ptr) = 4;
}

int main() {
    S V;
    /* The object 'V' is first copied by value to compiler-generated stub
     * functions on the host; the local copies are destroyed when the stubs
     * return, which may happen before the kernel has finished, so ~S() may
     * free the memory that "in.ptr" points to while foo is still running. */
foo<<<1,1>>>(V);
cudaDeviceSynchronize();
}
Developers must use the 12.1 Toolkit and r530 driver or higher to compile, launch, and debug kernels
that accept parameters larger than 4KB. If such kernels are launched on older drivers, CUDA will issue
the error CUDA_ERROR_NOT_SUPPORTED.
When linking device objects, if at least one device object contains a kernel with a parameter larger
than 4KB, the developer must recompile all objects from their respective device sources with the 12.1
toolkit or higher before linking them together. Failure to do so will result in a linker error.
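An illustrative sketch of a kernel parameter that exceeds the legacy 4 KB limit (the struct layout is
made up; compiling it requires the 12.1 toolkit or newer, and launching it on an older driver fails with
CUDA_ERROR_NOT_SUPPORTED as described above):
struct LargeParams {
    float data[2048];   // 8192 bytes, above the legacy 4 KB parameter limit
};

__global__ void kern(LargeParams p, float *out) {
    out[threadIdx.x] = p.data[threadIdx.x];
}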
Variable memory space specifiers are allowed in the declaration of a static variable V within the imme-
diate or nested block scope of a function F where:
▶ F is a __global__ or __device__-only function.
▶ F is a __host__ __device__ function and __CUDA_ARCH__ is defined22 .
If no explicit memory space specifier is present in the declaration of V, an implicit __device__ specifier
is assumed during device compilation.
V has the same initialization restrictions as a variable with the same memory space specifiers declared
in namespace scope for example a __device__ variable cannot have a ‘non-empty’ constructor (see
Device Memory Space Specifiers).
Examples of legal and illegal uses of function-scope static variables are shown below.
struct S1_t {
int x;
};
struct S2_t {
int x;
__device__ S2_t(void) { x = 10; }
};
struct S3_t {
int x;
__device__ S3_t(int p) : x(p) { }
};
int x = 33;
static int i6 = x; ∕∕ error: dynamic initialization is not allowed
static S1_t i7 = {x}; ∕∕ error: dynamic initialization is not allowed
22 The intent is to allow variable memory space specifiers for static variables in a __host__ __device__ function during
device compilation.
The address of a __global__ function taken in host code cannot be used in device code (e.g. to launch
the kernel). Similarly, the address of a __global__ function taken in device code cannot be used in
host code.
It is not allowed to take the address of a __device__ function in host code.
friend __global__
void foo3(void) { } ∕∕ error: definition in friend declaration
template<typename T>
friend __global__
void foo4(void) { } ∕∕ error: definition in friend declaration
};
17.5.11. Classes
17.5.11.1 Data Members
Static data members are not supported except for those that are also const-qualified (see Const-
qualified variables).
When a function in a derived class overrides a virtual function in a base class, the execution space
specifiers (i.e., __host__, __device__) on the overridden and overriding functions must match.
It is not allowed to pass as an argument to a __global__ function an object of a class with virtual
functions.
If an object is created in host code, invoking a virtual function for that object in device code has unde-
fined behavior.
If an object is created in device code, invoking a virtual function for that object in host code has unde-
fined behavior.
See Windows-Specific for additional constraints when using the Microsoft host compiler.
Example:
struct S1 { virtual __host__ __device__ void foo() { } };
int main(void) {
void *buf;
cudaMallocManaged(&buf, sizeof(S1), cudaMemAttachGlobal);
ptr1 = new (buf) S1();
kern<<<1,1>>>();
cudaDeviceSynchronize();
ptr2->foo(); ∕∕ error: virtual function call on an object
∕∕ created in device code.
}
It is not allowed to pass as an argument to a __global__ function an object of a class derived from
virtual base classes.
See Windows-Specific for additional constraints when using the Microsoft host compiler.
17.5.11.6 Windows-Specific
The CUDA compiler follows the IA64 ABI for class layout, while the Microsoft host compiler does not.
Let T denote a pointer to member type, or a class type that satisfies any of the following conditions:
▶ T has virtual functions.
▶ T has a virtual base class.
▶ T has multiple inheritance with more than one direct or indirect empty base class.
▶ All direct and indirect base classes B of T are empty and the type of the first field F of T uses B
in its definition, such that B is laid out at offset 0 in the definition of F.
Let C denote T or a class type that has T as a field type or as a base class type. The CUDA compiler
may compute the class layout and size differently than the Microsoft host compiler for the type C.
As long as the type C is used exclusively in host or device code, the program should work correctly.
Passing an object of type C between host and device code has undefined behavior, for example, as an
argument to a __global__ function or through cudaMemcpy*() calls.
Accessing an object of type C or any subobject in device code, or invoking a member function in device
code, has undefined behavior if the object is created in host code.
Accessing an object of type C or any subobject in host code, or invoking a member function in host
code, has undefined behavior if the object is created in device code23 .
17.5.12. Templates
A type or template cannot be used in the type, non-type or template template argument of a
__global__ function template instantiation or a __device__∕__constant__ variable instantiation
if either:
▶ The type or template is defined within a __host__ or __host__ __device__.
▶ The type or template is a class member with private or protected access and its parent class
is not defined within a __device__ or __global__ function.
▶ The type is unnamed.
▶ The type is compounded from any of the types above.
23 One way to debug suspected layout mismatch of a type C is to use printf to output the values of sizeof(C) and
offsetof(C, field) in host and device code.
Example:
template <typename T>
__global__ void myKernel(void) { }
class myClass {
private:
struct inner_t { };
public:
static void launch(void)
{
∕∕ error: inner_t is used in template argument
∕∕ but it is private
myKernel<inner_t><<<1,1>>>();
}
};
∕∕ C++14 only
template <typename T> __device__ T d1;
void fn() {
struct S1_t { };
∕∕ error (C++14 only): S1_t is local to the function fn
d1<S1_t> = {};
auto lam1 = [] { };
∕∕ error (C++14 only): a closure type cannot be used for
∕∕ instantiating a variable template
d2<int, decltype(lam1)> = 10;
}
If these attributes are used in host code when __CUDA_ARCH__ is undefined, then they will be present
in the code parsed by the host compiler, which may generate a warning if the attributes are not sup-
ported. For example, clang11 host compiler will generate an ‘unknown attribute’ warning.
return sum;
}
The execution space specifiers for all member functions26 of the closure class associated with a
lambda expression are derived by the compiler as follows. As described in the C++11 standard, the
compiler creates a closure type in the smallest block scope, class scope or namespace scope that con-
tains the lambda expression. The innermost function scope enclosing the closure type is computed,
and the corresponding function’s execution space specifiers are assigned to the closure class member
functions. If there is no enclosing function scope, the execution space specifier is __host__.
Examples of lambda expressions and computed execution space specifiers are shown below (in com-
ments).
auto globalVar = [] { return 0; }; ∕∕ __host__
void f1(void) {
auto l1 = [] { return 1; }; ∕∕ __host__
}
The closure type of a lambda expression cannot be used in the type or non-type argument of a
__global__ function template instantiation, unless the lambda is defined within a __device__ or
__global__ function.
Example:
template <typename T>
__global__ void foo(T in) { };
void bar(void) {
auto temp1 = [] { };
17.5.21.2 std::initializer_list
By default, the CUDA compiler will implicitly consider the member functions of
std::initializer_list to have __host__ __device__ execution space speci-
fiers, and therefore they can be invoked directly from device code. The nvcc flag
--no-host-device-initializer-list will disable this behavior; member functions of
std::initializer_list will then be considered as __host__ functions and will not be directly
invokable from device code.
Example:
#include <initializer_list>
int i = 4;
foo({i,5,6}); ∕∕ (b) initializer list with at least one
∕∕ non-constant element.
∕∕ This form may have better performance than (a).
}
By default, the CUDA compiler will implicitly consider std::move and std::forward function tem-
plates to have __host__ __device__ execution space specifiers, and therefore they can be invoked
directly from device code. The nvcc flag --no-host-device-move-forward will disable this behav-
ior; std::move and std::forward will then be considered as __host__ functions and will not be
directly invokable from device code.
By default, a constexpr function cannot be called from a function with incompatible execution space27 .
The experimental nvcc flag --expt-relaxed-constexpr removes this restriction28 . When this flag
is specified, host code can invoke a __device__ constexpr function and device code can invoke a
__host__ constexpr function. nvcc will define the macro __CUDACC_RELAXED_CONSTEXPR__ when
--expt-relaxed-constexpr has been specified. Note that a function template instantiation may
not be a constexpr function even if the corresponding template is marked with the keyword const-
expr (C++11 Standard Section [dcl.constexpr.p6]).
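A minimal sketch, assuming compilation with --expt-relaxed-constexpr (the function names are
illustrative):
// Host constexpr function: no execution space annotations.
constexpr int square(int x) { return x * x; }

__global__ void kern(int *out) {
    // Allowed only when --expt-relaxed-constexpr is specified; otherwise this is a
    // cross-execution-space call and is diagnosed.
    *out = square(5);
}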
Let ‘V’ denote a namespace scope variable or a class static member variable that has been marked
constexpr and that does not have execution space annotations (e.g., __device__, __constant__,
__shared__). V is considered to be a host code variable.
If V is of scalar type29 other than long double and the type is not volatile-qualified, the value of V
can be directly used in device code. In addition, if V is of a non-scalar type then scalar elements of
V can be used inside a constexpr __device__ or __host__ __device__ function, if the call to the
function is a constant expression30 . Device source code cannot contain a reference to V or take the
address of V.
Example:
constexpr int xxx = 10;
constexpr int yyy = xxx + 4;
struct S1_t { static constexpr int qqq = 100; };
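The example presumably continues with device code that uses these values; an illustrative sketch:
__device__ int use_values(void) {
    int v = xxx + yyy + S1_t::qqq;   // OK: only the values of the host constexpr
                                     //     variables are used
    // const int *p = &xxx;          // error: device code cannot take the address of
    //                               //        (or reference) the host variable xxx
    return v;
}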
For an input CUDA translation unit, the CUDA compiler may invoke the host compiler for compiling the
host code within the translation unit. In the code passed to the host compiler, the CUDA compiler will
inject additional compiler generated code, if the input CUDA translation unit contained a definition of
any of the following entities:
▶ __global__ function or function template instantiation
▶ __device__, __constant__
▶ variables with surface or texture type
The compiler generated code contains a reference to the defined entity. If the entity is defined within
an inline namespace and another entity of the same name and type signature is defined in an enclosing
namespace, this reference may be considered ambiguous by the host compiler and host compilation
will fail.
This limitation can be avoided by using unique names for such entities defined within an inline names-
pace.
Example:
__device__ int Gvar;
inline namespace N1 {
__device__ int Gvar;
}
Example:
inline namespace N1 {
namespace N2 {
__device__ int Gvar;
}
}
namespace N2 {
__device__ int Gvar;
}
The following entities cannot be declared in namespace scope within an inline unnamed namespace:
▶ __managed__, __device__, __shared__ and __constant__ variables
▶ __global__ function and function templates
▶ variables with surface or texture type
Example:
inline namespace {
namespace N2 {
template <typename T>
__global__ void foo(void); ∕∕ error
template <>
__global__ void foo<int>(void) { } ∕∕ error
17.5.21.7 thread_local
If the closure type associated with a lambda expression is used in a template argument of a
__global__ function template instantiation, the lambda expression must either be defined in the
immediate or nested block scope of a __device__ or __global__ function, or must be an extended
lambda.
Example:
template <typename T>
__global__ void kernel(T in) { }
kernel<<<1,1>>>( [] __device__ { } );
kernel<<<1,1>>>( [] __host__ __device__ { } );
kernel<<<1,1>>>( [] { } );
}
auto lam1 = [] { };
void foo_host(void)
{
∕∕ OK: instantiated with closure type of an extended __device__ lambda
kernel<<<1,1>>>( [] __device__ { } );
▶ The pack parameter must be listed last in the template parameter list.
Example:
∕∕ ok
template <template <typename...> class Wrapper, typename... Pack>
__global__ void foo1(Wrapper<Pack...>);
__managed__ and __shared__ variables cannot be marked with the keyword constexpr.
Execution space specifiers on a function that is explicitly-defaulted on its first declaration are ignored
by the CUDA compiler. Instead, the CUDA compiler will infer the execution space specifiers as de-
scribed in Implicitly-declared and explicitly-defaulted functions.
Execution space specifiers are not ignored if the function is explicitly-defaulted, but not on its first
declaration.
Example:
struct S1 {
∕∕ warning: __host__ annotation is ignored on a function that
∕∕ is explicitly-defaulted on its first declaration
__host__ S1() = default;
};
struct S2 {
__host__ S2();
};
void host_fn1() {
∕∕ error: referenced outside device function bodies
int (*p1)(int) = fn1;
struct S_local_t {
∕∕ error: referenced outside device function bodies
decltype(fn2(10)) m1;
S_local_t() : m1(10) { }
};
}
31 At present, the -std=c++14 flag is supported only for the following host compilers: gcc version >= 5.1, clang version >=
8.0, Visual Studio version >= 2017, pgi compiler version >= 19.0, icc compiler version >= 19.0.
A __device__∕__constant__ variable template cannot have a const qualified type when using the
Microsoft host compiler.
Examples:
∕∕ error: a __device__ variable template cannot
∕∕ have a const qualified type on Windows
template <typename T>
__device__ const T d1(2);
∕∕ OK
template <typename T>
__device__ const T *d3;
▶ When using g++ host compiler, an inline variable declared with __managed__ memory space
specifier may not be visible to the debugger.
Modules are not supported in CUDA C++, in either host or device code. Uses of the module, export
and import keywords are diagnosed as errors.
33 At present, the -std=c++20 flag is supported only for the following host compilers : gcc version >= 10.0, clang version >=
10.0, Visual Studio Version >= 2022 and nvc++ version >= 20.7.
Coroutines are not supported in device code. Uses of the co_await, co_yield and co_return key-
words in the scope of a device function are diagnosed as error during device compilation.
The three-way comparison operator is supported in both host and device code, but some uses implicitly
rely on functionality from the Standard Template Library provided by the host implementation. Uses
of those operators may require specifying the flag --expt-relaxed-constexpr to silence warnings
and the functionality requires that the host implementation satisfies the requirements of device code.
Example:
#include<compare>
struct S {
int x, y, z;
auto operator<=>(const S& rhs) const = default;
__host__ __device__ bool operator<=>(int rhs) const { return false; }
};
__host__ __device__ bool f(S a, S b) {
if (a <=> 1) ∕∕ ok, calls a user-defined host-device overload
return true;
return a < b; ∕∕ call to an implicitly-declared function and requires
∕∕ a device-compatible std::strong_ordering implementation
}
Ordinarily, cross execution space calls are not allowed, and cause a compiler diagnostic (warning or
error). This restriction does not apply when the called function is declared with the consteval spec-
ifier. Thus, a __device__ or __global__ function can call a __host__ consteval function, and a
__host__ function can call a __device__ consteval function.
Example:
namespace N1 {
∕∕consteval host function
consteval int hcallee() { return 10; }
namespace N2 {
∕∕consteval device function
consteval __device__ int dcallee() { return 10; }
Instances of nvstd::function in host code cannot be initialized with the address of a __de-
vice__ function or with a functor whose operator() is a __device__ function. Instances of
nvstd::function in device code cannot be initialized with the address of a __host__ function or
with a functor whose operator() is a __host__ function.
nvstd::function instances cannot be passed from host code to device code (and vice versa) at
run time. nvstd::function cannot be used in the parameter type of a __global__ function, if the
__global__ function is launched from host code.
Example:
#include <nvfunctional>
void foo(void) {
∕∕ error: initialized with address of __device__ function
nvstd::function<int()> fn1 = foo_d;
template<class _F>
__device__ __host__ function(_F);
∕∕ destructor
__device__ __host__ ~function();
∕∕ assignment operators
__device__ __host__ function& operator=(const function&);
__device__ __host__ function& operator=(function&&);
__device__ __host__ function& operator=(nullptr_t);
__device__ __host__ function& operator=(_F&&);
∕∕ swap
__device__ __host__ void swap(function&) noexcept;
∕∕ function invocation
__device__ _RetType operator()(_ArgTypes...) const;
};
∕∕ specialized algorithms
template <class _R, class... _ArgTypes>
__device__ __host__
void swap(function<_R(_ArgTypes...)>&, function<_R(_ArgTypes...)>&);
}
The execution space annotations are applied to all methods of the closure class associated with the
lambda.
Example:
void foo_host(void) {
∕∕ not an extended lambda: no explicit execution space annotations
auto lam1 = [] { };
∕∕ lam1 and lam2 are not extended lambdas because they are not defined
∕∕ within a __host__ or __host__ __device__ function.
auto lam1 = [] { };
auto lam2 = [] __host__ __device__ { };
void foo(void) {
auto lam1 = [] { };
auto lam2 = [] __device__ { };
auto lam3 = [] __host__ __device__ { };
auto lam4 = [] __device__ () -> double { return 3.14; };
auto lam5 = [] __device__ (int x) -> decltype(&x) { return 0; };
auto lam2 = [] {
auto lam3 = [] {
∕∕ enclosing function for lam4 is "foo"
auto lam4 = [] __host__ __device__ { };
};
};
}
auto lam6 = [] {
∕∕ enclosing function for lam7 does not exist
auto lam7 = [] __host__ __device__ { };
};
3. If an extended lambda is defined within the immediate or nested block scope of one or more
nested lambda expression, the outermost such lambda expression must be defined inside the
immediate or nested block scope of a function.
Example:
auto lam1 = [] {
∕∕ error: outer enclosing lambda is not defined within a
∕∕ non-lambda-operator() function.
auto lam2 = [] __host__ __device__ { };
};
4. The enclosing function for the extended lambda must be named and its address can be taken. If
the enclosing function is a class member, then the following conditions must be satisfied:
▶ All classes enclosing the member function must have a name.
▶ The member function must not have private or protected access within its parent class.
▶ All enclosing classes must not have private or protected access within their respective par-
ent classes.
Example:
void foo(void) {
∕∕ OK
auto lam1 = [] __device__ { return 0; };
{
∕∕ OK
auto lam2 = [] __device__ { return 0; };
∕∕ OK
auto lam3 = [] __device__ __host__ { return 0; };
}
}
struct S1_t {
S1_t(void) {
∕∕ Error: cannot take address of enclosing function
auto lam4 = [] __device__ { return 0; };
}
};
class C0_t {
void foo(void) {
// Error: enclosing function C0_t::foo has private access within its parent class
auto temp1 = [] __device__ { return 10; };
}
};
5. It must be possible to take the address of the enclosing routine unambiguously, at the point
where the extended lambda has been defined. This may not be feasible in some cases e.g. when
a class typedef shadows a template type argument of the same name.
Example:
template <typename> struct A {
typedef void Bar;
void test();
};
int main() {
A<int> xxx;
xxx.test();
}
7. The enclosing function for an extended lambda cannot have deduced return type.
Example:
auto foo(void) {
∕∕ Error: the return type of foo is deduced.
auto lam1 = [] __host__ __device__ { return 0; };
}
int main() {
foo<char, int, float> f1;
foo<char, int> f2;
bar1(f1, f2);
bar2(f1, 10);
bar3<int, 10>();
}
Example:
template <typename T>
__global__ void kern(T in) { in(); }
10. With Visual Studio host compilers, the enclosing function must have external linkage. The restric-
tion is present because this host compiler does not support using the address of non-extern link-
age functions as template arguments, which is needed by the CUDA compiler transformations
to support extended lambdas.
11. With Visual Studio host compilers, an extended lambda shall not be defined within the body of
an ‘if-constexpr’ block.
12. An extended lambda has the following restrictions on captured variables:
▶ In the code sent to the host compiler, the variable may be passed by value to a sequence
of helper functions before being used to direct-initialize the field of the class type used to
int a = 1;
∕∕ Error: an extended __device__ lambda cannot capture
∕∕ variables by reference.
auto lam3 = [&a] __device__ () { return a; };
struct S1_t { };
S1_t s1;
∕∕ Error: a type local to a function cannot be used in the type
∕∕ of a captured variable.
auto lam4 = [s1] __device__ () { };
36 In contrast, the C++ standard specifies that the captured variable is used to direct-initialize the field of the closure type.
std::initializer_list<int> b = {11,22,33};
∕∕ Error: an init-capture cannot be of type std::initializer_list.
auto lam8 = [x = b] __device__ () { };
13. When parsing a function, the CUDA compiler assigns a counter value to each extended lambda
within that function. This counter value is used in the substituted named type passed to the
host compiler. Hence, whether or not an extended lambda is defined within a function should
not depend on a particular value of __CUDA_ARCH__, or on __CUDA_ARCH__ being undefined.
Example
template <typename T>
__global__ void kernel(T in) { in(); }
14. As described above, the CUDA compiler replaces a __device__ extended lambda defined
in a host function with a placeholder type defined in namespace scope. Unless the trait
__nv_is_extended_device_lambda_with_preserved_return_type() returns true for the
closure type of the extended lambda, the placeholder type does not define a operator() func-
tion equivalent to the original lambda declaration. An attempt to determine the return type or
parameter types of the operator() function of such a lambda may therefore work incorrectly
in host code, as the code processed by the host compiler will be semantically different than the
input code processed by the CUDA compiler. However, it is OK to introspect the return type or
parameter types of the operator() function within device code. Note that this restriction does
not apply to __host__ __device__ extended lambdas, or to __device__ extended lambdas
for which the trait __nv_is_extended_device_lambda_with_preserved_return_type()
returns true.
Example
#include <type_traits>
const char& getRef(const char* p) { return *p; }
void foo(void) {
auto lam1 = [] __device__ { return "10"; };
15. For an extended device lambda:
▶ Introspecting the parameter types of operator() is only supported in device code.
▶ Introspecting the return type of operator() is supported only in device code, unless the trait
function __nv_is_extended_device_lambda_with_preserved_return_type() returns true.
16. If the functor object represented by an extended lambda is passed from host to device code
(e.g., as the argument of a __global__ function), then any expression in the body of the
lambda expression that captures variables must remain unchanged irrespective of whether
the __CUDA_ARCH__ macro is defined, and whether the macro has a particular value. This re-
striction arises because the lambda’s closure class layout depends on the order in which captured
variables are encountered when the compiler processes the lambda expression; the program may
execute incorrectly if the closure class layout differs in device and host compilation.
Example
__device__ int result;
void foo(void) {
int x1 = 1;
auto lam1 = [=] __host__ __device__ {
∕∕ Error: "x1" is only captured when __CUDA_ARCH__ is defined.
#ifdef __CUDA_ARCH__
return x1 + 1;
#else
return 10;
#endif
};
kernel<<<1,1>>>(lam1);
}
17. As described previously, the CUDA compiler replaces an extended __device__ lambda expres-
sion with an instance of a placeholder type in the code sent to the host compiler. This placeholder
type does not define a pointer-to-function conversion operator in host code, however the con-
version operator is provided in device code. Note that this restriction does not apply to __host__
__device__ extended lambdas.
Example
template <typename T>
__global__ void kern(T in) {
int (*fp)(double) = in;
void foo(void) {
auto lam_d = [] __device__ (double) { return 1; };
auto lam_hd = [] __host__ __device__ (double) { return 1; };
kern<<<1,1>>>(lam_d);
kern<<<1,1>>>(lam_hd);
18. As described previously, the CUDA compiler replaces an extended __device__ or __host__
__device__ lambda expression with an instance of a placeholder type in the code sent to
the host compiler. This placeholder type may define C++ special member functions (e.g.
constructor, destructor). As a result, some standard C++ type traits may return different
results for the closure type of the extended lambda, in the CUDA frontend compiler versus the
host compiler. The following type traits are affected: std::is_trivially_copyable,
std::is_trivially_constructible, std::is_trivially_copy_constructible,
std::is_trivially_move_constructible, std::is_trivially_destructible.
Care must be taken that the results of these type traits are not used in __global__ function
template instantiation or in __device__ ∕ __constant__ ∕ __managed__ variable template
instantiation.
Example
template <bool b>
void __global__ foo() { printf("hi"); }
∕∕ ERROR: this kernel launch may fail, because CUDA frontend compiler
∕∕ and host compiler may disagree on the result of
∕∕ std::is_trivially_copyable() trait on the closure type of the
∕∕ extended lambda
foo<std::is_trivially_copyable<T>::value><<<1,1>>>();
cudaDeviceSynchronize();
}
int main() {
int x = 0;
auto lam1 = [=] __host__ __device__ () { return x; };
dolaunch<decltype(lam1)>();
}
The CUDA compiler will generate compiler diagnostics for a subset of cases described in 1-12; no
diagnostic will be generated for cases 13-17, but the host compiler may fail to compile the generated
code.
In the case of an extended __host__ __device__ lambda, the host compiler encounters an indirect
function call and may not be able to easily inline the original __host__ __device__ lambda body.
struct S1_t {
int xxx;
__host__ __device__ S1_t(void) : xxx(10) { };
void doit(void) {
};
int main(void) {
S1_t s1;
s1.doit();
}
C++17 solves this problem by adding a new “*this” capture mode. In this mode, the compiler makes a
copy of the object denoted by “*this” instead of capturing the pointer this by value. The “*this” cap-
ture mode is described in more detail here: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0018r3.html.
The CUDA compiler supports the “*this” capture mode for lambdas defined within __device__
and __global__ functions and for extended __device__ lambdas defined in host code, when the
--extended-lambda nvcc flag is used.
Here’s the above example modified to use “*this” capture mode:
#include <cstdio>
struct S1_t {
int xxx;
__host__ __device__ S1_t(void) : xxx(10) { };
void doit(void) {
};
int main(void) {
S1_t s1;
s1.doit();
}
“*this” capture mode is not allowed for unannotated lambdas defined in host code, or for extended
__host__ __device__ lambdas. Examples of supported and unsupported usage:
struct S1_t {
    int xxx;
    __host__ __device__ S1_t(void) : xxx(10) { };

    void host_func(void) {
        auto lam1 = [=, *this] __device__ { return xxx; };          // OK: extended __device__ lambda
        auto lam2 = [=, *this] __host__ __device__ { return xxx; }; // Error: extended __host__ __device__ lambda
        auto lam3 = [=, *this] { return xxx; };                     // Error: unannotated lambda in host code
    }
};
namespace N2 {
template <typename T> int foo(T);
In the example above, the CUDA compiler replaced the extended lambda with a placeholder type
that involves the N1 namespace. As a result, the namespace N1 participates in the ADL lookup
for foo(in) in the body of N2::doit, and host compilation fails because multiple overload can-
didates N1::foo and N2::foo are found.
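For illustration, a minimal sketch consistent with this description (the exact declarations of N1::S1_t and the enclosing function bar are assumptions, not the guide's original listing):

namespace N1 {
    struct S1_t { };
    template <typename T> void foo(T);
}

namespace N2 {
    template <typename T> int foo(T);

    template <typename T> void doit(T in) {
        foo(in);   // ADL also finds N1::foo, because the lambda's placeholder type involves N1
    }
}

void bar(N1::S1_t in) {
    // The extended __device__ lambda is replaced by a placeholder type associated with N1
    auto lam1 = [=] __device__ { };
    N2::doit(lam1);   // host compilation fails: N1::foo and N2::foo are both candidates
}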
private:
unsigned char r_, g_, b_, a_;
__device__
PixelRGBA operator+(const PixelRGBA& p1, const PixelRGBA& p2)
{
return PixelRGBA(p1.r_ + p2.r_, p1.g_ + p2.g_,
p1.b_ + p2.b_, p1.a_ + p2.a_);
}
int main()
{
...
useValues<int><<<blocks, threads>>>(buffer);
...
}
class Sub {
public:
__device__ float operator() (float a, float b) const
{
return a - b;
}
};
∕∕ Device code
template<class O> __global__
void VectorOperation(const float * A, const float * B, float * C,
unsigned int N, O op)
{
unsigned int iElement = blockDim.x * blockIdx.x + threadIdx.x;
if (iElement < N)
C[iElement] = op(A[iElement], B[iElement]);
}
∕∕ Host code
int main()
{
...
VectorOperation<<<blocks, threads>>>(v1, v2, v3, N, Add());
...
}
This section gives the formula used to compute the value returned by the texture functions of Tex-
ture Functions depending on the various attributes of the texture reference (see Texture and Surface
Memory).
The texture bound to the texture reference is represented as an array T of
▶ N texels for a one-dimensional texture,
▶ N x M texels for a two-dimensional texture,
▶ N x M x L texels for a three-dimensional texture.
It is fetched using non-normalized texture coordinates x, y, and z, or the normalized texture coordinates
x/N, y/M, and z/L as described in Texture Memory. In this section, the coordinates are assumed to be
in the valid range. Texture Memory explained how out-of-range coordinates are remapped to the valid
range based on the addressing mode.
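For orientation, a brief sketch of the one-dimensional cases, using floor and fractional-part notation (the weights and higher-dimensional forms follow the same pattern):

\[
\text{nearest-point sampling: } tex(x) = T[i], \qquad i = \lfloor x \rfloor
\]
\[
\text{linear filtering: } tex(x) = (1-\alpha)\,T[i] + \alpha\,T[i+1], \qquad x_B = x - 0.5,\; i = \lfloor x_B \rfloor,\; \alpha = \operatorname{frac}(x_B)
\]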
The general specifications and features of a compute device depend on its compute capability (see
Compute Capability).
Table 20 and Table 21 show the features and technical specifications associated with each compute
capability that is currently supported.
Floating-Point Standard reviews the compliance with the IEEE floating-point standard.
Sections Compute Capability 5.x, Compute Capability 6.x, Compute Capability 7.x, Compute Capability
8.x and Compute Capability 9.0 give more details on the architecture of devices of compute capabilities
5.x, 6.x, 7.x, 8.x and 9.0 respectively.
Table 20 (feature support per compute capability, summarized): "Yes" means the feature is available on all compute capabilities covered by the table; "No, then Yes" means it is unavailable on the older compute capabilities and becomes available on the more recent ones.
▶ Atomic functions operating on 64-bit integer values in shared memory (Atomic Functions): Yes
▶ Atomic functions operating on 128-bit integer values in global memory (Atomic Functions): No, then Yes
▶ Atomic functions operating on 128-bit integer values in shared memory (Atomic Functions): No, then Yes
▶ Atomic addition operating on 32-bit floating point values in global and shared memory (atomicAdd()): Yes
▶ Atomic addition operating on 64-bit floating point values in global memory and shared memory (atomicAdd()): No, then Yes
▶ Atomic addition operating on float2 and float4 floating point vectors in global memory (atomicAdd()): No, then Yes
▶ Warp vote functions (Warp Vote Functions): Yes
▶ Memory fence functions (Memory Fence Functions): Yes
▶ Synchronization functions (Synchronization Functions): Yes
▶ Surface functions (Surface Functions): Yes
▶ Unified Memory Programming (Unified Memory Programming): Yes
▶ Dynamic Parallelism (CUDA Dynamic Parallelism): Yes
▶ Half-precision floating-point operations (addition, subtraction, multiplication, comparison, warp shuffle functions, conversion): No, then Yes
▶ Bfloat16-precision floating-point operations (addition, subtraction, multiplication, comparison, warp shuffle functions, conversion): No, then Yes
▶ Tensor Cores: No, then Yes
▶ Thread Block Cluster: No, then Yes
▶ Tensor Memory Accelerator (TMA) unit: No, then Yes
Note that the KB and K units used in the following table correspond to 1024 bytes (i.e., a KiB) and 1024
respectively.
19.4.1. Architecture
An SM consists of:
▶ 128 CUDA cores for arithmetic operations (see Arithmetic Instructions for throughputs of arith-
metic operations),
▶ 32 special function units for single-precision floating-point transcendental functions,
▶ 4 warp schedulers.
When an SM is given warps to execute, it first distributes them among the four schedulers. Then, at
every instruction issue time, each scheduler issues one instruction for one of its assigned warps that
is ready to execute, if any.
An SM has:
▶ a read-only constant cache that is shared by all functional units and speeds up reads from the
constant memory space, which resides in device memory,
▶ a unified L1/texture cache of 24 KB used to cache reads from global memory,
▶ 64 KB of shared memory for devices of compute capability 5.0 or 96 KB of shared memory for
devices of compute capability 5.2.
The unified L1/texture cache is also used by the texture unit that implements the various addressing
modes and data filtering mentioned in Texture and Surface Memory.
There is also an L2 cache shared by all SMs that is used to cache accesses to local or global mem-
ory, including temporary register spills. Applications may query the L2 cache size by checking the
l2CacheSize device property (see Device Enumeration).
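For example, a minimal sketch of querying this property with the runtime API (device 0 assumed for illustration):

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);                       // query device 0
    printf("L2 cache size: %d bytes\n", prop.l2CacheSize);   // size reported in bytes
    return 0;
}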
The cache behavior (e.g., whether reads are cached in both the unified L1/texture cache and L2 or in
L2 only) can be partially configured on a per-access basis using modifiers to the load instruction.
19.5.1. Architecture
An SM consists of:
▶ 64 (compute capability 6.0) or 128 (6.1 and 6.2) CUDA cores for arithmetic operations,
▶ 16 (6.0) or 32 (6.1 and 6.2) special function units for single-precision floating-point transcenden-
tal functions,
▶ 2 (6.0) or 4 (6.1 and 6.2) warp schedulers.
When an SM is given warps to execute, it first distributes them among its schedulers. Then, at every
instruction issue time, each scheduler issues one instruction for one of its assigned warps that is ready
to execute, if any.
An SM has:
▶ a read-only constant cache that is shared by all functional units and speeds up reads from the
constant memory space, which resides in device memory,
▶ a unified L1/texture cache for reads from global memory of size 24 KB (6.0 and 6.2) or 48 KB (6.1),
▶ a shared memory of size 64 KB (6.0 and 6.2) or 96 KB (6.1).
The unified L1/texture cache is also used by the texture unit that implements the various addressing
modes and data filtering mentioned in Texture and Surface Memory.
There is also an L2 cache shared by all SMs that is used to cache accesses to local or global mem-
ory, including temporary register spills. Applications may query the L2 cache size by checking the
l2CacheSize device property (see Device Enumeration).
The cache behavior (e.g., whether reads are cached in both the unified L1/texture cache and L2 or in
L2 only) can be partially configured on a per-access basis using modifiers to the load instruction.
Figure 35: Strided Shared Memory Accesses in 32-bit bank size mode.
Left: linear addressing with a stride of one 32-bit word (no bank conflict).
Middle: linear addressing with a stride of two 32-bit words (two-way bank conflict).
Right: linear addressing with a stride of three 32-bit words (no bank conflict).
19.6.1. Architecture
An SM consists of:
▶ 64 FP32 cores for single-precision arithmetic operations,
▶ 32 FP64 cores for double-precision arithmetic operations,39
▶ 64 INT32 cores for integer math,
▶ 8 mixed-precision Tensor Cores for deep learning matrix arithmetic
▶ 16 special function units for single-precision floating-point transcendental functions,
▶ 4 warp schedulers.
An SM statically distributes its warps among its schedulers. Then, at every instruction issue time, each
scheduler issues one instruction for one of its assigned warps that is ready to execute, if any.
An SM has:
▶ a read-only constant cache that is shared by all functional units and speeds up reads from the
constant memory space, which resides in device memory,
▶ a unified data cache and shared memory with a total size of 128 KB (Volta) or 96 KB (Turing).
Shared memory is partitioned out of unified data cache, and can be configured to various sizes (See
Shared Memory.) The remaining data cache serves as an L1 cache and is also used by the texture unit
that implements the various addressing and data filtering modes mentioned in Texture and Surface
Memory.
39 2 FP64 cores for double-precision arithmetic operations for devices of compute capabilities 7.5
These intrinsics are available on all architectures, not just Volta or Turing, and in most cases a single
code-base will suffice for all architectures. Note, however, that for Pascal and earlier architectures, all
threads in mask must execute the same warp intrinsic instruction in convergence, and the union of all
values in mask must be equal to the warp’s active mask. The following code pattern is valid on Volta,
but not on Pascal or earlier architectures.
if (tid % warpSize < 16) {
...
float swapped = __shfl_xor_sync(0xffffffff, val, 16);
...
} else {
...
float swapped = __shfl_xor_sync(0xffffffff, val, 16);
...
}
The replacement for __ballot(1) is __activemask(). Note that threads within a warp can diverge
even within a single code path. As a result, __activemask() and __ballot(1) may return only a
subset of the threads on the current code path. The following invalid code example sets bit i of
output to 1 when data[i] is greater than threshold. __activemask() is used in an attempt to
enable cases where dataLen is not a multiple of 32.
// Sets bit in output[] to 1 if the corresponding element in data[i]
// is greater than 'threshold', using 32 threads in a warp.
for (int i = warpLane; i < dataLen; i += warpSize) {
    unsigned active = __activemask();
    unsigned bitPack = __ballot_sync(active, data[i] > threshold);
    if (warpLane == 0) {
        output[i / 32] = bitPack;
    }
}
This code is invalid because CUDA does not guarantee that the warp will diverge ONLY at the loop
condition. When divergence happens for other reasons, conflicting results will be computed for the
same 32-bit output element by different subsets of threads in the warp. A correct code might use a
non-divergent loop condition together with __ballot_sync() to safely enumerate the set of threads
in the warp participating in the threshold calculation as follows.
for (int i = warpLane; i - warpLane < dataLen; i += warpSize) {
unsigned active = __ballot_sync(0xFFFFFFFF, i < dataLen);
if (i < dataLen) {
unsigned bitPack = __ballot_sync(active, data[i] > threshold);
if (warpLane == 0) {
output[i ∕ 32] = bitPack;
}
}
}
∕∕ Inter-warp reduction
for (int i = BLOCK_SIZE ∕ 2; i >= 32; i ∕= 2) {
if (tid < i) {
s_buff[tid] += s_buff[tid+i];
}
__syncthreads();
}
∕∕ Intra-warp reduction
∕∕ Butterfly reduction simplifies syncwarp mask
if (tid < 32) {
float temp;
temp = s_buff[tid ^ 16]; __syncwarp();
s_buff[tid] += temp; __syncwarp();
temp = s_buff[tid ^ 8]; __syncwarp();
s_buff[tid] += temp; __syncwarp();
temp = s_buff[tid ^ 4]; __syncwarp();
s_buff[tid] += temp; __syncwarp();
temp = s_buff[tid ^ 2]; __syncwarp();
s_buff[tid] += temp; __syncwarp();
}
if (tid == 0) {
*output = s_buff[0] + s_buff[1];
}
__syncthreads();
The built-in __syncthreads() and the PTX instruction bar.sync (and their derivatives) are enforced
per thread and thus will not succeed until reached by all non-exited threads in the block. Code
exploiting the previous behavior will likely deadlock and must be modified to ensure that all non-
exited threads reach the barrier.
The racecheck and synccheck tools provided by compute-sanitizer can help with locating violations.
To aid migration while implementing the above-mentioned corrective actions, developers can opt-in to
the Pascal scheduling model that does not support independent thread scheduling. See Application
Compatibility for details.
// Host code
int carveout = 50; // prefer shared memory capacity 50% of maximum
// Named Carveout Values:
// carveout = cudaSharedmemCarveoutDefault;   //  (-1)
// carveout = cudaSharedmemCarveoutMaxL1;     //   (0)
// carveout = cudaSharedmemCarveoutMaxShared; // (100)
cudaFuncSetAttribute(MyKernel, cudaFuncAttributePreferredSharedMemoryCarveout, carveout);
In addition to an integer percentage, several convenience enums are provided as listed in the code
comments above. Where a chosen integer percentage does not map exactly to a supported capacity
(SM 7.0 devices support shared capacities of 0, 8, 16, 32, 64, or 96 KB), the next larger capacity is used.
For instance, in the example above, 50% of the 96 KB maximum is 48 KB, which is not a supported
shared memory capacity. Thus, the preference is rounded up to 64 KB.
Compute capability 7.x devices allow a single thread block to address the full capacity of shared mem-
ory: 96 KB on Volta, 64 KB on Turing. Kernels relying on shared memory allocations over 48 KB per
block are architecture-specific; as such, they must use dynamic shared memory (rather than statically
sized arrays) and require an explicit opt-in using cudaFuncSetAttribute() as follows.
∕∕ Device code
__global__ void MyKernel(...)
{
extern __shared__ float buffer[];
...
}
∕∕ Host code
int maxbytes = 98304; ∕∕ 96 KB
cudaFuncSetAttribute(MyKernel, cudaFuncAttributeMaxDynamicSharedMemorySize, maxbytes);
MyKernel <<<gridDim, blockDim, maxbytes>>>(...);
Otherwise, shared memory behaves the same way as for devices of compute capability 5.x (See Shared
Memory).
19.7.1. Architecture
A Streaming Multiprocessor (SM) consists of:
▶ 64 FP32 cores for single-precision arithmetic operations in devices of compute capability 8.0 and
128 FP32 cores in devices of compute capability 8.6, 8.7 and 8.9,
▶ 32 FP64 cores for double-precision arithmetic operations in devices of compute capability 8.0
and 2 FP64 cores in devices of compute capability 8.6, 8.7 and 8.9
▶ 64 INT32 cores for integer math,
▶ 4 mixed-precision Third-Generation Tensor Cores supporting half-precision (fp16),
__nv_bfloat16, tf32, sub-byte and double precision (fp64) matrix arithmetic for compute
capabilities 8.0, 8.6 and 8.7 (see Warp matrix functions for details),
▶ 4 mixed-precision Fourth-Generation Tensor Cores supporting fp8, fp16, __nv_bfloat16,
tf32, sub-byte and fp64 for compute capability 8.9 (see Warp matrix functions for details),
▶ 16 special function units for single-precision floating-point transcendental functions,
▶ 4 warp schedulers.
An SM statically distributes its warps among its schedulers. Then, at every instruction issue time, each
scheduler issues one instruction for one of its assigned warps that is ready to execute, if any.
An SM has:
▶ a read-only constant cache that is shared by all functional units and speeds up reads from the
constant memory space, which resides in device memory,
▶ a unified data cache and shared memory with a total size of 192 KB for devices of compute ca-
pability 8.0 and 8.7 (1.5x Volta’s 128 KB capacity) and 128 KB for devices of compute capabilities
8.6 and 8.9.
Shared memory is partitioned out of the unified data cache, and can be configured to various sizes
(see Shared Memory section). The remaining data cache serves as an L1 cache and is also used by the
texture unit that implements the various addressing and data filtering modes mentioned in Texture
and Surface Memory.
The API can specify the carveout either as an integer percentage of the maximum supported shared
memory capacity (164 KB for devices of compute capability 8.0 and 8.7, and 100 KB for devices
of compute capabilities 8.6 and 8.9), or as one of the following values: cudaSharedmemCarveoutDefault,
cudaSharedmemCarveoutMaxL1, or cudaSharedmemCarveoutMaxShared.
When using a percentage, the carveout is rounded up to the nearest supported shared memory capac-
ity. For example, for devices of compute capability 8.0, 50% will map to a 100 KB carveout instead of
an 82 KB one. Setting the cudaFuncAttributePreferredSharedMemoryCarveout is considered a
hint by the driver; the driver may choose a different configuration, if needed.
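As a sketch of such a hint for a hypothetical kernel MyKernel on a compute capability 8.0 device (per the rounding rule above, 50% of 164 KB rounds up to a 100 KB carveout):

// Request that roughly half of the unified data cache be carved out as shared memory.
int carveout = 50;   // percentage of the maximum supported shared memory capacity
cudaFuncSetAttribute(MyKernel, cudaFuncAttributePreferredSharedMemoryCarveout, carveout);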
Devices of compute capability 8.0 and 8.7 allow a single thread block to address up to 163 KB of shared
memory, while devices of compute capabilities 8.6 and 8.9 allow up to 99 KB of shared memory. Ker-
nels relying on shared memory allocations over 48 KB per block are architecture-specific, and must
use dynamic shared memory rather than statically sized shared memory arrays. These kernels require
an explicit opt-in by using cudaFuncSetAttribute() to set the cudaFuncAttributeMaxDynamic-
SharedMemorySize; see Shared Memory for the Volta architecture.
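A sketch of this opt-in for a hypothetical kernel MyKernel on a device of compute capability 8.0 (163 KB = 166912 bytes):

// Device code
__global__ void MyKernel(...)
{
    extern __shared__ float buffer[];
    ...
}

// Host code
int maxbytes = 166912; // 163 KB
cudaFuncSetAttribute(MyKernel, cudaFuncAttributeMaxDynamicSharedMemorySize, maxbytes);
MyKernel<<<gridDim, blockDim, maxbytes>>>(...);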
Note that the maximum amount of shared memory per thread block is smaller than the maximum
shared memory partition available per SM. The 1 KB of shared memory not made available to a thread
block is reserved for system use.
19.8.1. Architecture
A Streaming Multiprocessor (SM) consists of:
▶ 128 FP32 cores for single-precision arithmetic operations,
▶ 64 FP64 cores for double-precision arithmetic operations,
▶ 64 INT32 cores for integer math,
▶ 4 mixed-precision fourth-generation Tensor Cores supporting the new FP8 input type in either
E4M3 or E5M2 for exponent (E) and mantissa (M), half-precision (fp16), __nv_bfloat16, tf32,
INT8 and double precision (fp64) matrix arithmetic (see Warp Matrix Functions for details) with
sparsity support,
▶ 16 special function units for single-precision floating-point transcendental functions,
▶ 4 warp schedulers.
An SM statically distributes its warps among its schedulers. Then, at every instruction issue time, each
scheduler issues one instruction for one of its assigned warps that is ready to execute, if any.
An SM has:
▶ a read-only constant cache that is shared by all functional units and speeds up reads from the
constant memory space, which resides in device memory,
▶ a unified data cache and shared memory with a total size of 256 KB for devices of compute
capability 9.0 (1.33x NVIDIA Ampere GPU Architecture’s 192 KB capacity).
Shared memory is partitioned out of the unified data cache, and can be configured to various sizes
(see Shared Memory section). The remaining data cache serves as an L1 cache and is also used by the
texture unit that implements the various addressing and data filtering modes mentioned in Texture
and Surface Memory.
The driver API must be initialized with cuInit() before any function from the driver API is called. A
CUDA context must then be created that is attached to a specific device and made current to the
calling host thread as detailed in Context.
Within a CUDA context, kernels are explicitly loaded as PTX or binary objects by the host code as
described in Module. Kernels written in C++ must therefore be compiled separately into PTX or binary
objects. Kernels are launched using API entry points as described in Kernel Execution.
Any application that wants to run on future device architectures must load PTX, not binary code. This
is because binary code is architecture-specific and therefore incompatible with future architectures,
whereas PTX code is compiled to binary code at load time by the device driver.
Here is the host code of the sample from Kernels written using the driver API:
int main()
{
    int N = ...;
    size_t size = N * sizeof(float);

    // Allocate and initialize input vectors h_A and h_B in host memory
    float* h_A = (float*)malloc(size);
    float* h_B = (float*)malloc(size);
    ...

    // Initialize
    cuInit(0);

    // Get handle for device 0
    CUdevice cuDevice;
    cuDeviceGet(&cuDevice, 0);

    // Create context
    CUcontext cuContext;
    cuCtxCreate(&cuContext, 0, cuDevice);

    // Create module from binary file and get the kernel handle
    CUmodule cuModule;
    cuModuleLoad(&cuModule, "VecAdd.ptx");
    CUfunction vecAdd;
    cuModuleGetFunction(&vecAdd, cuModule, "VecAdd");

    // Allocate vectors in device memory and copy the inputs from host memory
    CUdeviceptr d_A, d_B, d_C;
    cuMemAlloc(&d_A, size);
    cuMemAlloc(&d_B, size);
    cuMemAlloc(&d_C, size);
    cuMemcpyHtoD(d_A, h_A, size);
    cuMemcpyHtoD(d_B, h_B, size);

    // Invoke kernel
    int threadsPerBlock = 256;
    int blocksPerGrid = (N + threadsPerBlock - 1) / threadsPerBlock;
    void* args[] = { &d_A, &d_B, &d_C, &N };
    cuLaunchKernel(vecAdd,
                   blocksPerGrid, 1, 1, threadsPerBlock, 1, 1,
                   0, 0, args, 0);
    ...
}
20.1. Context
A CUDA context is analogous to a CPU process. All resources and actions performed within the driver
API are encapsulated inside a CUDA context, and the system automatically cleans up these resources
when the context is destroyed. Besides objects such as modules and texture or surface references,
each context has its own distinct address space. As a result, CUdeviceptr values from different
contexts reference different memory locations.
A host thread may have only one device context current at a time. When a context is created with
cuCtxCreate(), it is made current to the calling host thread. CUDA functions that operate in a
context (most functions that do not involve device enumeration or context management) will return
CUDA_ERROR_INVALID_CONTEXT if a valid context is not current to the thread.
Each host thread has a stack of current contexts. cuCtxCreate() pushes the new context onto the
top of the stack. cuCtxPopCurrent() may be called to detach the context from the host thread.
The context is then “floating” and may be pushed as the current context for any host thread. cuCtx-
PopCurrent() also restores the previous current context, if any.
A usage count is also maintained for each context. cuCtxCreate() creates a context with a usage
count of 1. cuCtxAttach() increments the usage count and cuCtxDetach() decrements it. A con-
text is destroyed when the usage count goes to 0 when calling cuCtxDetach() or cuCtxDestroy().
The driver API is interoperable with the runtime and it is possible to access the primary context (see
Initialization) managed by the runtime from the driver API via cuDevicePrimaryCtxRetain().
Usage count facilitates interoperability between third party authored code operating in the same con-
text. For example, if three libraries are loaded to use the same context, each library would call cuC-
txAttach() to increment the usage count and cuCtxDetach() to decrement the usage count when
the library is done using the context. For most libraries, it is expected that the application will have
created a context before loading or initializing the library; that way, the application can create the con-
text using its own heuristics, and the library simply operates on the context handed to it. Libraries that
wish to create their own contexts - unbeknownst to their API clients who may or may not have created
contexts of their own - would use cuCtxPushCurrent() and cuCtxPopCurrent() as illustrated in
the following figure.
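In code, the pattern looks roughly like this (a minimal sketch; libContext and the work performed inside it are placeholders):

void libraryCall(CUcontext libContext)
{
    cuCtxPushCurrent(libContext);   // make the library's own context current
    // ... perform driver API work within libContext ...
    CUcontext popped;
    cuCtxPopCurrent(&popped);       // restore the caller's previous context, if any
}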
20.2. Module
Modules are dynamically loadable packages of device code and data, akin to DLLs in Windows, that
are output by nvcc (see Compilation with NVCC). The names for all symbols, including functions, global
variables, and texture or surface references, are maintained at module scope so that modules written
by independent third parties may interoperate in the same CUDA context.
This code sample loads a module and retrieves a handle to some kernel:
CUmodule cuModule;
cuModuleLoad(&cuModule, "myModule.ptx");
CUfunction myKernel;
cuModuleGetFunction(&myKernel, cuModule, "MyKernel");
This code sample compiles and loads a new module from PTX code and parses compilation errors:
#define BUFFER_SIZE 8192
CUmodule cuModule;
CUjit_option options[3];
void* values[3];
char* PTXCode = "some PTX code";
char error_log[BUFFER_SIZE];
int err;
options[0] = CU_JIT_ERROR_LOG_BUFFER;
values[0] = (void*)error_log;
options[1] = CU_JIT_ERROR_LOG_BUFFER_SIZE_BYTES;
values[1] = (void*)BUFFER_SIZE;
options[2] = CU_JIT_TARGET_FROM_CUCONTEXT;
values[2] = 0;
err = cuModuleLoadDataEx(&cuModule, PTXCode, 3, options, values);
if (err != CUDA_SUCCESS)
printf("Link error:\n%s\n", error_log);
This code sample compiles, links, and loads a new module from multiple PTX codes and parses link and
compilation errors:
#define BUFFER_SIZE 8192
CUmodule cuModule;
The only exception is when the host compiler aligns double and long long (and long on a 64-bit
system) on a one-word boundary instead of a two-word boundary (for example, using gcc's compilation
flag -mno-align-double), since in device code these types are always aligned on a two-word boundary.
CUdeviceptr is an integer, but represents a pointer, so its alignment requirement is
__alignof(void*).
The following code sample uses a macro (ALIGN_UP()) to adjust the offset of each parameter to meet
its alignment requirement and another macro (ADD_TO_PARAM_BUFFER()) to add each parameter to
the parameter buffer passed to the CU_LAUNCH_PARAM_BUFFER_POINTER option.
#define ALIGN_UP(offset, alignment) \
    (offset) = ((offset) + (alignment) - 1) & ~((alignment) - 1)

// Aligns the current buffer offset and appends "value" to the parameter buffer
#define ADD_TO_PARAM_BUFFER(value, alignment)                   \
    do {                                                        \
        ALIGN_UP(paramBufferSize, alignment);                   \
        memcpy(paramBuffer + paramBufferSize,                   \
               &(value), sizeof(value));                        \
        paramBufferSize += sizeof(value);                       \
    } while (0)

char paramBuffer[1024];
size_t paramBufferSize = 0;

int i;
ADD_TO_PARAM_BUFFER(i, __alignof(i));
float4 f4;
ADD_TO_PARAM_BUFFER(f4, 16); ∕∕ float4's alignment is 16
char c;
ADD_TO_PARAM_BUFFER(c, __alignof(c));
float f;
ADD_TO_PARAM_BUFFER(f, __alignof(f));
CUdeviceptr devPtr;
ADD_TO_PARAM_BUFFER(devPtr, __alignof(devPtr));
float2 f2;
ADD_TO_PARAM_BUFFER(f2, 8); ∕∕ float2's alignment is 8
void* extra[] = {
CU_LAUNCH_PARAM_BUFFER_POINTER, paramBuffer,
CU_LAUNCH_PARAM_BUFFER_SIZE, &paramBufferSize,
CU_LAUNCH_PARAM_END
};
cuLaunchKernel(cuFunction,
               gridWidth, gridHeight, gridDepth,
               blockWidth, blockHeight, blockDepth,
               0, 0, 0, extra);
The alignment requirement of a structure is equal to the maximum of the alignment requirements of
its fields. The alignment requirement of a structure that contains built-in vector types, CUdeviceptr,
or non-aligned double and long long, might therefore differ between device code and host code.
Such a structure might also be padded differently. The following structure, for example, is not padded
at all in host code, but it is padded in device code with 12 bytes after field f since the alignment
requirement for field f4 is 16.
typedef struct {
    float f;     // in device code, 12 bytes of padding are inserted after "f"
    float4 f4;   // so that "f4" is aligned on a 16-byte boundary
} myStruct;
In particular, this means that applications written using the driver API can invoke libraries written using
the runtime API (such as cuFFT, cuBLAS, …).
All functions from the device and version management sections of the reference manual can be used
interchangeably.
20.5.1. Introduction
The Driver Entry Point Access APIs provide a way to retrieve the address of a CUDA driver func-
tion. Starting from CUDA 11.3, users can call into available CUDA driver APIs using function pointers
obtained from these APIs.
These APIs provide functionality similar to their counterparts, dlsym on POSIX platforms and GetPro-
cAddress on Windows. The provided APIs will let users:
▶ Retrieve the address of a driver function using the CUDA Driver API.
▶ Retrieve the address of a driver function using the CUDA Runtime API.
▶ Request per-thread default stream version of a CUDA driver function. For more details, see Re-
trieve per-thread default stream versions
▶ Access new CUDA features on older toolkits but with a newer driver.
The above headers do not define actual function pointers themselves; they define the typedefs for
function pointers. For example, cudaTypedefs.h has the below typedefs for the driver API cuMemAl-
loc:
typedef CUresult (CUDAAPI *PFN_cuMemAlloc_v3020)(CUdeviceptr_v2 *dptr, size_t bytesize);
CUDA driver symbols use a version-based naming scheme with a _v* extension in their names, except
for the first version. When the signature or the semantics of a specific CUDA driver API change, we
increment the version number of the corresponding driver symbol. In the case of the cuMemAlloc
driver API, the first driver symbol name is cuMemAlloc and the next symbol name is cuMemAlloc_v2.
The typedef for the first version which was introduced in CUDA 2.0 (2000) is PFN_cuMemAlloc_v2000.
The typedef for the next version which was introduced in CUDA 3.2 (3020) is PFN_cuMemAlloc_v3020.
The typedefs can be used to more easily define a function pointer of the appropriate type in code:
PFN_cuMemAlloc_v3020 pfn_cuMemAlloc_v2;
PFN_cuMemAlloc_v2000 pfn_cuMemAlloc_v1;
The above method is preferable if users are interested in a specific version of the API. Additionally,
the headers have predefined macros for the latest version of all driver symbols that were available
when the installed CUDA toolkit was released; these typedefs do not have a _v* suffix. For CUDA 11.3
toolkit, cuMemAlloc_v2 was the latest version and so we can also define its function pointer as below:
PFN_cuMemAlloc pfn_cuMemAlloc;
The driver API requires CUDA version as an argument to get the ABI compatible version for the re-
quested driver symbol. CUDA Driver APIs have a per-function ABI denoted with a _v* extension. For
example, consider the versions of cuStreamBeginCapture and their corresponding typedefs from
cudaTypedefs.h:
∕∕ cuda.h
CUresult CUDAAPI cuStreamBeginCapture(CUstream hStream);
CUresult CUDAAPI cuStreamBeginCapture_v2(CUstream hStream, CUstreamCaptureMode mode);
∕∕ cudaTypedefs.h
typedef CUresult (CUDAAPI *PFN_cuStreamBeginCapture_v10000)(CUstream hStream);
typedef CUresult (CUDAAPI *PFN_cuStreamBeginCapture_v10010)(CUstream hStream, CUstreamCaptureMode mode);
From the above typedefs in the code snippet, version suffixes _v10000 and _v10010 indicate that
the above APIs were introduced in CUDA 10.0 and CUDA 10.1 respectively.
#include <cudaTypedefs.h>

// Declare entry points for both versions of cuStreamBeginCapture
PFN_cuStreamBeginCapture_v10000 pfn_cuStreamBeginCapture_v1;
PFN_cuStreamBeginCapture_v10010 pfn_cuStreamBeginCapture_v2;
CUdriverProcAddressQueryResult driverStatus;

// Retrieve the function pointers to the _v1 and _v2 versions of cuStreamBeginCapture
cuGetProcAddress("cuStreamBeginCapture", &pfn_cuStreamBeginCapture_v1, 10000, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);
cuGetProcAddress("cuStreamBeginCapture", &pfn_cuStreamBeginCapture_v2, 10010, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);
Referring to the code snippet above, to retrieve the address to the _v1 version of the driver API cuS-
treamBeginCapture, the CUDA version argument should be exactly 10.0 (10000). Similarly, the CUDA
version for retrieving the address to the _v2 version of the API should be 10.1 (10010). Specifying a
higher CUDA version for retrieving a specific version of a driver API might not always be portable. For
example, using 11030 here would still return the _v2 symbol, but if a hypothetical _v3 version is re-
leased in CUDA 11.3, the cuGetProcAddress API would start returning the newer _v3 symbol instead
when paired with a CUDA 11.3 driver. Since the ABI and function signatures of the _v2 and _v3 sym-
bols might differ, calling the _v3 function using the _v10010 typedef intended for the _v2 symbol
would exhibit undefined behavior.
To retrieve the latest version of a driver API for a given CUDA Toolkit, we can also specify
CUDA_VERSION as the version argument and use the unversioned typedef to define the function
pointer. Since _v2 is the latest version of the driver API cuStreamBeginCapture in CUDA 11.3, the
below code snippet shows a different method to retrieve it.
// Assuming we are using CUDA 11.3 Toolkit
#include <cudaTypedefs.h>

// Initialize the entry point. Specifying CUDA_VERSION will give the function pointer to
// the cuStreamBeginCapture_v2 symbol, since that is the latest version in CUDA 11.3.
PFN_cuStreamBeginCapture pfn_cuStreamBeginCapture_latest;
cuGetProcAddress("cuStreamBeginCapture", &pfn_cuStreamBeginCapture_latest, CUDA_VERSION, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);
Note that requesting a driver API with an invalid CUDA version will return an error
CUDA_ERROR_NOT_FOUND. In the above code examples, passing in a version less than 10000
(CUDA 10.0) would be invalid.
The runtime API uses the CUDA runtime version to get the ABI compatible version for the requested
driver symbol. In the below code snippet, the minimum CUDA runtime version required would be CUDA
11.2 as cuMemAllocAsync was introduced then.
#include <cudaTypedefs.h>

// Initialize the entry point. Assuming CUDA runtime version >= 11.2
PFN_cuMemAllocAsync pfn_cuMemAllocAsync;
cudaGetDriverEntryPoint("cuMemAllocAsync", &pfn_cuMemAllocAsync, cudaEnableDefault, &driverStatus);
Some CUDA driver APIs can be configured to have default stream or per-thread default stream seman-
tics. Driver APIs having per-thread default stream semantics are suffixed with _ptsz or _ptds in their
name. For example, cuLaunchKernel has a per-thread default stream variant named cuLaunchKernel_ptsz.
With the Driver Entry Point Access APIs, users can request the per-thread default
stream version of the driver API cuLaunchKernel instead of the default stream version (a sketch follows the list below). Configuring
the CUDA driver APIs for default stream or per-thread default stream semantics affects the synchro-
nization behavior. More details can be found here.
The default stream or per-thread default stream versions of a driver API can be obtained by one of the
following ways:
▶ Use the compilation flag --default-stream per-thread or define the macro
CUDA_API_PER_THREAD_DEFAULT_STREAM to get per-thread default stream behavior.
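For the entry point APIs themselves, a sketch of requesting the per-thread default stream version of cuLaunchKernel, using the CU_GET_PROC_ADDRESS_PER_THREAD_DEFAULT_STREAM flag (variable names are illustrative):

#include <cudaTypedefs.h>

PFN_cuLaunchKernel pfn_cuLaunchKernel_ptsz;
CUdriverProcAddressQueryResult driverStatus;
// Returns the _ptsz (per-thread default stream) entry point instead of the default stream one
cuGetProcAddress("cuLaunchKernel", &pfn_cuLaunchKernel_ptsz, CUDA_VERSION,
                 CU_GET_PROC_ADDRESS_PER_THREAD_DEFAULT_STREAM, &driverStatus);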
It is always recommended to install the latest CUDA toolkit to access new CUDA driver features, but
if for some reason, a user does not want to update or does not have access to the latest toolkit, the
API can be used to access new CUDA features with only an updated CUDA driver. For discussion, let
us assume the user is on CUDA 11.3 and wants to use a new driver API cuFoo available in the CUDA
12.0 driver. The below code snippet illustrates this use-case:
int main()
{
∕∕ Assuming we have CUDA 12.0 driver installed.
∕∕ Manually define the prototype as cudaTypedefs.h in CUDA 11.3 does not have the�
,→ cuFoo typedef
typedef CUresult (CUDAAPI *PFN_cuFoo)(...);
PFN_cuFoo pfn_cuFoo = NULL;
CUdriverProcAddressQueryResult driverStatus;
∕∕ Get the address for cuFoo API using cuGetProcAddress. Specify CUDA version as
∕∕ 12000 since cuFoo was introduced then or get the driver version dynamically
∕∕ using cuDriverGetVersion
int driverVersion;
cuDriverGetVersion(&driverVersion);
    CUresult status = cuGetProcAddress("cuFoo", &pfn_cuFoo, driverVersion, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);

    if (CUDA_SUCCESS == status && pfn_cuFoo) {
        pfn_cuFoo(...);
    }
    else {
        printf("Cannot retrieve the address of cuFoo. Check if the latest driver is installed!\n");
        assert(0);
    }
}
cuDeviceGetUuid was introduced in CUDA 9.2. This API has a newer revision (cuDeviceGetUuid_v2)
introduced in CUDA 11.4. To preserve minor version compatibility, cuDeviceGetUuid will not be ver-
sion bumped to cuDeviceGetUuid_v2 in cuda.h until CUDA 12.0. This means that calling it by ob-
taining a function pointer to it via cuGetProcAddress might have different behavior. Example using
the API directly:
#include <cuda.h>

CUuuid uuid;
CUdevice dev;
CUresult status;

status = cuDeviceGet(&dev, 0); // Get device 0
// handle status

// Resolves to cuDeviceGetUuid, not cuDeviceGetUuid_v2
status = cuDeviceGetUuid(&uuid, dev);
In this example, assume the user is compiling with CUDA 11.4. Note that this will perform the behavior
of cuDeviceGetUuid, not _v2 version. Now an example of using cuGetProcAddress:
#include <cudaTypedefs.h>
CUuuid uuid;
CUdevice dev;
CUresult status;
CUdriverProcAddressQueryResult driverStatus;
PFN_cuDeviceGetUuid pfn_cuDeviceGetUuid;
status = cuGetProcAddress("cuDeviceGetUuid", &pfn_cuDeviceGetUuid, CUDA_VERSION, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);

// Calling through the pointer invokes cuDeviceGetUuid_v2, since CUDA_VERSION is 11040
// when compiling with CUDA 11.4
status = pfn_cuDeviceGetUuid(&uuid, dev);
In this example, assume the user is compiling with CUDA 11.4. This will get the function pointer of
cuDeviceGetUuid_v2. Calling the function pointer will then invoke the new _v2 function, not the
same cuDeviceGetUuid as shown in the previous example.
Let’s take the same issue and make one small tweak. The last example used the compile time constant
of CUDA_VERSION to determine which function pointer to obtain. More complications arise if the user
queries the driver version dynamically using cuDriverGetVersion or cudaDriverGetVersion to
pass to cuGetProcAddress. Example:
#include <cudaTypedefs.h>

CUuuid uuid;
CUdevice dev;
CUresult status;
int cudaVersion;
CUdriverProcAddressQueryResult driverStatus;

status = cuDriverGetVersion(&cudaVersion);
// handle status

PFN_cuDeviceGetUuid pfn_cuDeviceGetUuid;
status = cuGetProcAddress("cuDeviceGetUuid", &pfn_cuDeviceGetUuid, cudaVersion, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);
In this example, assume the user is compiling with CUDA 11.3. The user would debug, test, and deploy
this application with the known behavior of getting cuDeviceGetUuid (not the _v2 version). Since
CUDA has guaranteed ABI compatibility between minor versions, this same application is expected
to run after the driver is upgraded to CUDA 11.4 (without updating the toolkit and runtime) with-
out requiring recompilation. This will have undefined behavior though, because now the typedef for
PFN_cuDeviceGetUuid will still be of the signature for the original version, but since cudaVersion
would now be 11040 (CUDA 11.4), cuGetProcAddress would return the function pointer to the _v2
version, meaning calling it might have undefined behavior.
Note in this case the original (not the _v2 version) typedef looks like:
typedef CUresult (CUDAAPI *PFN_cuDeviceGetUuid_v9020)(CUuuid *uuid, CUdevice_v1 dev);
So in this case, the API/ABI is the same and the runtime API call will likely not cause issues; the only
risk is an unexpected uuid being returned. In Implications to API/ABI, we discuss a more problematic
case of API/ABI compatibility.
The above was a specific, concrete example. Let us now use a theoretical example that still has
compatibility issues across driver versions. Example:
CUresult cuFoo(int bar); ∕∕ Introduced in CUDA 11.4
CUresult cuFoo_v2(int bar); ∕∕ Introduced in CUDA 11.5
CUresult cuFoo_v3(int bar, void* jazz); ∕∕ Introduced in CUDA 11.6
Notice that the API has been modified twice since original creation in CUDA 11.4 and the latest in CUDA
11.6 also modified the API/ABI interface to the function. The usage in user code compiled against
CUDA 11.5 is:
#include <cuda.h>
#include <cudaTypedefs.h>
CUresult status;
int cudaVersion;
CUdriverProcAddressQueryResult driverStatus;
status = cuDriverGetVersion(&cudaVersion);
∕∕ handle status
PFN_cuFoo_v11040 pfn_cuFoo_v11040;
PFN_cuFoo_v11050 pfn_cuFoo_v11050;
if (cudaVersion < 11050) {
    // We know to get the CUDA 11.4 version
    status = cuGetProcAddress("cuFoo", &pfn_cuFoo_v11040, cudaVersion, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);
}
else {
    // Assume a CUDA version >= 11.5 and use the CUDA 11.5 typedef
    status = cuGetProcAddress("cuFoo", &pfn_cuFoo_v11050, cudaVersion, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);
}
In this example, without updates for the new typedef in CUDA 11.6 and recompiling the application
with those new typedefs and case handling, the application will get the cuFoo_v3 function pointer
returned and any usage of that function would then cause undefined behavior. The point of this ex-
ample was to illustrate that even explicit version checks for cuGetProcAddress may not safely cover
the minor version bumps within a CUDA major release.
The above examples were focused on the issues with Driver API usage for obtaining function pointers
to driver APIs. Now we will discuss the potential issues with Runtime API usage, that is, with
cudaGetDriverEntryPoint.
We will start by using the Runtime APIs similar to the above.
#include <cuda.h>
#include <cudaTypedefs.h>
#include <cuda_runtime.h>

CUresult status;
cudaError_t error;
int driverVersion, runtimeVersion;
CUdriverProcAddressQueryResult driverStatus;
enum cudaDriverEntryPointQueryResult runtimeStatus;

// Ask the runtime for the function; the entry point returned always matches the
// current CUDA Runtime version
PFN_cuDeviceGetUuid pfn_cuDeviceGetUuidRuntime;
error = cudaGetDriverEntryPoint("cuDeviceGetUuid", &pfn_cuDeviceGetUuidRuntime, cudaEnableDefault, &runtimeStatus);
The function pointer in this example is even more complicated than the driver only examples above
because there is no control over which version of the function to obtain; it will always get the API for
the current CUDA Runtime version. See the following table for more information:
V11.3 => 11.3 CUDA Runtime and Toolkit (includes header files cuda.h and cudaTypedefs.h)
V11.4 => 11.4 CUDA Runtime and Toolkit (includes header files cuda.h and cudaTypedefs.h)
v1    => cuDeviceGetUuid
v2    => cuDeviceGetUuid_v2
x     => Implies the typedef function pointer won't match the returned function pointer.
         In these cases, the typedef at compile time using a CUDA 11.4 runtime would match
         the _v2 version, but the returned function pointer would be the original (non _v2)
         function.
The problem in the table comes in with a newer CUDA 11.4 Runtime and Toolkit and older driver (CUDA
11.3) combination, labeled as v1x in the above. This combination would have the driver returning the
pointer to the older function (non _v2), but the typedef used in the application would be for the new
function pointer.
More complications arise when we consider different combinations of the CUDA version with which an
application is compiled, CUDA runtime version, and CUDA driver version that an application dynamically
links against.
#include <cuda.h>
#include <cudaTypedefs.h>
#include <cuda_runtime.h>
CUresult status;
cudaError_t error;
int driverVersion, runtimeVersion;
CUdriverProcAddressQueryResult driverStatus;
enum cudaDriverEntryPointQueryResult runtimeStatus;
// Ask the driver for the function based on the CUDA version the application was compiled with
PFN_cuDeviceGetUuid pfn_cuDeviceGetUuidDriver;
status = cuGetProcAddress("cuDeviceGetUuid", &pfn_cuDeviceGetUuidDriver, CUDA_VERSION, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);

// Ask the driver for the function based on the driver version (obtained via the runtime)
error = cudaDriverGetVersion(&driverVersion);
PFN_cuDeviceGetUuid pfn_cuDeviceGetUuidDriverDriverVer;
status = cuGetProcAddress("cuDeviceGetUuid", &pfn_cuDeviceGetUuidDriverDriverVer, driverVersion, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);
If the application is compiled against CUDA Version 11.3, it would have the typedef for the original
function, but if compiled against CUDA Version 11.4, it would have the typedef for the _v2 function.
Because of that, notice the number of cases where the typedef does not match the actual version
returned/used.
In the above examples using cuDeviceGetUuid, the implications of the mismatched API are minimal,
and may not be entirely noticeable to many users as the _v2 was added to support Multi-Instance GPU
(MIG) mode. So, on a system without MIG, the user might not even realize they are getting a different
API.
More problematic is an API which changes its application signature (and hence ABI) such as cuCtx-
Create. The _v2 version, introduced in CUDA 3.2 is currently used as the default cuCtxCreate when
using cuda.h but now has a newer version introduced in CUDA 11.4 (cuCtxCreate_v3). The API sig-
nature has been modified as well, and now takes extra arguments. So, in some of the cases above,
where the typedef to the function pointer doesn’t match the returned function pointer, there is a
chance for non-obvious ABI incompatibility which would lead to undefined behavior.
For example, assume the following code compiled against a CUDA 11.3 toolkit with a CUDA 11.4 driver
installed:
PFN_cuCtxCreate cuUnknown;
CUdriverProcAddressQueryResult driverStatus;
...
status = cuGetProcAddress("cuCtxCreate", &cuUnknown, cudaVersion, CU_GET_PROC_ADDRESS_DEFAULT, &driverStatus);
Running this code where cudaVersion is set to anything >=11040 (indicating CUDA 11.4) could have
undefined behavior due to not having adequately supplied all the parameters required for the _v3
version of the cuCtxCreate_v3 API.
if (CUDA_SUCCESS == status) {
    if (CU_GET_PROC_ADDRESS_VERSION_NOT_SUFFICIENT == driverStatus) {
        printf("We can use the new feature when you upgrade cudaVersion to 11.4, but CUDA driver is good to go!\n");
        // Indicating cudaVersion was < 11.4 but run against a CUDA driver >= 11.4
    }
    else if (CU_GET_PROC_ADDRESS_SYMBOL_NOT_FOUND == driverStatus) {
        printf("Please update both CUDA driver and cudaVersion to at least 11.4 to use the new feature!\n");
        // Indicating driver is < 11.4 since string not found, doesn't matter what cudaVersion was
    }
    else if (CU_GET_PROC_ADDRESS_SUCCESS == driverStatus && pfn) {
        printf("You're using cudaVersion and CUDA driver >= 11.4, using new feature!\n");
        pfn();
    }
}
CU_GET_PROC_ADDRESS_SYMBOL_NOT_FOUND indicates that the symbol was not found when searching in
the CUDA driver. This can be due to a few reasons, such as an unsupported CUDA function because of
an older driver, or simply a typo. In the latter case, similar to the last example, if the user had specified
the symbol as CUDeviceGetExecAffinitySupport - notice the capital CU at the start of the string -
cuGetProcAddress would not be able to find the API because the string does not match. In the former
case, an example might be the user developing an application against a CUDA driver that supports
the new API and deploying the application against an older CUDA driver. Using the last example, if the
developer developed against CUDA 11.4 or later but deployed against a CUDA 11.3 driver, during
development they may have had a successful cuGetProcAddress, but when the application runs against
a CUDA 11.3 driver the call would no longer work, with CU_GET_PROC_ADDRESS_SYMBOL_NOT_FOUND
returned in driverStatus.
The following table lists the CUDA environment variables. Environment variables related to the Multi-
Process Service are documented in the Multi-Process Service section of the GPU Deployment and
Management guide.
Note: This chapter applies to devices with compute capability 5.0 or higher unless stated otherwise.
For devices with compute capability lower than 5.0, refer to the CUDA toolkit documentation for CUDA
11.8.
With CUDA Unified Memory, data movement still takes place, and hints may improve performance.
These hints are not required for correctness or functionality; that is, programmers may focus on par-
allelizing their applications across GPUs and CPUs first, and worry about data movement later in the
development cycle as a performance optimization.
There are two main ways to obtain CUDA Unified Memory:
▶ System-Allocated Memory: memory allocated on the host with system APIs: stack variables,
global-/file-scope variables, malloc() / mmap() (see System Allocator for examples), thread lo-
cals, etc.
▶ CUDA APIs that explicitly allocate Unified Memory: memory allocated with, for example, cu-
daMallocManaged(). This kind of Unified Memory is available on more systems and may perform
better than System-Allocated Memory.
(Table: Overview of levels of unified memory support. Additional device properties that further qualify
the level of support include hostNativeAtomicSupported, pageableMemoryAccessUsesHostPageTables,
and directManagedMemAccessFromHost. On systems with no Unified Memory support, managedMemory
is set to 0; see CUDA for Tegra Memory Management.)
The behavior of an application that attempts to use Unified Memory on a system that does not support
it is undefined. The following properties enable CUDA applications to check the level of system support
for Unified Memory, and to be portable between systems with different levels of support:
▶ pageableMemoryAccess: This property is set to 1 on systems with CUDA Unified Memory
support where all threads may access System-Allocated Memory and CUDA Managed Memory.
These systems include NVIDIA Grace Hopper, IBM Power9 + Volta, and modern Linux systems
with HMM enabled (see next bullet), among others.
▶ Linux HMM requires Linux kernel version 6.1.24+, 6.2.11+ or 6.3+, devices with compute
capability 7.5 or higher and a CUDA driver version 535+ installed with Open Kernel Modules.
▶ concurrentManagedAccess: This property is set to 1 on systems with full CUDA Managed
Memory support. When this property is set to 0, there is only partial support for Unified Memory
in CUDA Managed Memory. For Tegra support of Unified Memory, see CUDA for Tegra Memory
Management.
A program may query the level of GPU support for CUDA Unified Memory by querying the attributes
in the table Overview of levels of unified memory support above using cudaGetDeviceProperties().
(Code listings, shown side by side in the original document, compare a version without Unified Memory
against a version with Unified Memory for each of: System-Allocated Memory via malloc(), System-
Allocated Memory on the stack, Managed Memory via cudaMallocManaged(), and Managed Memory via a
__managed__ variable. In each pair, the version without Unified Memory writes into a device allocation
and copies the result back with cudaMemcpy(&host, d_ptr, sizeof(int), cudaMemcpyDefault) before
printing it, while the version with Unified Memory simply calls cudaDeviceSynchronize() and prints the
value through the same pointer.)
These examples combine two numbers together on the GPU with a per-thread ID, returning the values
in an array (a minimal sketch of the pattern follows this list):
▶ Without Unified Memory: both host- and device-side storage for the return values is required
(host_ret and ret in the example), as is an explicit copy between the two using cudaMemcpy().
▶ With Unified Memory: GPU accesses data directly from the host. ret may be used without a
separate host_ret allocation and no copy routine is required, greatly simplifying and reducing
the size of the program. With:
▶ System Allocated: no other changes required.
▶ Managed Memory: data allocation changed to use cudaMallocManaged(), which returns
a pointer valid from both host and device code.
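As a minimal sketch of this comparison for the Managed Memory case (assuming a trivial write_value kernel; this is illustrative, not the original side-by-side listing):

#include <cstdio>
#include <cuda_runtime.h>

__global__ void write_value(int *ptr, int v) { *ptr = v; }

int main() {
    // Without Unified Memory: separate device storage plus an explicit copy back to the host.
    int host_val = 0;
    int *d_ptr;
    cudaMalloc(&d_ptr, sizeof(int));
    write_value<<<1, 1>>>(d_ptr, 42);
    cudaMemcpy(&host_val, d_ptr, sizeof(int), cudaMemcpyDefault);  // also synchronizes
    printf("value = %d\n", host_val);
    cudaFree(d_ptr);

    // With Unified Memory (Managed Memory): one pointer, valid on both host and device.
    int *ptr;
    cudaMallocManaged(&ptr, sizeof(int));
    write_value<<<1, 1>>>(ptr, 42);
    cudaDeviceSynchronize();            // required before the host reads the result
    printf("value = %d\n", *ptr);
    cudaFree(ptr);
    return 0;
}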
On systems with full CUDA Unified Memory support, all memory is unified memory. This includes
memory allocated with system allocation APIs, such as malloc(), mmap(), C++ new() operator, and
also automatic variables on CPU thread stacks, thread locals, global variables, and so on.
System-Allocated Memory may be populated on first touch, depending on the API and system settings
used. First touch means that:
▶ the allocation APIs allocate virtual memory and return immediately, and
▶ physical memory is populated when a thread accesses the memory for the first time.
Usually, the physical memory will be chosen "close" to the processor the thread is running on. For example:
▶ If a GPU thread accesses it first, physical GPU memory of the GPU that thread runs on is chosen.
▶ If a CPU thread accesses it first, physical CPU memory in the memory NUMA node of the CPU core
that thread runs on is chosen.
CUDA Unified Memory Hint and Prefetch APIs, cudaMemAdvise and cudaMemPrefetchAsync, may be
used on System-Allocated Memory. These APIs are covered below in the Data Usage Hints section.
int main() {
∕∕ Allocate 100 bytes of memory, accessible to both Host and Device code
char *s = (char*)malloc(100);
∕∕ Physical allocation placed in CPU memory because host accesses "s" first
strncpy(s, "Hello Unified Memory\n", 99);
∕∕ Here we pass "s" to a kernel without explicitly copying
printme<<< 1, 1 >>>(s);
cudaDeviceSynchronize();
// Free as for normal system allocations
free(s);
return 0;
}
On systems with CUDA Managed Memory support, unified memory may be allocated using:
__host__ cudaError_t cudaMallocManaged(void **devPtr, size_t size);
This API is syntactically identical to cudaMalloc(): it allocates size bytes of managed memory and
sets devPtr to refer to the allocation. CUDA Managed Memory is also deallocated with cudaFree().
On systems with full CUDA Managed Memory support, managed memory allocations may be accessed
concurrently by all CPUs and GPUs in the system. Replacing host calls to cudaMalloc() with
cudaMallocManaged() does not impact program semantics on these systems; device code is not able
to call cudaMallocManaged().
The following example shows the use of cudaMallocManaged():
__global__ void printme(char *str) {
printf(str);
}
int main() {
∕∕ Allocate 100 bytes of memory, accessible to both Host and Device code
char *s;
cudaMallocManaged(&s, 100);
∕∕ Note direct Host-code use of "s"
strncpy(s, "Hello Unified Memory\n", 99);
∕∕ Here we pass "s" to a kernel without explicitly copying
printme<<< 1, 1 >>>(s);
cudaDeviceSynchronize();
∕∕ Free as for normal CUDA allocations
cudaFree(s);
return 0;
}
Note: For systems that support CUDA Managed Memory allocations but do not provide full support,
see Coherency and Concurrency. Implementation details (which may change at any time):
▶ Devices of compute capability 5.x allocate CUDA Managed Memory on the GPU.
▶ Devices of compute capability 6.x and greater populate the memory on first touch, just like
System-Allocated Memory APIs.
CUDA __managed__ variables behave as if they were allocated via cudaMallocManaged() (see Ex-
plicit Allocation Using cudaMallocManaged() ). They simplify programs with global variables, making it
particularly easy to exchange data between host and device without manual allocations or copying.
On systems with full CUDA Unified Memory support, ordinary (non-__managed__) file-scope and
global-scope variables cannot be accessed directly from device code, but a pointer to such a variable
may be passed to a kernel as an argument; see System Allocator for examples.
System Allocator
int main() {
∕∕ Requires System-Allocated Memory support
int value;
write_value<<<1, 1>>>(&value, 1);
∕∕ Synchronize required
∕∕ (before, cudaMemcpy was synchronizing)
cudaDeviceSynchronize();
printf("value = %d\n", value);
return 0;
}
Managed
__managed__ int value;

int main() {
write_value<<<1, 1>>>(&value, 1);
∕∕ Synchronize required
∕∕ (before, cudaMemcpy was synchronizing)
cudaDeviceSynchronize();
printf("value = %d\n", value);
return 0;
}
Note the absence of explicit cudaMemcpy() commands and the fact that the returned value is visible
on both CPU and GPU.
CUDA __managed__ variable implies __device__ and is equivalent to __managed__ __device__,
which is also allowed. Variables marked __constant__ may not be marked as __managed__.
A valid CUDA context is necessary for the correct operation of __managed__ variables. Accessing
__managed__ variables can trigger CUDA context creation if a context for the current device has not
already been created. In the sketch below, accessing x before the kernel launch triggers context
creation on device 0. In the absence of that access, the kernel launch would have triggered context
creation.
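A minimal sketch of such a program (illustrative; x is a __managed__ variable accessed from both host and device):

__device__ __managed__ int x = 10;

__global__ void kernel() {
    x++;   // device code reads and writes the __managed__ variable directly
}

int main() {
    printf("x = %d\n", x);   // host access to x; triggers context creation on device 0
    kernel<<<1, 1>>>();
    cudaDeviceSynchronize();
    printf("x = %d\n", x);
    return 0;
}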
C++ objects declared as __managed__ are subject to certain specific constraints, particularly where
static initializers are concerned. Please refer to C++ Language Support for a list of these constraints.
Note: For devices with CUDA Managed Memory without full support, stream visibility of __managed__
variables is discussed in the section on Managing Data Visibility and Concurrent CPU + GPU Access
with Streams.
The main difference between Unified Memory and CUDA Mapped Memory is that CUDA Mapped Mem-
ory does not guarantee that all kinds of memory accesses (for example atomics) are supported on all
systems, while Unified Memory does. The limited set of memory operations that are guaranteed to be
portably supported by CUDA Mapped Memory is available on more systems than Unified Memory.
CUDA Programs may check whether a pointer addresses a CUDA Managed Memory allocation by call-
ing cudaPointerGetAttributes() and testing whether the pointer attribute value is cudaMemo-
ryTypeManaged.
This API returns cudaMemoryTypeHost for system-allocated memory that has been registered
with cudaHostRegister() and cudaMemoryTypeUnregistered for system-allocated memory that
CUDA is unaware of.
Pointer attributes do not state where the memory resides, they state how the memory was allocated
or registered.
The following example shows how to detect the type of pointer at runtime:
char const* kind(cudaPointerAttributes a, bool pma, bool cma) {
switch(a.type) {
case cudaMemoryTypeHost: return pma?
"Unified: CUDA Host or Registered Memory" :
"Not Unified: CUDA Host or Registered Memory";
case cudaMemoryTypeDevice: return "Not Unified: CUDA Device Memory";
case cudaMemoryTypeManaged: return cma?
"Unified: CUDA Managed Memory" : "Not Unified: CUDA Managed Memory";
case cudaMemoryTypeUnregistered: return pma?
"Unified: System-Allocated Memory" :
"Not Unified: System-Allocated Memory";
default: return "unknown";
}
}
__managed__ int managed_var;

int main() {
int* ptr[5];
ptr[0] = (int*)malloc(sizeof(int));
cudaMallocManaged(&ptr[1], sizeof(int));
cudaMallocHost(&ptr[2], sizeof(int));
cudaMalloc(&ptr[3], sizeof(int));
ptr[4] = &managed_var;
cudaFree(ptr[3]);
cudaFreeHost(ptr[2]);
cudaFree(ptr[1]);
free(ptr[0]);
return 0;
}
The following example shows how to detect the Unified Memory support level at runtime:
int main() {
int d;
cudaGetDevice(&d);
int pma = 0;
cudaDeviceGetAttribute(&pma, cudaDevAttrPageableMemoryAccess, d);
printf("Full Unified Memory Support: %s\n", pma == 1? "YES" : "NO");
int cma = 0;
cudaDeviceGetAttribute(&cma, cudaDevAttrConcurrentManagedAccess, d);
printf("CUDA Managed Memory with full support: %s\n", cma == 1? "YES" : "NO");
return 0;
}
22.2.3. Multi-GPU
Managed allocations on systems with devices of compute capability 6.x are visible to all GPUs and can
migrate to any processor on-demand. Unified Memory performance hints (see Performance Tuning)
allow developers to explore custom usage patterns, such as read duplication of data across GPUs and
direct access to peer GPU memory without migration.
On software coherent systems, the Linux kernel along with the CUDA driver manage page table mir-
roring and migrating pages between host and device for accesses. This means that any access to
memory may cause a page fault and migration of the page between host and device. Note that on
these systems, some operations such as atomics to file-backed memory are not supported at all, see
Atomic accesses for details.
Here is an example code that works on any system that satisfies the basic requirements for Unified
Memory (see System Requirements):
int* data;
cudaMallocManaged(&data, sizeof(int));
*data = 42;
kernel<<<1, 1>>>("managed", data);
These new access patterns are supported on systems with pageable memory access:
Malloc
void test_malloc() {
int* host_data = (int*)malloc(sizeof(int));
*host_data = 42;
kernel<<<1, 1>>>("malloc", host_data);
ASSERT(cudaDeviceSynchronize() == cudaSuccess,
"CUDA failed with '%s'", cudaGetErrorString(cudaGetLastError()));
free(host_data);
}
File-Scope variable
void test_static() {
static int host_data = 42;
kernel<<<1, 1>>>("static", &host_data);
ASSERT(cudaDeviceSynchronize() == cudaSuccess,
"CUDA failed with '%s'", cudaGetErrorString(cudaGetLastError()));
}
Extern variable
void test_extern() {
kernel<<<1, 1>>>("extern", ext_data);
ASSERT(cudaDeviceSynchronize() == cudaSuccess,
"CUDA failed with '%s'", cudaGetErrorString(cudaGetLastError()));
}
Stack variable
void test_stack() {
    int stack_data = 42;
    kernel<<<1, 1>>>("stack", &stack_data);
    ASSERT(cudaDeviceSynchronize() == cudaSuccess,
           "CUDA failed with '%s'", cudaGetErrorString(cudaGetLastError()));
}
File-backed memory
void test_file_backed() {
    int fd = open("sam_access_extern.cpp", O_RDONLY);
    ASSERT(fd >= 0, "Invalid file handle");
    struct stat file_stat;
    int status = fstat(fd, &file_stat);
    ASSERT(status >= 0, "Invalid file stats");
    char* mapped = (char*)mmap(0, file_stat.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    ASSERT(mapped != MAP_FAILED, "Cannot map file into memory");
    kernel_char<<<1, 1>>>("file-backed", mapped);
    ASSERT(cudaDeviceSynchronize() == cudaSuccess,
           "CUDA failed with '%s'", cudaGetErrorString(cudaGetLastError()));
    ASSERT(munmap(mapped, file_stat.st_size) == 0, "Cannot unmap file");
    ASSERT(close(fd) == 0, "Cannot close file");
}
In the example above, data could be initialized by a third-party CPU library and then directly accessed by the GPU kernel. On systems with pageable memory access, users may also prefetch pageable memory to the GPU by using cudaMemPrefetchAsync, which can yield performance benefits through improved data locality.
Note that according to the CUDA language specification, __device__ code must not use global variables directly without going through a pointer or the __managed__ specifier (see Global-Scope Managed Variables), because global variables are implicitly marked as __host__ in CUDA. The example below shows code which is currently not compilable with CUDA, as well as how to access global variables on systems with pageable memory access:
// this variable is declared at global scope
int global_variable;
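A minimal sketch of the second part, accessing the global variable from device code by passing its address, might look as follows (the kernel and wrapper names are hypothetical):

__global__ void read_global(int* ptr) {    // hypothetical kernel name
    printf("%d\n", *ptr);
}

void test_global_variable() {              // hypothetical wrapper name
    // global_variable is implicitly __host__, so device code cannot name it
    // directly; instead, pass its address and dereference it on the GPU.
    global_variable = 42;
    read_global<<<1, 1>>>(&global_variable);
    cudaDeviceSynchronize();
}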
System Allocator
    // directManagedMemAccessFromHost=0: CPU faults and triggers device-to-host migrations
    cudaDeviceSynchronize(); // directManagedMemAccessFromHost=0: GPU faults and triggers host-to-device migrations
    free(ret);
}
Managed
void test_managed() {
    int *ret;
    cudaMallocManaged(&ret, 1000 * sizeof(int));
    cudaMemAdvise(ret, 1000 * sizeof(int), cudaMemAdviseSetAccessedBy, cudaCpuDeviceId); // set direct access hint
    // directManagedMemAccessFromHost=0: CPU faults and triggers device-to-host migrations
    cudaDeviceSynchronize(); // directManagedMemAccessFromHost=0: GPU faults and triggers host-to-device migrations
    cudaFree(ret);
}
After the write kernel completes, ret is created and initialized in GPU memory. Next, the CPU accesses ret, followed by the append kernel using the same ret memory again. This code will show different behavior depending on the system architecture and its support for hardware coherency:
▶ On systems with directManagedMemAccessFromHost=1: CPU accesses to the managed buffer will not trigger any migrations; the data will remain resident in GPU memory, and any subsequent GPU kernels can continue to access it directly without incurring faults or migrations.
▶ On systems with directManagedMemAccessFromHost=0: CPU accesses to the managed buffer will page fault and initiate data migration; any GPU kernel trying to access the same data for the first time will page fault and migrate the pages back to GPU memory.
Additionally, while atomic accesses from the device to CPU-resident memory do not always cause
Since unified memory can be accessed from either the host or the device, cudaMemcpy*() relies
on the type of transfer, specified using cudaMemcpyKind, to determine whether the data should be
accessed as a host pointer or a device pointer.
In general, the source data is accessed from the host if cudaMemcpyHostTo* is specified and from
the device if cudaMemcpyDeviceTo* is specified. Similar rules apply to the destination data for cu-
daMemcpy*ToHost and cudaMemcpy*ToDevice respectively.
When using cudaMemset*() with unified memory, the data is always accessed from the device.
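For illustration, a minimal sketch of these rules (the buffer names and the size N are hypothetical) might look like this:

const size_t N = 1024;                     // hypothetical element count

void test_memcpy_kinds() {
    int *managed, *device_only;
    cudaMallocManaged(&managed, N * sizeof(int));
    cudaMalloc(&device_only, N * sizeof(int));

    // The managed source is accessed as a host pointer because the
    // transfer kind is cudaMemcpyHostToDevice.
    cudaMemcpy(device_only, managed, N * sizeof(int), cudaMemcpyHostToDevice);

    // The managed destination is accessed as a device pointer because the
    // transfer kind is cudaMemcpyDeviceToDevice.
    cudaMemcpy(managed, device_only, N * sizeof(int), cudaMemcpyDeviceToDevice);

    // cudaMemset* always accesses unified memory from the device.
    cudaMemset(managed, 0, N * sizeof(int));

    cudaFree(device_only);
    cudaFree(managed);
}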
    // we have signaled the parent, now we read from the shared memory before
    // the parent will prefetch the memory to its device
    read_kernel<<<1, 1>>>(shmem_c, device, false /* is_parent */);
    status = cudaStreamSynchronize(NULL);
    pthread_barrier_wait(barrier);
} else {
    // child and parent both read, then we first wait until child is done reading,
    // then wait again until child has written its message
    printf("Parent read: '%s'\n", shmem_c);
    pthread_barrier_wait(barrier);
    pthread_barrier_wait(barrier);
    printf("After barrier, parent read: '%s'\n", shmem_c);
    // wait for signal from child which reads again from a device kernel
    // then prefetch to (our) device and read again from the device
    pthread_barrier_wait(barrier);
    // here we move the message to GPU
    status = cudaMemPrefetchAsync(shmem, shmem_size, device);
    if (status != cudaSuccess) ERR("Parent: unable to prefetch shared memory: %d\n", int(status));
Note that it is not possible to share memory between different hosts and their devices using this
technique.
Furthermore, note that managed memory pointers cannot be shared between processes. The Unified
Memory system will not correctly manage memory handles associated with these pointers and results
will be undefined if either the child or parent accesses managed data following a fork().
▶ Applications with a large memory footprint should minimize translation-lookaside buffer (TLB)
cache misses.
▶ For systems with direct access to GPU-resident memory from the host, avoid frequent small
writes to GPU-resident memory from the host.
▶ Exploit asynchronous access to system memory by utilizing available resources in parallel if pos-
sible.
▶ Consider tuning your application for the granularity of memory transfers of your system.
To achieve the same level of performance as is possible without using Unified Memory, the application has to guide the Unified Memory driver subsystem into avoiding the aforementioned pitfalls. It is worth noting that the Unified Memory driver subsystem can detect common data access patterns and achieve some of these objectives automatically without application participation. But when the data access patterns are non-obvious, explicit guidance from the application is crucial. CUDA 8.0 introduces useful APIs for providing the runtime with memory usage hints (cudaMemAdvise()) and for explicit prefetching (cudaMemPrefetchAsync()). These tools provide the same capabilities as explicit memory copy and pinning APIs without reverting to the limitations of explicit GPU memory allocation.
Many of the sections for unified memory performance tuning assume prior knowledge on memory
pages and page sizes. This section attempts to define all necessary terms and explain why page sizes
matter for performance.
All current processors with a memory management unit (MMU), including both CPUs and GPUs, divide
the addressable physical memory range into a hierarchy of chunks of different sizes: memory pages,
cache lines and potentially others. Memory pages are the largest of these chunks in the hierarchy at
the hardware level: this means for example that a physical allocation of memory can never be smaller
than the page size of the processor.
Currently, most CPUs, including all x86_64-based architectures, use a page size of 4KiB. ARM-based architectures may support page sizes of 4KiB, 16KiB, 32KiB, and 64KiB, depending on the exact CPU. Finally, all NVIDIA GPUs use a 2MiB page size, significantly larger than the page sizes of CPUs. Note that these sizes are subject to change in future hardware.
Current software memory management typically uses virtual addresses and may define its own page
sizes. The default page size usually corresponds to the physical page size, but larger page sizes may
be emulated, since virtual addresses must be translated into physical addresses for any access in a
virtual memory system anyway.
Large page sizes lead to fewer misses in the caches for address translation, typically referred to as
translation lookaside buffers (TLBs). On the other hand, large page sizes may lead to higher fragmen-
tation in the virtual memory space, and occasional latency spikes in the applications in case a memory
page must be migrated from one physical memory to another.
Because physical pages may change in the future and software can usually emulate larger page sizes,
applications should generally not tune their performance to a given physical page size. However, ap-
plications should be aware of different page sizes, especially in unified memory systems where it can
be beneficial to match the page sizes of memory resident on the CPU and GPU.
The Overview of memory allocators for Unified Memory includes some details on what page sizes var-
ious allocators support. The system page size refers to the default page size of the operating system,
usually the page size supported by the processor for physical allocations.
For more details on how to ensure system-allocated memory uses larger page sizes to match the page
size of the GPU, see Minimizing TLB cache misses.
Data prefetching means migrating data to a processor’s memory and potentially mapping it in that
processor’s page tables before the processor begins accessing that data. The intent of data prefetch-
ing is to avoid faults while also establishing data locality. This is most valuable for applications that
access data primarily from a single processor at any given time. As the accessing processor changes
during the lifetime of the application, the data can be prefetched accordingly to follow the execution
flow of the application. Since work is launched in streams in CUDA, data prefetching is also expected to be a streamed operation, as shown in the following API:
cudaError_t cudaMemPrefetchAsync(const void *devPtr,
                                 size_t count,
                                 int dstDevice,
                                 cudaStream_t stream);
where the memory region specified by the devPtr pointer and count bytes, with devPtr rounded down to the nearest page boundary and count rounded up to the nearest page boundary, is migrated to dstDevice by enqueueing a migration operation in stream. Passing in cudaCpuDeviceId for dstDevice will cause data to be migrated to CPU memory.
Consider a simple code example below:
System Allocator
void test_prefetch_sam(cudaStream_t s) {
    char *data = (char*)malloc(N);
    init_data(data, N);                                       // execute on CPU
    cudaMemPrefetchAsync(data, N, myGpuId, s);                // prefetch to GPU
    mykernel<<<(N + TPB - 1) / TPB, TPB, 0, s>>>(data, N);    // execute on GPU
    cudaMemPrefetchAsync(data, N, cudaCpuDeviceId, s);        // prefetch to CPU
    cudaStreamSynchronize(s);
    use_data(data, N);
    free(data);
}
Managed
void test_prefetch_managed(cudaStream_t s) {
    char *data;
    cudaMallocManaged(&data, N);
    init_data(data, N);                                       // execute on CPU
    cudaMemPrefetchAsync(data, N, myGpuId, s);                // prefetch to GPU
    mykernel<<<(N + TPB - 1) / TPB, TPB, 0, s>>>(data, N);    // execute on GPU
    cudaMemPrefetchAsync(data, N, cudaCpuDeviceId, s);        // prefetch to CPU
    cudaStreamSynchronize(s);
    use_data(data, N);
    cudaFree(data);
}
Without performance hints, the kernel mykernel will fault on first access to data, which creates additional fault-processing overhead and generally slows down the application. By prefetching data in advance it is possible to avoid the page faults and achieve better performance.
This API follows stream ordering semantics, i.e. the migration does not begin until all prior operations
in the stream have completed, and any subsequent operation in the stream does not begin until the
migration has completed.
Data prefetching alone is insufficient when multiple processors need to simultaneously access the
same data. In such scenarios, it’s useful for the application to provide hints on how the data will
actually be used. The following advisory API can be used to specify data usage:
cudaError_t cudaMemAdvise(const void *devPtr,
                          size_t count,
                          enum cudaMemoryAdvise advice,
                          int device);
where advice, specified for data contained in region starting from devPtr address and with the
length of count bytes, rounded to the nearest page boundary, can take the following values:
▶ cudaMemAdviseSetReadMostly: This implies that the data is mostly going to be read from
and only occasionally written to. This allows the driver to create read-only copies of the data in
a processor’s memory when that processor accesses it. Similarly, if cudaMemPrefetchAsync is
called on this region, it will create a read-only copy of the data on the destination processor. When
a processor writes to this data, all copies of the corresponding page are invalidated except for
the one where the write occurred. The device argument is ignored for this advice. On systems
with pageable memory access, note that this advice does not apply to system-allocated memory,
but only managed memory. This advice allows multiple processors to simultaneously access the
same data at maximal bandwidth as illustrated in the following code snippet:
void test_advise_managed(cudaStream_t stream) {
    char *dataPtr;
    size_t dataSize = 64 * TPB; // 16 KiB
    // Allocate memory using malloc or cudaMallocManaged
    cudaMallocManaged(&dataPtr, dataSize);
    // Set the advice on the memory region
    cudaMemAdvise(dataPtr, dataSize, cudaMemAdviseSetReadMostly, myGpuId);
    int outerLoopIter = 0;
    while (outerLoopIter < maxOuterLoopIter) {
        // The data is written to in the outer loop on the CPU
        init_data(dataPtr, dataSize);
        // The data is made available to all GPUs by prefetching.
        // Prefetching here causes read duplication of data instead
        // of data migration
        for (int device = 0; device < maxDevices; device++) {
            cudaMemPrefetchAsync(dataPtr, dataSize, device, stream);
        }
        // The kernel only reads this data in the inner loop
        int innerLoopIter = 0;
        while (innerLoopIter < maxInnerLoopIter) {
            mykernel<<<32, TPB, 0, stream>>>((const char *)dataPtr, dataSize);
            innerLoopIter++;
        }
        outerLoopIter++;
    }
    cudaFree(dataPtr);
}
▶ cudaMemAdviseSetPreferredLocation: This advice sets the preferred location for the data
to be the memory belonging to device. Passing in a value of cudaCpuDeviceId for device sets
the preferred location as CPU memory. Setting the preferred location does not cause data to mi-
grate to that location immediately. Instead, it guides the migration policy when a fault occurs
on that memory region. If the data is already in its preferred location and the faulting proces-
sor can establish a mapping without requiring the data to be migrated, then the migration will
be avoided. On the other hand, if the data is not in its preferred location or if a direct mapping
cannot be established, then it will be migrated to the processor accessing it. It is important to
note that setting the preferred location does not prevent data prefetching done using cudaMem-
PrefetchAsync.
▶ cudaMemAdviseSetAccessedBy: This advice implies that the data will be accessed by device.
This does not cause data migration and has no impact on the location of the data per se. Instead,
it causes the data to always be mapped in the specified processor’s page tables, as long as the
location of the data permits a mapping to be established. If the data gets migrated for any rea-
son, the mappings are updated accordingly. This advice is useful in scenarios where data locality
is not important, but avoiding faults is. Consider for example a system containing multiple GPUs
with peer-to-peer access enabled, where the data located on one GPU is occasionally accessed
by other GPUs. In such scenarios, migrating data over to the other GPUs is not as important
because the accesses are infrequent and the overhead of migration may be too high. But pre-
venting faults can still help improve performance, and so having a mapping set up in advance is
useful. Note that on CPU access of this data, the data may be migrated to CPU memory because
the CPU cannot access GPU memory directly. Any GPU that had the cudaMemAdviseSetAccessedBy flag set for this data will now have its mapping updated to point to the page in CPU
memory. On hardware coherent systems, system-allocated memory is mapped by default on the
device’s page tables, but managed memory is currently not mapped by default.
Each advice can also be unset by using one of the following values: cudaMemAdviseUnsetReadMostly, cudaMemAdviseUnsetPreferredLocation, and cudaMemAdviseUnsetAccessedBy.
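As an illustration of combining these hints, a minimal sketch (the device id, buffer name, and size are hypothetical) of keeping a buffer resident on the GPU while still letting the CPU map it might look like this:

void test_hints() {
    int device = 0;
    cudaGetDevice(&device);

    float *buf;
    size_t bytes = 1 << 20;                 // hypothetical size
    cudaMallocManaged(&buf, bytes);

    // Prefer GPU residency for the buffer.
    cudaMemAdvise(buf, bytes, cudaMemAdviseSetPreferredLocation, device);
    // Ask for a CPU mapping so occasional host accesses avoid migrations
    // where the system supports direct access.
    cudaMemAdvise(buf, bytes, cudaMemAdviseSetAccessedBy, cudaCpuDeviceId);

    // Optionally populate the pages on the GPU up front.
    cudaMemPrefetchAsync(buf, bytes, device, 0);
    cudaFree(buf);
}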
A program can query memory range attributes assigned through cudaMemAdvise or cudaMem-
PrefetchAsync by using the following API:
cudaMemRangeGetAttribute(void *data,
                         size_t dataSize,
                         enum cudaMemRangeAttribute attribute,
                         const void *devPtr,
                         size_t count);
This function queries an attribute of the memory range starting at devPtr with a size of count bytes.
The memory range must refer to managed memory allocated via cudaMallocManaged or declared via
__managed__ variables. It is possible to query the following attributes:
▶ cudaMemRangeAttributeReadMostly: the result returned will be 1 if all pages in the given
memory range have read-duplication enabled, or 0 otherwise.
▶ cudaMemRangeAttributePreferredLocation: the result returned will be a GPU device id or
cudaCpuDeviceId if all pages in the memory range have the corresponding processor as their
preferred location; otherwise cudaInvalidDeviceId will be returned. An application can use this query API to make decisions about staging data through the CPU or the GPU depending on the preferred location attribute of the managed pointer. Note that the actual location of the pages in
the memory range at the time of the query may be different from the preferred location.
▶ cudaMemRangeAttributeAccessedBy: will return the list of devices that have that advice set for that memory range.
▶ cudaMemRangeAttributeLastPrefetchLocation: will return the last location to which all
pages in the memory range were prefetched explicitly using cudaMemPrefetchAsync. Note that
this simply returns the last location that the application requested to prefetch the memory range
to. It gives no indication as to whether the prefetch operation to that location has completed or
even begun.
Additionally, multiple attributes can be queried by using corresponding cudaMemRangeGetAt-
tributes function.
Note that on systems with pageable memory access, this currently only applies to managed memory,
but not system-allocated memory.
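For example, a short sketch (the buffer is hypothetical) of querying the preferred location of a managed range could look as follows:

void query_preferred_location() {
    int *managed;
    size_t bytes = 1 << 20;                 // hypothetical size
    cudaMallocManaged(&managed, bytes);
    cudaMemAdvise(managed, bytes, cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);

    int preferred = 0;
    cudaMemRangeGetAttribute(&preferred, sizeof(preferred),
                             cudaMemRangeAttributePreferredLocation,
                             managed, bytes);
    if (preferred == cudaCpuDeviceId)
        printf("Preferred location: CPU\n");
    else if (preferred == cudaInvalidDeviceId)
        printf("No single preferred location set\n");
    else
        printf("Preferred location: GPU %d\n", preferred);
    cudaFree(managed);
}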
Since systems with pageable memory access contain hardware support for address translations between host and device, various allocators may be used to allocate Unified Memory. The following table shows an overview of a selection of allocators with their respective features. Note that all information in this section is subject to change in future CUDA versions, and a few of the details we document are implementation details.
Table Overview of allocators with (partial) unified memory support shows the difference in semantics
of several allocators that may be considered to allocate data accessible from multiple processors at
a time, including host and device. For additional details about cudaMemPoolCreate, see the Stream
Ordered Memory Allocator section; for additional details about cuMemCreate, see the Virtual Memory Management section.
In order to simplify the table above, we leave out some details around cudaMalloc, described in the Device Memory section, and cudaHostAlloc / cudaMallocHost, described in the Page-Locked Host Memory section.
In addition to the table above, we provide more information about additional allocators in the list below.
Note that all items below are subject to change in future CUDA versions:
▶ System allocators such as mmap allow sharing the memory between processes using the
MAP_SHARED flag. This is supported in CUDA and can be used to share memory between dif-
ferent devices connected to the same host. However, sharing memory across multiple hosts and their devices is currently not supported. See Interprocess Communication (IPC) with Unified Memory for details.
▶ cudaHostAlloc and cudaMallocHost currently allocate memory pinned to the host (always
placed in CPU memory), accessible from both host and device, but which does not migrate to the
device. See also Page-Locked Host Memory for details on this type of memory. The same applies
to memory registered with CUDA through cudaHostRegister: currently, these memory ranges
will not migrate to the device, but are accessible to both host and device.
▶ On systems with Hardware Coherency (ATS) where device memory is exposed as a NUMA domain
to the system, special allocators such as numa_alloc_on_node may be used to pin memory to
the given NUMA node, either host or device. This memory is accessible from both host and device
and does not migrate. Similarly, mbind can be used to pin memory to the given NUMA node(s),
and can cause file-backed memory to be placed on the given NUMA node(s) before it is first
accessed.
▶ For access to Unified Memory or other CUDA memory through a network on multiple hosts, con-
sult the documentation of the communication library used, for example NCCL, NVSHMEM, Open-
MPI, UCX, etc.
In general, atomic accesses can be performed by the CPU using std::atomic or std::atomic_ref
from the C++ standard library header <atomic>. When allocating std::atomic through a specialized
allocator such as cudaMallocManaged, one may call std::atomic_init to initialize the allocated
memory.
Similarly, on the device, atomic accesses can be performed using cuda::atomic or
cuda::atomic_ref from the CUDA toolkit as defined in <cuda∕atomic>. When allocating
cuda::atomic through a specialized allocator such as cudaMallocManaged, to initialize the
allocated memory, you can call cuda::std::atomic_init or use cuda::atomic_ref.
On systems with pageable memory access, since device threads can access system-allocated memory
like any other type of memory, atomics can easily be shared between host and device threads, using
cuda::atomic*. Note that atomics of device threads using cuda::atomic* and host threads using
std::atomic* or other variants such as GCC atomic builtins cannot be mixed. If most accesses are
made by device threads, we recommend that the physical backing memory be placed in device memory, see Data Usage Hints. Note that atomics at system-wide scope can also be performed with the C-style atomicAdd_system function, see Atomic Functions.
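A minimal sketch of this pattern on a system with pageable memory access (the kernel name and counter are hypothetical) might look like this:

#include <cuda/atomic>
#include <cstdio>

// Hypothetical kernel: each device thread increments a system-scope counter.
__global__ void bump(int* counter) {
    cuda::atomic_ref<int, cuda::thread_scope_system> ref(*counter);
    ref.fetch_add(1, cuda::memory_order_relaxed);
}

int main() {
    int* counter = (int*)malloc(sizeof(int));   // system-allocated memory
    *counter = 0;

    bump<<<1, 128>>>(counter);

    // The host participates through the same cuda::atomic_ref type.
    cuda::atomic_ref<int, cuda::thread_scope_system> ref(*counter);
    ref.fetch_add(1, cuda::memory_order_relaxed);

    cudaDeviceSynchronize();
    printf("counter = %d\n", *counter);         // expect 129
    free(counter);
    return 0;
}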
On systems without Hardware Coherency, atomic accesses from the device to file-backed host mem-
ory are not supported. The following example code is valid on systems with Hardware Coherency but
undefined on other systems:
#include <cuda/atomic>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>

int main() {
    // this will be closed/deleted by default on exit
    FILE* tmp_file = tmpfile64();
    // need to allocate space in the file, we do this with posix_fallocate here
    int status = posix_fallocate(fileno(tmp_file), 0, 4096);
    if (status != 0) ERR("Failed to allocate space in temp file\n");
    void* ptr = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fileno(tmp_file), 0);
Furthermore, note that on systems without Hardware Coherency, atomic accesses to unified memory may incur page faults which can lead to significant latencies. However, devices with compute capability 9.0 and higher support atomic accesses to host memory without page faults for some types of atomics.
On systems with Hardware Coherency, atomics from the device to host memory generally do not re-
quire page faults unless a page fault is necessary for any type of memory access.
The GPU has direct access to the system’s page table and to CPU-resident memory. For such a direct
access, both the GPU’s MMU and the CPU’s MMU need to coordinate to translate the address and
retrieve the requested data. Specifically, both MMUs contain TLBs to cache address translations. Fre-
quent accesses to a wide range of addresses can lead to TLB thrashing, which may severely decrease
performance of the memory system. The following steps can be taken to identify and mitigate TLB
cache misses:
1. Profilers such as NVIDIA Nsight Systems and Linux perf can be used to identify TLB misses for
the CPU’s TLB.
2. If an application has a high TLB miss rate, a first step can be to try to reduce the working memory
size of kernels accessing memory, increasing the effectiveness of TLBs.
3. A second step can be to try to improve memory access patterns: this can be difficult for parallel
programs running on the GPU, but access patterns such that all active threads request similar
addresses will increase the effectiveness of TLBs.
4. A third step can be to increase the page size of the allocation to match the page size of the device,
see Page Sizes for details. Managed memory currently always allocates the GPU-default page
size. However, for system-allocated memory, the page size can be increased using transparent
huge pages (THP), HugeTLB or the VMM APIs.
In order to determine whether huge pages are necessary for performance improvements or not, it
is useful to be able to determine whether the application produces many TLB misses, see also the
section on Page Sizes for details around page sizes and impact on TLB caches.
Determining TLB cache misses for the CPU is easy on Linux-based systems through perf stat, and
is a good proxy for whether the GPU experiences TLB misses for the same memory.
Tracking page fault events (faults), TLB load and miss events (dTLB-loads, dTLB-load-misses),
and LLC events (cache-misses, cache-references) cover enough information to help you decide
whether you need huge pages. All trackable events can be listed using perf list.
Below is sample output from perf stat:
perf stat -e 'faults,dTLB-loads,dTLB-load-misses,cache-misses,cache-references' <EXECUTABLE> <ARGUMENTS>

         70,080      faults:u
     34,285,589      dTLB-loads:u
     30,657,940      dTLB-load-misses:u        #   89.42% of all dTLB cache hits
     51,676,464      cache-misses:u            #   44.81% of all cache refs
    115,346,452      cache-references:u
...
When you encounter significant TLB miss to LLC miss ratios on the CPU, we recommend that you
explore the potential performance improvement by using huge pages.
Note: On ARM-based systems, the default system page size is typically 16KiB or 64KiB, meaning that the default transparent huge page size may not match the GPU's default page size. See Page Sizes for details. In this case, it is recommended to enable huge pages through HugeTLB.
Note that Linux THP support does not guarantee the automatic allocation of huge pages. One way to increase the likelihood of allocating huge pages is to set the top-pad to the huge-page size in bytes. If the desired huge page size is 2MiB, for example, one can use the environment variable MALLOC_TOP_PAD_=2097152 or the tunables environment variable GLIBC_TUNABLES=glibc.malloc.hugetlb=1:glibc.malloc.top_pad=2097152. These tunables apply to all libc-based allocation functions, including calloc, posix_memalign, and others; see the libc Memory Allocation Tunables documentation for further details. The following example shows how to manually increase the likelihood of allocating huge pages by using posix_memalign to obtain allocations aligned to the huge page size directly:
void *ptr = nullptr;
size_t nbytes = 1 << 30; // 1GiB
// allocation usually backed by huge pages on systems with THP enabled
ptr = malloc(nbytes);
...

void *ptr2 = nullptr;
size_t huge_page_size = 1 << 21; // 2MiB
// allocation with posix_memalign and madvise can help ensuring
// the allocated memory is actually backed by huge pages
posix_memalign(&ptr2, huge_page_size, nbytes);
madvise(ptr2, nbytes, MADV_HUGEPAGE);
Refer to the Linux kernel documentation on THP for more information about THP in general.
Note that THP can be set up through shared memory files, too, but this requires
Linux provides the HugeTLB mechanism to pre-allocate and obtain huge pages of any size through
huge page pools. Usually, these pools must be set up at boot time through kernel boot parameters,
but can be mounted as a file-system, too, see further below in this section.
You can allocate huge pages using mmap or shm_open / shmget along with the right flags: the allocated memory size must be perfectly divisible by the page size, and the MAP_HUGETLB flag has to be set. The code below shows how to achieve this for 2MiB page sizes.
size_t huge_page_size = 1 << 21; // 2MiB
size_t npages = (1 << 30) / huge_page_size + 1;
void *ptr = mmap(NULL, npages * huge_page_size,
                 PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANON | MAP_HUGETLB | MAP_HUGE_2MB,
                 -1, 0);
Refer to the Linux kernel documentation on HugeTLB for more information about how huge pages are
obtained from designated page pools.
HugeTLB pages can also be obtained in shared memory files or through a mounted file-system, see
Using HugeTLB in the Linux kernel documentation. This is particularly useful for workloads within
containers where such a file-system can be mounted when starting the container.
If the host accesses Unified Memory, cache misses may introduce more traffic than expected between
host and device. Many CPU architectures require all memory operations to go through the cache
hierarchy, including writes. If system memory is resident on the GPU, this means that frequent writes
by the CPU to this memory can cause cache misses, thus transferring the data first from the GPU to
CPU before writing the actual value into the requested memory range. On software coherent systems,
this may introduce additional page faults, while on hardware coherent systems, it may cause higher
latencies between CPU operations. Thus, in order to share data produced by the host with the device,
consider writing to CPU-resident memory and reading the values directly from the device. The code
below shows how to achieve this with unified memory.
Managed
int* data;
size_t data_size = sizeof(int);
cudaMallocManaged(&data, data_size);
// ensure that data stays local to the host and avoid faults
cudaMemAdvise(data, data_size, cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);
cudaMemAdvise(data, data_size, cudaMemAdviseSetAccessedBy, cudaCpuDeviceId);
If an application needs to share results from work on the device with the host, there are several possible
options:
1. The device writes its result to GPU-resident memory, the result is transferred using cudaMem-
cpy*, and the host reads the transferred data.
2. The device directly writes its result to CPU-resident memory, and the host reads that data.
3. The device writes to GPU-resident memory, and the host directly accesses that data.
If independent work can be scheduled on the device while the result is transferred/accessed by the
host, options 1 or 3 are preferred. If the device is starved until the host has accessed the result, option
2 might be preferred. This is because the device can generally write at a higher bandwidth than the
host can read, unless many host threads are used to read the data.
1. Explicit Copy
Finally, in option 1 above, instead of using cudaMemcpy* to transfer data, one could use a host or device kernel to perform this transfer explicitly. For contiguous data, using the CUDA copy-engines
is preferred because operations performed by copy-engines can be overlapped with work on both the
host and device. Copy-engines are used in all cudaMemcpy* and cudaMemPrefetchAsync APIs. For
the same reason, option 1 is preferred over option 3 for large enough data: if both host and device
perform work that does not saturate their respective memory systems, the transfer can be performed
by the copy-engines concurrently with the work performed by both host and device.
We recommend explicitly specifying the direction of transfer with cudaMemcpy*, see memcpy behavior.
Copy-engines are generally used for both transfers between host and device as well as between peer
devices within an NVLink-connected system. Due to the limited total number of copy-engines, some systems may achieve lower bandwidth with cudaMemcpy* than with using the device to explicitly perform the transfer. In such a case, if the transfer is in the critical path of the application, it may be preferable to use an explicit device-based transfer.
On software coherent systems, any access to unified memory may cause a page fault, see System
allocator. Thus, data points accessed in close succession should be located in the same memory page.
On the other hand, on hardware coherent systems, unified memory is usually not migrated automati-
cally, unless the backing memory for a physical allocation is full. Thus, similar rules as for any memory
access apply: data points accessed in close succession should be located in the same cache line.
GPU architectures of compute capability lower than 6.0 do not support fine-grained movement of the
managed data to GPU on-demand. Whenever a GPU kernel is launched all managed memory generally
has to be transferred to GPU memory to avoid faulting on memory access. With compute capability
6.x a new GPU page faulting mechanism is introduced that provides more seamless Unified Memory
functionality. Combined with the system-wide virtual address space, page faulting provides several
benefits. First, page faulting means that the CUDA system software doesn’t need to synchronize all
managed memory allocations to the GPU before each kernel launch. If a kernel running on the GPU
accesses a page that is not resident in its memory, it faults, allowing the page to be automatically mi-
grated to the GPU memory on-demand. Alternatively, the page may be mapped into the GPU address
space for access over the PCIe or NVLink interconnects (mapping on access can sometimes be faster
than migration). Note that Unified Memory is system-wide: GPUs (and CPUs) can fault on and migrate
memory pages either from CPU memory or from the memory of other GPUs in the system.
Devices of compute capability lower than 6.0 cannot allocate more managed memory than the physical
size of GPU memory.
22.3.2.3 Multi-GPU
On systems with devices of compute capabilities lower than 6.0 managed allocations are automati-
cally visible to all GPUs in a system via the peer-to-peer capabilities of the GPUs. Managed memory
allocations behave similarly to unmanaged memory allocated using cudaMalloc(): the currently active device is the home for the physical allocation, but other GPUs in the system will access the memory at
reduced bandwidth over the PCIe bus.
On Linux the managed memory is allocated in GPU memory as long as all GPUs that are actively being used by a program have peer-to-peer support. If at any time the application starts using a GPU
that doesn’t have peer-to-peer support with any of the other GPUs that have managed allocations on
them, then the driver will migrate all managed allocations to system memory. In this case, all GPUs
experience PCIe bandwidth restrictions.
On Windows, if peer mappings are not available (for example, between GPUs of different architectures),
then the system will automatically fall back to using zero-copy memory, regardless of whether both
GPUs are actually used by a program. If only one GPU is actually going to be used, it is necessary to
set the CUDA_VISIBLE_DEVICES environment variable before launching the program. This constrains
which GPUs are visible and allows managed memory to be allocated in GPU memory.
Alternatively, on Windows users can also set CUDA_MANAGED_FORCE_DEVICE_ALLOC to a non-zero
value to force the driver to always use device memory for physical storage. When this environment vari-
able is set to a non-zero value, all devices used in that process that support managed memory have to
be peer-to-peer compatible with each other. The error cudaErrorInvalidDevice will be returned if a device that supports managed memory is used and it is not peer-to-peer compatible with any of the other managed-memory-supporting devices that were previously used in that process, even if cudaDeviceReset has been called on those devices. These environment variables are described in CUDA Environment Variables. Note that starting from CUDA 8.0, CUDA_MANAGED_FORCE_DEVICE_ALLOC has no effect on Linux operating systems.
Simultaneous access to managed memory on devices of compute capability lower than 6.0 is not pos-
sible, because coherence could not be guaranteed if the CPU accessed a Unified Memory allocation
while a GPU kernel was active.
To ensure coherency on pre-6.x GPU architectures, the Unified Memory programming model puts con-
straints on data accesses while both the CPU and GPU are executing concurrently. In effect, the GPU
has exclusive access to all managed data while any kernel operation is executing, regardless of whether
the specific kernel is actively using the data. When managed data is used with cudaMemcpy*() or
cudaMemset*(), the system may choose to access the source or destination from the host or the
device, which will put constraints on concurrent CPU access to that data while the cudaMemcpy*() or
cudaMemset*() is executing. See Memcpy()/Memset() Behavior With Managed Memory for further
details.
It is not permitted for the CPU to access any managed allocations or variables while the GPU is ac-
tive for devices with concurrentManagedAccess property set to 0. On these systems concurrent
CPU/GPU accesses, even to different managed memory allocations, will cause a segmentation fault
because the page is considered inaccessible to the CPU.
__device__ __managed__ int x, y=2;
__global__ void kernel() {
    x = 10;
}
int main() {
    kernel<<< 1, 1 >>>();
    y = 20;  // Error on GPUs not supporting concurrent access
    cudaDeviceSynchronize();
    return 0;
}
In the example above, the GPU program kernel is still active when the CPU touches y. (Note how it occurs before cudaDeviceSynchronize().) The code runs successfully on devices of compute capability 6.x
due to the GPU page faulting capability which lifts all restrictions on simultaneous access. However,
such memory access is invalid on pre-6.x architectures even though the CPU is accessing different
data than the GPU. The program must explicitly synchronize with the GPU before accessing y:
__device__ __managed__ int x, y=2;
__global__ void kernel() {
    x = 10;
}
int main() {
    kernel<<< 1, 1 >>>();
    cudaDeviceSynchronize();
    y = 20;  // Success on GPUs not supporting concurrent access
    return 0;
}
As this example shows, on systems with pre-6.x GPU architectures, a CPU thread may not access any
managed data in between performing a kernel launch and a subsequent synchronization call, regard-
less of whether the GPU kernel actually touches that same data (or any managed data at all). The mere
potential for concurrent CPU and GPU access is sufficient for a process-level exception to be raised.
Note that if memory is dynamically allocated with cudaMallocManaged() or cuMemAllocManaged()
while the GPU is active, the behavior of the memory is unspecified until additional work is launched or
the GPU is synchronized. Attempting to access the memory on the CPU during this time may or may
not cause a segmentation fault. This does not apply to memory allocated using the flag cudaMemAt-
tachHost or CU_MEM_ATTACH_HOST.
Note that explicit synchronization is required even if the kernel runs quickly and finishes before the CPU touches y in the above example. Unified Memory uses logical activity to determine whether the GPU is
idle. This aligns with the CUDA programming model, which specifies that a kernel can run at any time
following a launch and is not guaranteed to have finished until the host issues a synchronization call.
Any function call that logically guarantees the GPU completes its work is valid. This includes cudaDe-
viceSynchronize(); cudaStreamSynchronize() and cudaStreamQuery() (provided it returns
cudaSuccess and not cudaErrorNotReady) where the specified stream is the only stream still exe-
cuting on the GPU; cudaEventSynchronize() and cudaEventQuery() in cases where the specified
event is not followed by any device work; as well as uses of cudaMemcpy() and cudaMemset() that
are documented as being fully synchronous with respect to the host.
Dependencies created between streams will be followed to infer completion of other streams by syn-
chronizing on a stream or event. Dependencies can be created via cudaStreamWaitEvent() or im-
plicitly when using the default (NULL) stream.
It is legal for the CPU to access managed data from within a stream callback, provided no other stream
that could potentially be accessing managed data is active on the GPU. In addition, a callback that is
not followed by any device work can be used for synchronization: for example, by signaling a condition
variable from inside the callback; otherwise, CPU access is valid only for the duration of the callback(s).
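As an illustration of the callback-based approach, a minimal sketch (the producer kernel, callback, and wrapper names are hypothetical) might look like this:

// Hypothetical callback: the CPU may touch managed data here, provided no other
// stream that could access managed data is active on the GPU at this point.
void CUDART_CB host_consume(cudaStream_t stream, cudaError_t status, void* userData) {
    int* managed = (int*)userData;
    printf("value produced by the GPU: %d\n", *managed);
}

void run(int* managed) {                       // managed: allocated with cudaMallocManaged
    cudaStream_t s;
    cudaStreamCreate(&s);
    produce<<<1, 1, 0, s>>>(managed);          // hypothetical kernel writing *managed
    cudaStreamAddCallback(s, host_consume, managed, 0);
    cudaStreamSynchronize(s);
    cudaStreamDestroy(s);
}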
There are several important points of note:
▶ It is always permitted for the CPU to access non-managed zero-copy data while the GPU is active.
▶ The GPU is considered active when it is running any kernel, even if that kernel does not make use
of managed data. If a kernel might use data, then access is forbidden, unless device property
concurrentManagedAccess is 1.
▶ There are no constraints on concurrent inter-GPU access of managed memory, other than those
that apply to multi-GPU access of non-managed memory.
▶ There are no constraints on concurrent GPU kernels accessing managed data.
Note how the last point allows for races between GPU kernels, as is currently the case for non-managed
GPU memory. As mentioned previously, managed memory functions identically to non-managed
memory from the perspective of the GPU. The following code example illustrates these points:
int main() {
    cudaStream_t stream1, stream2;
    cudaStreamCreate(&stream1);
    cudaStreamCreate(&stream2);
    int *non_managed, *managed, *also_managed;
    cudaMallocHost(&non_managed, 4);  // Non-managed, CPU-accessible memory
    cudaMallocManaged(&managed, 4);
    cudaMallocManaged(&also_managed, 4);
    // Point 1: CPU can access non-managed data.
    kernel<<< 1, 1, 0, stream1 >>>(managed);
    *non_managed = 1;
    // Point 2: CPU cannot access any managed data while GPU is busy,
    //          unless concurrentManagedAccess = 1
    // Note we have not yet synchronized, so "kernel" is still active.
    *also_managed = 2;  // Will issue segmentation fault
    // Point 3: Concurrent GPU kernels can access the same data.
    kernel<<< 1, 1, 0, stream2 >>>(managed);
    // Point 4: Multi-GPU concurrent access is also permitted.
    cudaSetDevice(1);
    kernel<<< 1, 1 >>>(managed);
    return 0;
}
22.3.2.4.3 Managing Data Visibility and Concurrent CPU + GPU Access with Streams
Until now it was assumed that for SM architectures before 6.x: 1) any active kernel may use any man-
aged memory, and 2) it was invalid to use managed memory from the CPU while a kernel is active.
Here we present a system for finer-grained control of managed memory designed to work on all de-
vices supporting managed memory, including older architectures with concurrentManagedAccess
equal to 0.
The CUDA programming model provides streams as a mechanism for programs to indicate dependence
and independence among kernel launches. Kernels launched into the same stream are guaranteed to
execute consecutively, while kernels launched into different streams are permitted to execute con-
currently. Streams describe independence between work items and hence allow potentially greater
efficiency through concurrency.
Unified Memory builds upon the stream-independence model by allowing a CUDA program to explicitly
associate managed allocations with a CUDA stream. In this way, the programmer indicates the use of
data by kernels based on whether they are launched into a specified stream or not. This enables op-
portunities for concurrency based on program-specific data access patterns. The function to control
this behavior is:
cudaError_t cudaStreamAttachMemAsync(cudaStream_t stream,
                                     void *ptr,
                                     size_t length=0,
                                     unsigned int flags=0);
The cudaStreamAttachMemAsync() function associates length bytes of memory starting from ptr
with the specified stream. (Currently, length must always be 0 to indicate that the entire region
should be attached.) Because of this association, the Unified Memory system allows CPU access to
this memory region so long as all operations in stream have completed, regardless of whether other
streams are active. In effect, this constrains exclusive ownership of the managed memory region by
an active GPU to per-stream activity instead of whole-GPU activity.
Most importantly, if an allocation is not associated with a specific stream, it is visible to all running
kernels regardless of their stream. This is the default visibility for a cudaMallocManaged() allocation
or a __managed__ variable; hence, the simple-case rule that the CPU may not touch the data while
any kernel is running.
By associating an allocation with a specific stream, the program makes a guarantee that only kernels
launched into that stream will touch that data. No error checking is performed by the Unified Memory
system: it is the programmer’s responsibility to ensure that guarantee is honored.
In addition to allowing greater concurrency, the use of cudaStreamAttachMemAsync() can (and typ-
ically does) enable data transfer optimizations within the Unified Memory system that may affect
latencies and other overhead.
Associating data with a stream allows fine-grained control over CPU + GPU concurrency, but what data
is visible to which streams must be kept in mind when using devices of compute capability lower than
6.0. Looking at the earlier synchronization example:
__device__ __managed__ int x, y=2;
__global__ void kernel() {
    x = 10;
}
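The continuation of this example attaches y to host accessibility; a minimal sketch of how it might continue, consistent with the description below, is:

int main() {
    cudaStream_t stream1;
    cudaStreamCreate(&stream1);
    // Associate "y" with host accessibility so the CPU may touch it at any time.
    cudaStreamAttachMemAsync(stream1, &y, 0, cudaMemAttachHost);
    cudaDeviceSynchronize();           // Wait for the attachment to take effect.
    kernel<<< 1, 1, 0, stream1 >>>();  // Launches into stream1; only touches x.
    y = 20;                            // Success: "y" is host-attached, so the CPU
                                       // may access it while the GPU is active.
    return 0;
}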
Here we explicitly associate y with host accessibility, thus enabling access at all times from the CPU.
(As before, note the absence of cudaDeviceSynchronize() before the access.) Accesses to y by
the GPU running kernel will now produce undefined results.
Note that associating a variable with a stream does not change the association of any other variable. For example, associating x with stream1 does not ensure that only x is accessed by kernels launched in stream1; thus, an error is caused by this code:
__device__ __managed__ int x, y=2;
__global__ void kernel() {
    x = 10;
}
int main() {
    cudaStream_t stream1;
    cudaStreamCreate(&stream1);
    cudaStreamAttachMemAsync(stream1, &x);  // Associate "x" with stream1.
    cudaDeviceSynchronize();                // Wait for "x" attachment to occur.
    kernel<<< 1, 1, 0, stream1 >>>();       // Note: Launches into stream1.
    y = 20;                                 // ERROR: "y" is still associated globally
                                            // with all streams by default
    return 0;
}
Note how the access to y will cause an error because, even though x has been associated with a stream,
we have told the system nothing about who can see y. The system therefore conservatively assumes
that kernel might access it and prevents the CPU from doing so.
The primary use for cudaStreamAttachMemAsync() is to enable independent task parallelism using
CPU threads. Typically in such a program, a CPU thread creates its own stream for all work that it
generates because using CUDA’s NULL stream would cause dependencies between threads.
The default global visibility of managed data to any GPU stream can make it difficult to avoid interac-
tions between CPU threads in a multi-threaded program. Function cudaStreamAttachMemAsync()
is therefore used to associate a thread’s managed allocations with that thread’s own stream, and the
association is typically not changed for the life of the thread.
Such a program would simply add a single call to cudaStreamAttachMemAsync() to use unified mem-
ory for its data accesses:
// This function performs some task, in its own private stream.
void run_task(int *in, int *out, int length) {
    // Create a stream for us to use.
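    // (A sketch of how the body might continue, consistent with the description
    //  that follows; the transform/convert kernels and host_process() are hypothetical.)
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Allocate managed data that is initially invisible to device code
    // (cudaMemAttachHost), then attach it to our private stream so that only
    // work in this stream can touch it.
    int *data;
    cudaMallocManaged((void **)&data, length * sizeof(int), cudaMemAttachHost);
    cudaStreamAttachMemAsync(stream, data);
    cudaStreamSynchronize(stream);

    // Iterate on the data, using both host and device.
    for (int i = 0; i < 3; i++) {                                  // hypothetical iteration count
        transform<<< 100, 256, 0, stream >>>(in, data, length);    // hypothetical kernel
        cudaStreamSynchronize(stream);
        host_process(data, length);                                // CPU uses managed data
        convert<<< 100, 256, 0, stream >>>(out, data, length);     // hypothetical kernel
    }

    cudaStreamSynchronize(stream);
    cudaStreamDestroy(stream);
    cudaFree(data);
}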
In this example, the allocation-stream association is established just once, and then data is used re-
peatedly by both the host and device. The result is much simpler code than occurs with explicitly
copying data between host and device, although the result is the same.
In the previous example cudaMallocManaged() specifies the cudaMemAttachHost flag, which cre-
ates an allocation that is initially invisible to device-side execution. (The default allocation would be
visible to all GPU kernels on all streams.) This ensures that there is no accidental interaction with an-
other thread’s execution in the interval between the data allocation and when the data is acquired for
a specific stream.
Without this flag, a new allocation would be considered in-use on the GPU if a kernel launched by
another thread happens to be running. This might impact the thread’s ability to access the newly
allocated data from the CPU (for example, within a base-class constructor) before it is able to explicitly
attach it to a private stream. To enable safe independence between threads, therefore, allocations
should be made specifying this flag.
Note: An alternative would be to place a process-wide barrier across all threads after the allocation
has been attached to the stream. This would ensure that all threads complete their data/stream as-
sociations before any kernels are launched, avoiding the hazard. A second barrier would be needed
before the stream is destroyed because stream destruction causes allocations to revert to their de-
fault visibility. The cudaMemAttachHost flag exists both to simplify this process, and because it is not
always possible to insert global barriers where required.
See Memcpy()/Memset() Behavior With Unified Memory for a general overview of cudaMemcpy* / cu-
daMemset* behavior on devices with concurrentManagedAccess set. On devices where concur-
rentManagedAccess is not set, the following rules apply:
If cudaMemcpyHostTo* is specified and the source data is unified memory, then it will be accessed
from the host if it is coherently accessible from the host in the copy stream (1); otherwise it will be ac-
cessed from the device. Similar rules apply to the destination when cudaMemcpy*ToHost is specified
and the destination is unified memory.
If cudaMemcpyDeviceTo* is specified and the source data is unified memory, then it will be accessed
from the device. The source must be coherently accessible from the device in the copy stream (2);
otherwise, an error is returned. Similar rules apply to the destination when cudaMemcpy*ToDevice is
specified and the destination is unified memory.
If cudaMemcpyDefault is specified, then unified memory will be accessed from the host either if it
cannot be coherently accessed from the device in the copy stream (2) or if the preferred location for
the data is cudaCpuDeviceId and it can be coherently accessed from the host in the copy stream (1);
otherwise, it will be accessed from the device.
When using cudaMemset*() with unified memory, the data must be coherently accessible from the
device in the stream being used for the cudaMemset() operation (2); otherwise, an error is returned.
When data is accessed from the device either by cudaMemcpy* or cudaMemset*, the stream of opera-
tion is considered to be active on the GPU. During this time, any CPU access of data that is associated
with that stream or data that has global visibility, will result in a segmentation fault if the GPU has
a zero value for the device attribute concurrentManagedAccess. The program must synchronize
appropriately to ensure the operation has completed before accessing any associated data from the
CPU.
1. Coherently accessible from the host in a given stream means that the memory neither has global
visibility nor is it associated with the given stream.
2. Coherently accessible from the device in a given stream means that the memory either has global
visibility or is associated with the given stream.
23.2.1. Driver
Lazy Loading requires the R515+ user-mode library, but it supports Forward Compatibility, meaning it can run on top of older kernel-mode drivers.
Without the R515+ user-mode library, Lazy Loading is not available in any shape or form, even if the toolkit version is 11.7+.
23.2.2. Toolkit
Lazy Loading was introduced in CUDA 11.7, and received a significant upgrade in CUDA 11.8.
If your application uses the CUDA Runtime, then in order to see benefits from Lazy Loading your application must use the 11.7+ CUDA Runtime.
As the CUDA Runtime is usually linked statically into programs and libraries, this means that you have to recompile your program with the CUDA 11.7+ toolkit and use CUDA 11.7+ libraries. Otherwise you will not see the benefits of Lazy Loading, even if your driver version supports it.
If only some of your libraries are 11.7+, you will only see the benefits of Lazy Loading in those libraries. Other libraries will still load everything eagerly.
23.2.3. Compiler
Lazy Loading does not require any compiler support. Both SASS and PTX compiled with pre-11.7 com-
pilers can be loaded with Lazy Loading enabled, and will see full benefits of the feature. However, 11.7+
CUDA Runtime is still required, as described above.
int main() {
    CUmoduleLoadingMode mode;

    assert(CUDA_SUCCESS == cuInit(0));
    assert(CUDA_SUCCESS == cuModuleGetLoadingMode(&mode));

    std::cout << "CUDA Module Loading Mode is " << ((mode == CU_MODULE_LAZY_LOADING) ? "lazy" : "eager") << std::endl;
    return 0;
}
23.5.2. Allocators
Lazy Loading delays loading code from initialization phase of the program closer to execution phase.
Loading code onto the GPU requires memory allocation.
If your application tries to allocate the entire VRAM on startup, e.g. to use it for its own allocator, then it
might turn out that there will be no more memory left to load the kernels. This is despite the fact that
overall Lazy Loading frees up more memory for the user. CUDA will need to allocate some memory to
load each kernel, which usually happens at the first launch of each kernel. If your application allocator greedily allocates everything, CUDA will fail to allocate that memory.
Possible solutions:
▶ use cudaMallocAsync() instead of an allocator that allocates the entire VRAM on startup
▶ add some buffer to compensate for the delayed loading of kernels
▶ preload all kernels that will be used in the program before trying to initialize your allocator (a sketch follows this list)
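For the last option, a minimal sketch, assuming that querying a kernel's attributes with cudaFuncGetAttributes() is enough to force its module to load (my_kernel and the allocator setup are hypothetical), could look like this:

__global__ void my_kernel(int* data);    // hypothetical kernel used later by the program

void init() {
    // Force the module containing my_kernel to load now, before the
    // allocator grabs the remaining device memory.
    cudaFuncAttributes attr;
    cudaFuncGetAttributes(&attr, my_kernel);

    // Now it is safe to let a greedy allocator take the rest of the VRAM.
    init_my_greedy_allocator();          // hypothetical allocator setup
}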
23.5.3. Autotuning
Some applications launch several kernels implementing the same functionality to determine which one
is the fastest. While it is generally advisable to run at least one warmup iteration, it becomes especially important with Lazy Loading: otherwise, the time taken to load the kernel will skew your results.
Possible solutions:
▶ do at least one warmup iteration prior to measurement, as sketched after this list
▶ preload the benchmarked kernel prior to launching it
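A minimal sketch of the warmup-then-measure pattern using CUDA events (the candidate kernel and problem size are hypothetical) might look like this:

float time_candidate(int* data, int n) {
    // Warmup launch: triggers lazy loading of the kernel so that load time
    // does not end up in the measurement below.
    candidate<<<(n + 255) / 256, 256>>>(data, n);   // hypothetical kernel
    cudaDeviceSynchronize();

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    candidate<<<(n + 255) / 256, 256>>>(data, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}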
24.1. Notice
This document is provided for information purposes only and shall not be regarded as a warranty of a
certain functionality, condition, or quality of a product. NVIDIA Corporation (“NVIDIA”) makes no repre-
sentations or warranties, expressed or implied, as to the accuracy or completeness of the information
contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall
have no liability for the consequences or use of such information or for any infringement of patents
or other rights of third parties that may result from its use. This document is not a commitment to
develop, release, or deliver any Material (defined below), code, or functionality.
NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any
other changes to this document, at any time without notice.
Customer should obtain the latest relevant information before placing orders and should verify that
such information is current and complete.
NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the
time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by
authorized representatives of NVIDIA and customer (“Terms of Sale”). NVIDIA hereby expressly objects
to applying any customer general terms and conditions with regards to the purchase of the NVIDIA
product referenced in this document. No contractual obligations are formed either directly or indirectly
by this document.
NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military,
aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA
product can reasonably be expected to result in personal injury, death, or property or environmental
damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or
applications and therefore such inclusion and/or use is at customer’s own risk.
NVIDIA makes no representation or warranty that products based on this document will be suitable for
any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA.
It is customer’s sole responsibility to evaluate and determine the applicability of any information con-
tained in this document, ensure the product is suitable and fit for the application planned by customer,
and perform the necessary testing for the application in order to avoid a default of the application or
the product. Weaknesses in customer’s product designs may affect the quality and reliability of the
NVIDIA product and may result in additional or different conditions and/or requirements beyond those
contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or prob-
lem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is
contrary to this document or (ii) customer product designs.
No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other
NVIDIA intellectual property right under this document. Information published by NVIDIA regarding
third-party products or services does not constitute a license from NVIDIA to use such products or
services or a warranty or endorsement thereof. Use of such information may require a license from a
third party under the patents or other intellectual property rights of the third party, or a license from
NVIDIA under the patents or other intellectual property rights of NVIDIA.
Reproduction of information in this document is permissible only if approved in advance by NVIDIA
in writing, reproduced without alteration and in full compliance with all applicable export laws and
regulations, and accompanied by all associated conditions, limitations, and notices.
THIS DOCUMENT AND ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS,
DIAGNOSTICS, LISTS, AND OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE
BEING PROVIDED “AS IS.” NVIDIA MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR
OTHERWISE WITH RESPECT TO THE MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WAR-
RANTIES OF NONINFRINGEMENT, MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.
TO THE EXTENT NOT PROHIBITED BY LAW, IN NO EVENT WILL NVIDIA BE LIABLE FOR ANY DAMAGES,
INCLUDING WITHOUT LIMITATION ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, PUNITIVE, OR CON-
SEQUENTIAL DAMAGES, HOWEVER CAUSED AND REGARDLESS OF THE THEORY OF LIABILITY, ARIS-
ING OUT OF ANY USE OF THIS DOCUMENT, EVEN IF NVIDIA HAS BEEN ADVISED OF THE POSSIBILITY
OF SUCH DAMAGES. Notwithstanding any damages that customer might incur for any reason whatso-
ever, NVIDIA’s aggregate and cumulative liability towards customer for the products described herein
shall be limited in accordance with the Terms of Sale for the product.
24.2. OpenCL
OpenCL is a trademark of Apple Inc. used under license to the Khronos Group Inc.
24.3. Trademarks
NVIDIA and the NVIDIA logo are trademarks or registered trademarks of NVIDIA Corporation in the
U.S. and other countries. Other company and product names may be trademarks of the respective
companies with which they are associated.
Copyright
©2007-2024, NVIDIA Corporation & affiliates. All rights reserved