Lecture 1: An Introduction To CUDA
Mike Giles
[email protected]
Overview
hardware view
software view
CUDA programming
Hardware view
At the top level, a PCIe graphics card with a many-core
GPU and high-speed graphics “device” memory sits inside
a standard PC/server with one or two multicore CPUs:
[Figure: PC/server with DDR4 CPU memory and a PCIe GPU card with GDDR5 or HBM device memory]
HPC (Tesla):
P100 (PCIe): 3584 cores, 12GB HBM2 (£5k)
P100 (PCIe): 3584 cores, 16GB HBM2 (£6k)
P100 (NVLink): 3584 cores, 16GB HBM2 (£8k?)
Hardware view
building block is a “streaming multiprocessor” (SM):
128 cores (64 in P100) and 64k registers
96KB (64KB in P100) of shared memory
48KB (24KB in P100) L1 cache
8-16KB (?) cache for constants
up to 2K threads per SM
different chips have different numbers of these SMs:
product SMs bandwidth memory power
GTX 1060 10 192 GB/s 6 GB 120W
GTX 1070 16 256 GB/s 8 GB 150W
GTX 1080 20 320 GB/s 8 GB 180W
GTX Titan X 28 480 GB/s 12 GB 250W
P100 56 720 GB/s 16 GB HBM2 300W
Hardware View
[Figure: Pascal GPU – multiple SMs sharing an L2 cache; each SM has its own shared memory and L1 cache]
Hardware view
There were multiple products in the Kepler generation
Hardware view
building block is a “streaming multiprocessor” (SM):
192 cores and 64k registers
64KB of shared memory / L1 cache
8KB cache for constants
48KB texture cache for read-only arrays
up to 2K threads per SM
Hardware View
[Figure: Kepler GPU – multiple SMs sharing an L2 cache; each SM has its own L1 cache / shared memory]
Multithreading
The key hardware feature is that the cores in an SM are SIMT
(Single Instruction Multiple Threads) cores:
groups of 32 cores execute the same instructions
simultaneously, but with different data
similar to vector computing on CRAY supercomputers
32 threads all doing the same thing at the same time
natural for graphics processing and much scientific
computing
SIMT is also a natural choice for many-core chips to
simplify each core
Multithreading
Lots of active threads is the key to high performance:
no “context switching”; each thread has its own
registers, which limits the number of active threads
threads on each SM execute in groups of 32 called
“warps” – execution alternates between “active” warps,
with warps becoming temporarily “inactive” when
waiting for data
Multithreading
originally, each thread completed one operation before
the next started to avoid complexity of pipeline overlaps
[Figure: timeline showing each thread's operations 1–5 executing one after another]
Software view
At a lower level, within the GPU:
each instance of the execution kernel executes on an SM
if the number of instances exceeds the number of SMs,
then more than one will run at a time on each SM if
there are enough registers and shared memory, and the
others will wait in a queue and execute later
all threads within one instance can access local shared
memory but can’t see what the other instances are
doing (even if they are on the same SM)
there are no guarantees on the order in which the
instances execute
CUDA
CUDA (Compute Unified Device Architecture) is NVIDIA’s
program development environment:
based on C/C++ with some extensions
Fortran support provided by the PGI compiler (PGI is
owned by NVIDIA) and also in the IBM XL compiler
lots of example code and good documentation
– fairly short learning curve for those with experience of
OpenMP and MPI programming
large user community on NVIDIA forums
CUDA Components
When installing CUDA on a system, there are three components:
driver
low-level software that controls the graphics card
toolkit
nvcc CUDA compiler
Nsight IDE plugin for Eclipse or Visual Studio
profiling and debugging tools
several libraries
SDK
lots of demonstration examples
some error-checking utilities
not officially supported by NVIDIA
almost no documentation
CUDA programming
As already explained, a CUDA program has two pieces:
host code on the CPU which interfaces to the GPU
kernel code which runs on the GPU
We will only use the runtime API in this course, and that is
all I use in my own research.
CUDA programming
At the host code level, there are library routines for:
memory allocation on graphics card
data transfer to/from device memory
constants
ordinary data
error-checking
timing
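For example, timing of GPU work is often done with CUDA events; a minimal sketch (the kernel name and launch configuration here are placeholders):

cudaEvent_t start, stop;
float milli;
cudaEventCreate(&start);  cudaEventCreate(&stop);

cudaEventRecord(start);                     // mark start of GPU work
my_kernel<<<nblocks,nthreads>>>(d_x);       // placeholder kernel launch
cudaEventRecord(stop);                      // mark end of GPU work

cudaEventSynchronize(stop);                 // wait until "stop" has actually happened
cudaEventElapsedTime(&milli, start, stop);  // elapsed time in milliseconds
printf("kernel execution time (ms): %f\n", milli);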
CUDA programming
In its simplest form it looks like:
kernel_routine<<<gridDim, blockDim>>>(args);
gridDim = 4
blockDim = 64
blockIdx ranges from 0 to 3
threadIdx ranges from 0 to 63
For example, a particular thread might have blockIdx.x=1 and threadIdx.x=44.
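Inside the kernel these built-in variables can be combined into a unique global index; a minimal sketch assuming the 1D launch above:

int tid = threadIdx.x + blockIdx.x*blockDim.x;
// e.g. blockIdx.x=1, threadIdx.x=44, blockDim.x=64  gives  tid = 44 + 1*64 = 108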
CUDA programming
The kernel code looks fairly normal once you get used to
two things:
code is written from the point of view of a single thread
quite different to OpenMP multithreading
similar to MPI, where you use the MPI “rank” to
identify the MPI process
all local variables are private to that thread
need to think about where each variable lives (more on
this in the next lecture)
any operation involving data in the device memory
forces its transfer to/from registers in the GPU
often better to copy the value into a local register
variable
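A small illustration of that last point (the arrays x and y here are hypothetical):

float xl = x[tid];           // one read from device memory into a register
y[tid]  = xl*xl + 2.0f*xl;   // further arithmetic re-uses the register copy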
Host code
#include <stdio.h>

int main(int argc, char **argv) {
  float *h_x, *d_x;                       // h=host, d=device
  int nblocks=2, nthreads=8, nsize=2*8;

  h_x = (float *)malloc(nsize*sizeof(float));      // host allocation
  cudaMalloc((void **)&d_x, nsize*sizeof(float));  // device allocation

  my_first_kernel<<<nblocks,nthreads>>>(d_x);
  cudaMemcpy(h_x, d_x, nsize*sizeof(float), cudaMemcpyDeviceToHost);

  for (int n=0; n<nsize; n++) printf(" n, x = %d %f\n", n, h_x[n]);
  cudaFree(d_x); free(h_x);
}
Kernel code
#include <helper_cuda.h>
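A minimal sketch of what my_first_kernel might look like, consistent with the host code above (the body here is an assumption: each thread simply writes its own thread index into its element of the array):

__global__ void my_first_kernel(float *x)
{
  int tid = threadIdx.x + blockDim.x*blockIdx.x;   // unique global thread index
  x[tid] = (float) threadIdx.x;                    // each thread sets one element
}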
CUDA programming
Suppose we have 1000 blocks, and each one has 128
threads – how does it get executed?
[Figure: the blocks are distributed across the SMs for execution]
CUDA programming
In this simple case, we had a 1D grid of blocks, and a 1D
set of threads within each block.
CUDA programming
A similar approach is used for 3D threads and 2D / 3D grids;
can be very useful in 2D / 3D finite difference applications.
1D thread ID defined by
threadIdx.x +
threadIdx.y * blockDim.x +
threadIdx.z * blockDim.x * blockDim.y
and this is then broken up into warps of size 32.
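As a sketch of the 2D case (the kernel name, array and sizes are hypothetical), the launch uses dim3 variables for the grid and block dimensions:

__global__ void my_2d_kernel(float *u, int nx, int ny)
{
  int i = threadIdx.x + blockIdx.x*blockDim.x;     // global x index
  int j = threadIdx.y + blockIdx.y*blockDim.y;     // global y index
  if (i<nx && j<ny) u[i + j*nx] = 0.0f;            // guard against going past the edges
}

dim3 threads(16,16);                               // 256 threads per block
dim3 blocks((nx+15)/16, (ny+15)/16);               // enough blocks to cover nx x ny
my_2d_kernel<<<blocks,threads>>>(d_u, nx, ny);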
Practical 1
start from code shown above (but with comments)
learn how to compile / run code within Nsight IDE
(integrated into Visual Studio for Windows,
or Eclipse for Linux)
test error-checking and printing from kernel functions
modify code to add two vectors together (including
sending them over from the host to the device) – a sketch is given below
if time permits, look at CUDA SDK examples
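A sketch of what the vector-add modification might look like (the kernel and variable names here are placeholders, not the solution code):

__global__ void add_vectors(float *x, float *y, float *z)
{
  int tid = threadIdx.x + blockDim.x*blockIdx.x;
  z[tid] = x[tid] + y[tid];
}

// host side: copy inputs over, launch, copy result back
cudaMemcpy(d_x, h_x, nbytes, cudaMemcpyHostToDevice);
cudaMemcpy(d_y, h_y, nbytes, cudaMemcpyHostToDevice);
add_vectors<<<nblocks,nthreads>>>(d_x, d_y, d_z);
cudaMemcpy(h_z, d_z, nbytes, cudaMemcpyDeviceToHost);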
Practical 1
Things to note:
memory allocation
cudaMalloc((void **)&d_x, nbytes);
data copying
cudaMemcpy(h_x,d_x,nbytes,
cudaMemcpyDeviceToHost);
reminder: the h_ and d_ prefixes, used to distinguish between
arrays on the host and on the device, are not mandatory,
just helpful labelling
kernel routine is declared with the __global__ prefix, and is
written from point of view of a single thread
Practical 1
Second version of the code is very similar to first, but uses
an SDK header file for various safety checks – gives useful
feedback in the event of errors.
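For example, helper_cuda.h defines a checkCudaErrors macro (and a getLastCudaError check for kernel launches); a sketch of the typical usage, applied to the calls from the earlier code:

checkCudaErrors( cudaMalloc((void **)&d_x, nsize*sizeof(float)) );

my_first_kernel<<<nblocks,nthreads>>>(d_x);
getLastCudaError("my_first_kernel execution failed\n");   // reports kernel launch errors

checkCudaErrors( cudaMemcpy(h_x, d_x, nsize*sizeof(float),
                            cudaMemcpyDeviceToHost) );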
Practical 1
One thing to experiment with is the use of printf within
a CUDA kernel function:
essentially the same as standard printf; minor
difference in integer return code
each thread generates its own output; use conditional
code if you want output from only one thread
output goes into an output buffer which is transferred
to the host and printed later (possibly much later?)
buffer has limited size (1MB by default), so could lose
some output if there’s too much
need to use either cudaDeviceSynchronize(); or
cudaDeviceReset(); at the end of the main code to
make sure the buffer is flushed before termination
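A minimal sketch (the kernel here is hypothetical) showing output restricted to one thread, with the buffer flushed at the end:

__global__ void hello_kernel()
{
  if (threadIdx.x==0 && blockIdx.x==0)     // conditional: only one thread prints
    printf("Hello from block %d, thread %d\n", blockIdx.x, threadIdx.x);
}

hello_kernel<<<nblocks,nthreads>>>();
cudaDeviceSynchronize();                   // flushes the device printf buffer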
Practical 1
The practical also has a third version of the code which
uses “managed memory” based on Unified Memory.
In this version
there is only one array / pointer, not one for CPU and
another for GPU
the programmer is not responsible for moving the data
to/from the GPU
everything is handled automatically by the CUDA
run-time system
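A sketch of how the earlier host code might change with managed memory (same sizes assumed):

float *x;
cudaMallocManaged(&x, nsize*sizeof(float));   // one pointer, usable on both CPU and GPU

my_first_kernel<<<nblocks,nthreads>>>(x);
cudaDeviceSynchronize();                      // make sure the kernel has finished before the host reads x

for (int n=0; n<nsize; n++) printf(" n, x = %d %f\n", n, x[n]);
cudaFree(x);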
Practical 1
This leads to simpler code, but it's important to understand
what is happening, because otherwise it may hurt performance.
ARCUS-B cluster
[Figure: ARCUS-B cluster – the arcus-b login node sits on the external network and connects to multiple GPU nodes]
Nsight
Importing the practicals: select General – Existing Projects