Technical Brief: NVIDIA GeForce 8800 GPU Architecture Overview
TB-02787-001_v1.0 | November 8, 2006
Table of Contents

Preface
GeForce 8800 Architecture Overview
    Unified, Massively Parallel Shader Design
    DirectX 10 Native Design
    Lumenex Engine: Industry-Leading Image Quality
    SLI Technology
    Quantum Effects GPU-Based Physics
    PureVideo and PureVideo HD
    Extreme High Definition Gaming (XHD)
    Built for Microsoft Windows Vista
    CUDA: Compute Unified Device Architecture
    The Four Pillars
The Classic GPU Pipeline… A Retrospective
GeForce 8800 Architecture in Detail
    Unified Pipeline and Shader Design
        Unified Shaders In-Depth
    Stream Processing Architecture
        Scalar Processor Design Improves GPU Efficiency
    Lumenex Engine: High-Quality Antialiasing, HDR, and Anisotropic Filtering
    Decoupled Shader/Math, Branching, and Early-Z
        Decoupled Shader Math and Texture Operations
        Branching Efficiency Improvements
        Early-Z Comparison Checking
GeForce 8800 GTX GPU Design and Performance
    Host Interface and Stream Processors
        Raw Processing and Texture Filtering Power
    ROP and Memory Subsystems
    Balanced Architecture
Preface
Welcome to our technical brief describing the NVIDIA® GeForce® 8800 GPU
architecture.
We have structured the material so that the initial few pages discuss key GeForce
8800 architectural features, present important DirectX 10 capabilities, and describe
how GeForce 8 Series GPUs and DirectX 10 work together. If you read no further,
you will have a basic understanding of how GeForce 8800 GPUs enable
dramatically enhanced 3D game features, performance, and visual realism.
In the next section we go much deeper, beginning with the operation of the
classic GPU pipeline and then showing how the GeForce 8800 architecture
radically changes the way GPU pipelines operate. We describe important new
design features of the GeForce 8800 architecture as they apply to both the
GeForce 8800 GTX and GeForce 8800 GTS GPUs. Throughout the document, all
specific GPU design and performance characteristics refer to the GeForce 8800 GTX.
Next we’ll look a little closer at the new DirectX 10 pipeline, including a
presentation of key DirectX 10 features and Shader Model 4.0. Refer to the
NVIDIA technical brief titled Microsoft DirectX 10: The Next-Generation Graphics API
(TP-02820-001) for a detailed discussion of DirectX 10 features.
We hope you find this material informative.
GeForce 8800 Architecture Overview
Don’t worry, we’ll describe all the gory details of Figure 1 very shortly! Compared
to the GeForce 7900 GTX, a single GeForce 8800 GTX GPU delivers 2× the
performance on current applications, with up to 11× scaling measured in certain
shader operations. As future games become more shader intensive, we expect the
GeForce 8800 GTX’s performance advantage over DirectX 9–compatible GPU
architectures to grow even larger.
In general, shader-intensive and high-dynamic-range (HDR)–intensive applications
shine on GeForce 8800 architecture GPUs. Teraflops of raw floating-point
processing power deliver unmatched gaming performance, graphics realism, and
real-time, film-quality effects.
The groundbreaking NVIDIA® GigaThread™ technology implemented in GeForce
8 Series GPUs supports thousands of independent, simultaneously executing
threads, maximizing GPU utilization.
The GeForce 8800 GPU’s unified shader architecture is built for extreme 3D
graphics performance, industry-leading image quality, and full compatibility with
DirectX 10. Not only do GeForce 8800 GPUs provide amazing DirectX 10 gaming
experiences, but they also deliver the fastest and best quality DirectX 9 and
OpenGL gaming experience today. (Note that Microsoft Windows Vista is required
to utilize DirectX 10).
We’ll briefly discuss DirectX 10 features supported by all GeForce 8800 GPUs, and
then take a look at important new image quality enhancements built into every
GeForce 8800 GPU. After describing other essential GeForce 8800 Series
capabilities, we’ll take a deep dive into the GeForce 8800 GPU architecture,
followed by a closer look at the DirectX 10 pipeline and its features.
An entirely new 10-bit display architecture works in concert with 10-bit DACs to
deliver over a billion colors (compared to 16.7 million in the prior generation),
permitting incredibly rich and vibrant photos and videos. With the next generation
of 10-bit content and displays, the Lumenex engine will be able to display images of
amazing depth and richness.
For more details on GeForce 8800 GPU image quality improvements, refer to
Lumenex Engine: The New Standard in GPU Image Quality (TB-02824-001).
SLI Technology
NVIDIA’s SLI technology is the industry’s leading multi-GPU technology. It
delivers up to 2× the performance of a single GPU configuration for unequaled
gaming experiences by allowing two graphics cards to run in parallel on a single
motherboard. The must-have feature for performance PCI Express graphics, SLI
dramatically scales performance on today’s hottest games. Running two GeForce
8800 GTX boards in an SLI configuration allows extremely high image-quality
settings at extreme resolutions.
CUDA: Compute Unified Device Architecture
CUDA enables new applications with a standard platform for extracting valuable
information from vast quantities of raw data, and provides the following key
benefits in this area:

- Enables high-density computing to be deployed on standard enterprise
  workstations and in server environments for data-intensive applications.
- Divides complex computing tasks into smaller elements that are processed
  simultaneously in the GPU to enable real-time decision making.
- Provides a standard platform based on industry-leading NVIDIA hardware and
  software for a wide range of high-data-bandwidth, computationally intensive
  applications.
- Combines with multicore CPU systems to provide a flexible computing platform.
- Controls complex programs and coordinates inherently parallel computation on
  the GPU, processed by thousands of computing threads.
The CUDA SDK unlocks the power of the GPU using the industry-standard C language:

- An industry-standard C compiler simplifies software development for complex
  computational problems.
- A complete development solution includes the C compiler, standard math
  libraries, and a dedicated driver for thread computing on either Linux or Windows.
- Hardware debugging is fully supported, and a profiler is provided for program
  optimization.
- NVIDIA “assembly for computing” (NVasc) provides lower-level access to the
  GPU for computer-language development and research applications.
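
To make the thread-computing model concrete, below is a minimal sketch of the
kind of data-parallel kernel the CUDA C compiler accepts. The kernel, its name,
and the launch sizes are illustrative assumptions, not taken from this brief:

    #include <cuda_runtime.h>

    // Each of thousands of threads scales one array element independently.
    __global__ void scale(float *data, float k, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)
            data[i] *= k;                               // one element per thread
    }

    int main(void)
    {
        const int n = 1 << 20;
        float *d_data;                                  // device (GPU) buffer
        cudaMalloc((void **)&d_data, n * sizeof(float));
        // ... copy input into d_data with cudaMemcpy ...
        scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);  // launch ~1M threads
        cudaDeviceSynchronize();                        // wait for the GPU
        cudaFree(d_data);
        return 0;
    }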
The Classic GPU Pipeline… A Retrospective
After the GPU receives vertex data from the host CPU, the vertex stage is the first
major stage. Back in the DirectX 7 timeframe, fixed-function transform and lighting
hardware operated at this stage (such as with NVIDIA’s GeForce 256 in 1999), and
then programmable vertex shaders came along with DirectX 8. This was followed
by programmable pixel shaders in DirectX 9 Shader Model 2, and dynamic flow
control in DirectX 9 Shader Model 3. DirectX 10 expands programmability features
much further, and shifts more graphics processing to the GPU, significantly
reducing CPU overhead.
The next step in the classic pipeline is the setup stage, where vertices are
assembled into primitives such as triangles, lines, or points. The primitives are
then converted by the rasterization stage into pixel fragments (or just
“fragments”), which are not yet considered full pixels at this stage. Fragments
undergo many further operations such as shading, Z-testing, possible frame buffer
blending, and antialiasing. Fragments are finally considered pixels when they have
been written into the frame buffer.
A common point of confusion: the “pixel shader” stage technically should be called
the “fragment shader” stage, but we’ll stick with “pixel shader” as the more
generally accepted term. In the past, fragments may only have been flat-shaded or
had simple texture color values applied. Today, a GPU’s programmable pixel shading
capability permits numerous shading effects to be applied while working in concert
with complex multitexturing methods.
Shaded fragments (with color and Z values) from the pixel stage are then sent to
the ROP (Raster Operations unit, in NVIDIA parlance). The ROP stage corresponds to
the “Output Merger” stage of the DirectX 10 pipeline: Z-buffer checking ensures
that only visible fragments are processed further, and visible fragments, if
partially transparent, are blended with existing frame buffer pixels and
antialiased. The final processed pixel is sent to the frame buffer memory for
scanout and display to the monitor.
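
The flow just described can be modeled in a few lines of C. Everything below is a
hypothetical stand-in for fixed hardware stages (vertex, setup, and rasterization
are elided), not NVIDIA code:

    #include <stdio.h>

    #define W 4                               /* toy 4x1 "frame buffer"  */
    typedef struct { int x; float z, color; } Fragment;

    static float zbuf[W] = {1, 1, 1, 1};      /* depth, cleared to "far" */
    static float fb[W];                       /* color                   */

    /* ROP stage: a fragment becomes a pixel only after the Z test
       (and, for transparency, blending) writes it to the frame buffer. */
    static void rop(Fragment f)
    {
        if (f.z < zbuf[f.x]) { zbuf[f.x] = f.z; fb[f.x] = f.color; }
    }

    int main(void)
    {
        /* Assume earlier stages produced these shaded fragments
           (two of them overlap at x = 2).                         */
        Fragment frags[3] = {{0, 0.5f, 0.8f}, {2, 0.7f, 0.3f}, {2, 0.4f, 0.9f}};
        for (int i = 0; i < 3; i++)
            rop(frags[i]);                    /* nearer x=2 fragment wins */
        printf("pixel 2 color: %.1f\n", fb[2]);   /* prints 0.9 */
        return 0;
    }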
The classic GPU pipeline has included essentially the same fundamental stages for
the past 20 years, albeit with significant evolution over time. Many processing
constraints and limitations exist in classic pipeline architectures, as do
variations in DirectX implementations across GPUs from different vendors.
A few notable problems of pre-DirectX 10 classic pipelines include the following:

- Limited reuse of data generated within the pipeline as input to a subsequent
  processing step.
- High state-change overhead.
- Excessive variation in hardware capabilities (requiring different application
  code paths for different hardware).
- Instruction set and data type limitations (such as a lack of integer
  instructions and weakly defined floating-point precision).
- Inability to write results to memory in mid-pipeline and read them back into
  the top of the pipeline.
- Resource limitations (registers, textures, instructions per shader, render
  targets, and so on).
Let’s proceed and see how the GeForce 8800 GPU architecture totally changes the
way data is processed in a GPU with its unified pipeline and shader architecture.
GeForce 8800 Architecture in Detail
When NVIDIA’s engineers started designing the GeForce 8800 GPU architecture
in the summer of 2002, they set forth a number of important design goals. The top
four goals were quite obvious:

- Significantly increase performance over current-generation GPUs.
- Notably improve image quality.
- Deliver powerful GPU physics and high-end floating-point computation ability.
- Provide new enhancements to the GPU pipeline (such as geometry shading and
  stream output), while working collaboratively with Microsoft to define features
  for the next major version of DirectX (DirectX 10 and Windows Vista).
In fact, many key GeForce 8800 architecture and implementation goals were
specified in order to make GeForce 8800–class GPUs most efficient for DirectX 10
applications, while also providing the highest performance for existing applications
using DirectX 9, OpenGL, and earlier DirectX versions.
The new GPU architecture would need to perform well on a variety of applications
using different mixes of pixel, vertex, and geometry shading in addition to large
amounts of high quality texturing.
The result was the GeForce 8800 GPU architecture that initially included two
specific GPUs—the high-end GeForce 8800 GTX and the slightly downscaled
GeForce 8800 GTS.
Figure 12 again presents the overall block diagram of the GeForce 8800 GTX for
readers who would like to see the big picture up front.
But fear not, we’ll start by describing the key elements of the GeForce 8800
architecture followed by looking at the GeForce 8800 GTX in more detail, where
we will again display this “most excellent” diagram and discuss some of its specific
features.
Figure 13 contrasts the two designs in generalized form. On the left, the classic
pipeline uses discrete shader types, represented in different colors, with data
flowing sequentially down the pipeline through each shader type in turn. The
illustration on the right depicts a unified shader core with one or more
standardized, unified shader processors.

Data coming in at the top left of the unified design (such as vertices) is
dispatched to the shader core for processing, and results are sent back to the top
of the shader core, where they are dispatched again, processed again, looped to
the top, and so on until all shader operations are performed and the pixel
fragment is passed on to the ROP subsystem.
In general, numerous challenges had to be overcome with such a radical new design
over the four-year GeForce 8800 GPU development timeframe.
Looking more closely at graphics programming, we can safely say that pixels
generally outnumber vertices by a wide margin, which is why prior fixed-shader GPU
architectures had many more pixel shader units than vertex shader units. But
different applications have different shader processing requirements at any given
point in time: some scenes are pixel-shader intensive and others are vertex-shader
intensive. Figure 14 shows the variation in vertex and pixel processing over time
in a particular application.

A GPU with a fixed number of each specific type of shader unit places restrictions
on operating efficiency, attainable performance, and application design. Figure 15
shows a theoretical GPU with 4 vertex shader units and 8 pixel shader units, for a
total of 12 shader units.
In Figure 15, the top scenario shows a scene that is vertex shader-intensive:
performance can be no better than the number of vertex shader units allows, in
this case “4.” In the bottom scenario, the scene is pixel shader-intensive,
perhaps due to complex lighting effects on the water. Here the GPU is pixel-shader
limited and can attain a maximum performance of only “8,” the number of pixel
shader units, which is the bottleneck in this case. Neither situation is optimal:
hardware sits idle and performance is left on the table, so to speak. It is also
inefficient from a power (performance per watt) and die size and cost (performance
per square millimeter) perspective.

In Figure 16, with a unified shader architecture, when an application is
vertex-shader intensive the majority of the unified shader processors are applied
to processing vertex data, and overall performance increases to “11.” Similarly,
if the application is pixel-shader heavy, the majority of the unified shader units
can be applied to pixel processing, also attaining a score of “11” in our example.
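
The arithmetic behind these scores fits in a toy model. The unit counts are the
hypothetical 4-vertex/8-pixel example above, and the notion that a unified design
reserves exactly one unit for the minor workload is our simplifying assumption:

    #include <stdio.h>

    /* Fixed design: the dominant workload can use only its own pool. */
    static int fixed_units_busy(int vtx_units, int pix_units, int vertex_heavy)
    {
        return vertex_heavy ? vtx_units : pix_units;
    }

    /* Unified design: nearly all units chase the dominant workload. */
    static int unified_units_busy(int total_units, int reserved_for_minor)
    {
        return total_units - reserved_for_minor;
    }

    int main(void)
    {
        printf("fixed, vertex-heavy scene: %d of 12 units busy\n",
               fixed_units_busy(4, 8, 1));        /* -> 4  */
        printf("fixed, pixel-heavy scene:  %d of 12 units busy\n",
               fixed_units_busy(4, 8, 0));        /* -> 8  */
        printf("unified, either scene:     %d of 12 units busy\n",
               unified_units_busy(12, 1));        /* -> 11 */
        return 0;
    }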
Unified stream processors (SPs) in GeForce 8800 GPUs can process vertices, pixels,
geometry, or physics; they are effectively general-purpose floating-point
processors. Different workloads can be mapped to the processors, as shown in Figure 17.
Note that geometry shading is a new feature of the DirectX 10 specification that we
cover in detail later in the DirectX 10 section. The GeForce 8800 unified stream
processors can process geometry shader programs, permitting a powerful new range
of effects and features, while reducing dependence on the CPU for geometry
processing.
The GPU dispatch and control logic can dynamically assign vertex, geometry,
physics, or pixel operations to available SPs without worrying about fixed numbers
of specific types of shader units. In fact, this feature is just as important to
developers, who need not worry as much that certain aspects of their code might be
too pixel-shader intensive or too vertex-shader intensive.
Not only does a unified shader design assist in load-balancing shader workloads, but
it actually helps redefine how a graphics pipeline is organized. In the future, it is
possible that other types of workloads can be run on a unified stream processor.
Stream Processing Architecture
Figure 18 shows a collection of GeForce 8800 stream processors (SPs) with their
associated texture filtering (TF), texture addressing (TA), and cache units,
arranged to ensure a balanced design. The ratios of unit types shown in this
subset slice of a typical GeForce 8800 GPU are maintained when scaling up to the
128 SPs of a GeForce 8800 GTX.
Each GeForce 8800 GPU stream processor is fully generalized, fully decoupled,
scalar (see “Scalar Processor Design Improves GPU Efficiency”), can dual-issue a
MAD and a MUL, and supports IEEE 754 floating-point precision.
The stream processors are a critical component of NVIDIA GigaThread
technology, where thousands of threads can be in flight within a GeForce 8800
GPU at any given instant. GigaThread technology keeps SPs fully utilized by
scheduling and dispatching various types of threads (such as pixel, vertex, geometry,
and physics) for execution.
All stream processors are driven by a high-speed clock domain that is separate
from the core clock driving the rest of the chip. For example, the GeForce 8800
GTX core clock is 575 MHz while its stream processors run at 1.35 GHz, delivering
exceptionally high shader performance.
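
As a back-of-the-envelope check on what those clocks imply, counting a MAD as two
floating-point operations and the co-issued MUL as one (our accounting, not a
figure quoted in this brief) gives the peak programmable shader rate:

    #include <stdio.h>

    int main(void)
    {
        const double sps             = 128;      /* stream processors (8800 GTX) */
        const double shader_clock_hz = 1.35e9;   /* separate shader clock domain */
        const double flops_per_clock = 3;        /* MAD (2) + co-issued MUL (1)  */

        /* 128 * 1.35 GHz * 3 flops = ~518 GFLOPS of peak shader math */
        printf("peak shader GFLOPS: %.1f\n",
               sps * shader_clock_hz * flops_per_clock / 1e9);
        return 0;
    }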
Lumenex Engine: High-Quality Antialiasing, HDR, and Anisotropic Filtering
For games with AA capability built in, the “Enhance the Application Setting”
option is an easy way to improve overall image quality.
In many games, the new 16× high-quality mode yields frames-per-second performance
similar to standard 4× multisampled mode, but with much improved image quality. In
certain cases, such as at the edges of stencil shadow volumes, the new antialiasing
modes cannot be enabled, and those portions of the scene fall back to 4×
multisampled mode.

Figure 19 shows a close-up comparison of the new 16× CSAA mode against standard 4×
multisampled AA.
Figure 19. Coverage sampling antialiasing (4× MSAA vs. 16× CSAA)
GeForce 8800 GPUs support both FP16 and FP32 component formats for HDR rendering,
which can be used simultaneously with multisampled antialiasing, delivering
incredibly rich images and scenery.
Anisotropic Filtering (AF) improves the clarity and sharpness of various scene
objects that are viewed at sharp angles and/or recede into the distance (Figure 20).
One example is a roadway billboard with text that looks skewed and blurred when
viewed at a sharp angle (with respect to the camera) when standard bilinear and
trilinear isotropic texture filtering methods are applied. Anisotropic filtering
(combined with trilinear mipmapping) allows the skewed text to look much sharper.
Similarly, a cobblestone roadway that fades into the distance can be sharpened with
anisotropic filtering.
Refer to Lumenex Engine: The New Standard in GPU Image Quality (TB-02824-001) for
more details.
Following is a discussion of three important architectural enhancements that permit
better overall GPU performance.
Decoupled Shader/Math, Branching, and Early-Z
Decoupled Shader Math and Texture Operations
Texture addressing, fetching, and filtering can take many GPU core clock cycles. If
an architecture requires a texture to be fetched and filtered before performing the
next math operation in a particular shader, the considerable texture fetch and
filtering (such as 16× anisotropic filtering) latencies can really slow down a GPU.
GeForce 8800 GPUs can do a great job tolerating and essentially “hiding” texture
fetch latency by performing a number of useful independent math operations
concurrently.
For comparison, in a GeForce 7 Series GPU, texture address calculation was
interleaved with shader floating-point math operations in Shader Unit 1 of a pixel
pipeline. Although this design was chosen to optimize die size, power, and
performance, it could cause shader math bottlenecks when textures were fetched,
preventing use of a shader processor until the texture was retrieved.
GeForce 8800 GPUs attack the shader math and texture processing efficiency problem
by decoupling shader and texture operations, so that texture operations can be
performed independently of shader math operations.
Figure 22 illustrates math operations (not dependent on the specific texture data
being fetched) executing while one or more textures are being fetched from frame
buffer memory or, in the worst case, from system memory, improving shader
processor utilization. In addition, while one thread waits on a texture fetch, the
GeForce 8800 GPU’s GigaThread technology can swap in other threads to execute,
ensuring that shader processors are never idle when other work needs to be done.
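
A minimal CUDA-style kernel sketches the principle, using an ordinary memory read
to stand in for a texture fetch; the kernel and its math are illustrative
assumptions:

    // The math on `a` does not depend on `t`, so it can be issued while the
    // long-latency fetch of `t` is still outstanding; the hardware only has
    // to stall (or, GigaThread-style, swap in another thread) at the first
    // real use of `t`.
    __global__ void shade(const float *tex, const float *in, float *out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        float t = tex[i];          // long-latency "texture" fetch begins
        float a = in[i] * 1.5f;    // independent math overlaps the fetch
        a = a * a + 0.25f;         // still no dependence on t
        out[i] = a * t;            // first use of t: the wait ends here
    }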
Early-Z Comparison Checking
A few methods use Z-buffer information to help cull or prevent pixels from being
rendered if they are occluded. Z-cull is a method to remove pixels from the pipeline
during the rasterization stage, and can examine and remove groups of occluded
pixels very swiftly.
A GeForce 8800 GTX GPU can cull pixels at four times the speed of the GeForce 7900
GTX, but neither GPU catches all occlusion situations at the individual pixel
level.
Z comparisons for individual pixel data have generally occurred late in the
graphics pipeline, in the ROP (raster operations) unit. The problem with
evaluating individual pixels in the ROP is that pixels must traverse nearly the
entire pipeline only to discover that some are occluded and will be discarded.
With complex shader programs that have hundreds or thousands of processing steps,
all that processing is wasted on pixels that will never be displayed!
What if an Early-Z technique could be employed to test Z values of pixels before
they entered the pixel shading pipeline? Much useless work could be avoided,
improving performance and conserving power.
GeForce 8800 Series GPUs implement an Early-Z technology, as depicted in Figure
25, to increase performance noticeably.
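
A hypothetical single-pixel sketch makes the savings visible; the fragment values
and the stand-in shader are ours, not NVIDIA’s implementation:

    #include <stdio.h>

    typedef struct { float z; float color; } Fragment;

    static int shader_runs = 0;
    static float shade(Fragment f) { shader_runs++; return f.color; } /* costly */

    int main(void)
    {
        Fragment frags[4] = {{0.9f, 1}, {0.5f, 2}, {0.7f, 3}, {0.2f, 4}};
        float zbuf = 1.0f;                    /* one pixel, cleared to "far" */

        for (int i = 0; i < 4; i++)
            if (frags[i].z < zbuf) {          /* Early-Z: test BEFORE shading */
                zbuf = frags[i].z;
                shade(frags[i]);              /* expensive work only if visible */
            }

        /* Late Z would have shaded all 4 fragments; Early-Z shaded 3. */
        printf("shader invocations: %d of 4\n", shader_runs);
        return 0;
    }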
Next, we’ll look at how the GeForce 8800 GPU architecture redefines the classic
GPU pipeline and implements DirectX 10–compatible features. Later in this
document, we describe key DirectX 10 features in more detail.
GeForce 8800 GTX GPU Design and Performance

We have already covered a lot of the basics, so now we can look at the specifics
of the GeForce 8800 GTX architecture without intimidation. The block diagram shown
in Figure 26 should now look less threatening if you’ve read the prior sections.
ROP and Memory Subsystems
The ROPs also support frame buffer blending of FP16 and FP32 render targets, and
either type of FP surface can be used in conjunction with multisampled
antialiasing for outstanding HDR rendering quality. Eight multiple render targets
(MRTs) can be utilized, as also supported by DirectX 10, and each MRT can define a
different color format. New, more efficient, high-performance compression
technology is implemented in the ROP subsystem to accelerate color and Z
processing.
As shown in Figure 26, six memory partitions exist on a GeForce 8800 GTX GPU, and
each partition provides a 64-bit interface to memory, yielding a 384-bit combined
interface width. The 768 MB memory subsystem implements a high-speed crossbar
design, similar to GeForce 7 Series GPUs, and supports DDR1, DDR2, DDR3, GDDR3,
and GDDR4 memory. The GeForce 8800 GTX uses GDDR3 memory clocked by default at
900 MHz. With a 384-bit (48-byte-wide) memory interface running at 900 MHz
(1800 MHz DDR data rate), frame buffer memory bandwidth is a very high 86.4 GB/s.
With 768 MB of frame buffer memory, far more complex models and textures can be
supported at high resolutions and image quality settings.
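
The bandwidth figure follows directly from those numbers, as this quick check
shows (the variable names are ours):

    #include <stdio.h>

    int main(void)
    {
        const double bus_bits  = 384;        /* six 64-bit partitions        */
        const double mem_clock = 900e6;      /* GDDR3 clock, Hz              */
        const double transfers = 2;          /* DDR: two transfers per clock */

        /* 48 bytes per transfer * 1.8e9 transfers/s = 86.4 GB/s */
        printf("frame buffer bandwidth: %.1f GB/s\n",
               (bus_bits / 8) * mem_clock * transfers / 1e9);
        return 0;
    }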
Balanced Architecture
NVIDIA engineers spent a great deal of time ensuring the GeForce 8800 GPU
Series was a balanced architecture. It wouldn’t make sense to have 128 streaming
processors or 64 pixels worth of texture filtering power if the memory subsystem
weren’t able to deliver enough data, or if the ROPs were a bottleneck processing
pixels, or if the clocking of different subsystems was mismatched. The GPUs must
also be built in a manner that makes them power efficient and die-size efficient
while delivering optimal performance, and the graphics board must integrate into
mainstream computing systems without extravagant power and cooling requirements.
Each of the unified streaming processors can handle different types of shader
programs to allow instantaneous balancing of processor resources based on
demand. Internal caches are designed for extremely high performance and hit rates,
and combined with the high-speed and large frame buffer memory subsystem, the
streaming processors are not starved for data.
During periods of texture fetch and filtering latency, GigaThread technology can
immediately dispatch useful work to a processor that, in past architectures, may have
needed to wait for the texture operation to complete. With more vertex and pixel
shader program complexity, many more cycles will be spent processing in the shader
complex, and the ROP subsystem capacity was built to be balanced with shader
processor output. And the 900 MHz memory subsystem ensures even the highest-
end resolutions with high-quality filtering can be processed effectively.
We have talked a lot about hardware and cannot forget that drivers play a large part
in balancing overall performance. NVIDIA ForceWare® drivers work hand-in-hand
with the GPU to ensure superior GPU utilization with minimal CPU impact.
Now that you have a good understanding of GeForce 8800 GPU architecture, let’s
look at DirectX 10 features in more detail. You will then be able to relate the
DirectX 10 pipeline improvements to the GeForce 8800 GPU architecture.
DirectX 10 Pipeline
While similar in many respects to Shader Model 3, Shader Model 4 adds a number of
new features:

- A new unified instruction set
- Many more registers and constants
- Integer computation
- Unlimited program length
- Fewer state changes (less CPU intervention)
- 8 multiple render target regions instead of 4
- More flexible vertex input via the input assembler
- The ability of all pipeline stages to access buffers, textures, and render
  targets with few restrictions
- The capability for data to be recirculated through pipeline stages (stream out)

Shader Model 4 also includes a very different render state model, where
application state is batched more efficiently and more work can be pushed to the
GPU with less CPU involvement. Table 1 compares DirectX 10 Shader Model 4 with
prior shader models.
Stream Output
Stream output is a very important and useful new DirectX 10 feature supported in
GeForce 8800 GPUs. Stream output enables data generated from geometry shaders
(or vertex shaders if geometry shaders are not used) to be sent to memory buffers
and subsequently forwarded back into the top of the GPU pipeline to be processed
again (Figure 28). Such dataflow permits more complex geometry processing,
advanced lighting calculations, and GPU-based physical simulations with little CPU
involvement.
Stream output is a more generalized version of the older “render to vertex buffer”
feature. (See “The Hair Challenge” below for an example of its usage.)
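
The dataflow can be pictured as a ping-pong between two memory buffers. The
following is a conceptual CPU-side sketch of recirculation under that assumption,
not the DirectX 10 API:

    #include <stdio.h>

    #define N 8
    static float buf_a[N], buf_b[N];

    /* Stand-in for a geometry/vertex shader pass whose results are
       streamed out to a memory buffer instead of going to the ROPs. */
    static void shader_pass(const float *in, float *out)
    {
        for (int i = 0; i < N; i++)
            out[i] = in[i] * 0.5f + 1.0f;
    }

    int main(void)
    {
        float *src = buf_a, *dst = buf_b;
        for (int pass = 0; pass < 3; pass++) {
            shader_pass(src, dst);                   /* results land in memory... */
            float *tmp = src; src = dst; dst = tmp;  /* ...and re-enter the top   */
        }
        printf("element 0 after 3 passes: %.3f\n", src[0]);  /* 1.750 */
        return 0;
    }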
Geometry Shaders
High polygon–count characters with realistic animation and facial expressions are
now possible with DirectX 10 geometry shading, as are natural shadow volumes,
physical simulations, faster character skinning, and a variety of other geometry
operations.
Geometry shaders can process entire primitives as inputs and generate entire
primitives as output, rather than processing just one vertex at a time as a vertex
shader does. Input primitives can comprise multiple vertices: point lists, line
lists or strips, triangle lists or strips, line lists or strips with adjacency
information, or triangle lists or strips with adjacency information. Output
primitives can be point lists, line strips, or triangle strips.
Limited forms of tessellation—breaking down primitives such as triangles into a
number of smaller triangles to permit smoother edges and more detailed objects—
are possible with geometry shaders. Examples could include tessellation of water
surfaces, point sprites, fins, and shells. Geometry shaders can also control objects
and create and destroy geometry (they can read a primitive in and generate more
primitives, or not emit any primitives as output).
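
A CPU-side analogy illustrates the whole-primitive-in, whole-primitives-out model,
here tessellating one triangle into four by midpoint subdivision; the types and
the subdivision scheme are illustrative, not D3D10 code:

    #include <stdio.h>

    typedef struct { float x, y, z; } V;
    typedef struct { V a, b, c; } Tri;

    static V mid(V p, V q)
    {
        V m = { (p.x + q.x) / 2, (p.y + q.y) / 2, (p.z + q.z) / 2 };
        return m;
    }

    /* One whole primitive in, zero or more whole primitives out. */
    static int geometry_shader(Tri in, Tri out[4])
    {
        V ab = mid(in.a, in.b), bc = mid(in.b, in.c), ca = mid(in.c, in.a);
        out[0].a = in.a; out[0].b = ab;   out[0].c = ca;
        out[1].a = ab;   out[1].b = in.b; out[1].c = bc;
        out[2].a = ca;   out[2].b = bc;   out[2].c = in.c;
        out[3].a = ab;   out[3].b = bc;   out[3].c = ca;   /* center triangle */
        return 4;            /* returning 0 here would "destroy" the geometry */
    }

    int main(void)
    {
        Tri t = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0} }, out[4];
        printf("emitted %d triangles\n", geometry_shader(t, out));
        return 0;
    }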
Geometry shaders also can extrude silhouette edges, expand points, assist with
render to cube maps, render multiple shadow maps, perform character skinning
operations, and enable complex physics and hair simulations. And, among other
things, they can generate single-pass environment maps, motion blur, and stencil
shadow polygons, plus enable fully GPU-based particle systems with random
variations in position, velocity, and particle lifespan.
Note that software-based rendering techniques that have existed for years can
provide many of these capabilities, but much more slowly; this is the first time
such geometry processing features are implemented in the hardware 3D pipeline.
A key advantage of hardware-based geometry shading is the ability to move certain
geometry processing functions from the CPU to the GPU for much better performance.
Characters can be animated without CPU intervention, and true displacement mapping
is possible, permitting vertices to be moved around to create undulating surfaces
and other cool effects.
Improved Instancing
DirectX 9 introduced the concept of object instancing, where a single API draw
call sends a single object to the GPU, followed by a small amount of “instance
data” that varies object attributes such as position and color. By applying the
varying attributes to the original object, tens or hundreds of variations of an
object can be created without CPU involvement (such as leaves on a tree or an army
of soldiers).
DirectX 10 adds much more powerful instancing by permitting index values of
texture arrays, render targets, and even indices for different shader programs to be
used as the instance data that can vary attributes of the original object to create
different-looking versions of the object. And it does all this with fewer state changes
and less CPU intervention.
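
A toy model of the idea follows; the mesh, the per-instance attributes, and all
names are our assumptions, not the DirectX API:

    #include <stdio.h>

    typedef struct { int vertex_count; } Mesh;          /* stand-in mesh     */
    typedef struct { float dx, dy, dz; } InstanceData;  /* per-instance data */

    /* One "draw call": the mesh is submitted once; only the small
       per-instance attributes vary between the rendered copies. */
    static void draw_instanced(Mesh mesh, const InstanceData *inst, int count)
    {
        for (int i = 0; i < count; i++)
            printf("instance %d: %d verts at offset (%.0f, %.0f, %.0f)\n",
                   i, mesh.vertex_count, inst[i].dx, inst[i].dy, inst[i].dz);
    }

    int main(void)
    {
        Mesh leaf = { 12 };
        InstanceData leaves[3] = { {0, 0, 0}, {1, 0, 2}, {2, 1, 0} };
        draw_instanced(leaf, leaves, 3);   /* hundreds would work the same way */
        return 0;
    }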
In general, GeForce 8800 Series GPUs work with the DX10 API to provide
extremely efficient instancing and batch processing of game objects and data to
allow for richer and more immersive game environments.
Vertex Texturing
Vertex texturing was possible in DirectX 9; in DirectX 10 it is a major feature of
the API and can be used with both vertex shaders and geometry shaders.
With vertex texturing, displacement maps or height fields are read from memory
and their “texels” are actually displacement (or height) values, rather than color
values. The displacements are used to modify vertex positions of objects, creating
new shapes, forms, and geometry-based animations.
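
As a sketch of the idea, the “texels” below are heights sampled per vertex and
added to position; the height field and its mapping to vertices are illustrative
assumptions:

    #include <stdio.h>

    #define W 4
    static const float height_map[W] = { 0.0f, 0.2f, 0.5f, 0.1f };  /* "texels" */

    typedef struct { float x, y, z; } Vertex;

    /* Vertex texturing: the fetched value displaces the vertex
       instead of coloring a pixel. */
    static void displace(Vertex *v, int n)
    {
        for (int i = 0; i < n; i++)
            v[i].y += height_map[i % W];       /* per-vertex height fetch */
    }

    int main(void)
    {
        Vertex strip[4] = { {0, 0, 0}, {1, 0, 0}, {2, 0, 0}, {3, 0, 0} };
        displace(strip, 4);
        for (int i = 0; i < 4; i++)
            printf("vertex %d: y = %.1f\n", i, strip[i].y);
        return 0;
    }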
The Hair Challenge

With DirectX 9, the physics simulation of the hair is performed on the CPU.
Interpolation and tessellation of the control points of the individual hairs in the
physics simulation is also performed by the CPU. Next, the hairs must be written to
memory and copied to the GPU, where they can finally be rendered. The reason we
don’t see very realistic hair with DirectX 9 games is that it’s simply too CPU-
intensive to create, and developers can’t afford to spend huge amounts of CPU
cycles just creating and animating hair at the expense of other more important game
play objects and functions.
With DirectX 10, the physics simulation of the hair is performed on the GPU, and
interpolation and tessellation of control points is performed by the geometry shader.
The output from the geometry shader is transferred to memory using stream output,
and read back into the pipeline to actually render the hair.
Expect to see far more realistic hair in DX10 games that take advantage of the
power of GeForce 8800 Series GPUs.
Conclusion
As you are now aware, the GeForce 8800 GPU architecture is a radical departure
from prior GPU designs. Its massively parallel unified shader design delivers
tremendous processing horsepower for high-end 3D gaming at extreme resolutions,
with all quality knobs set to the max. New antialiasing technology permits 16× AA
quality at the performance of 4× multisampling, and 128-bit HDR rendering is now
available and can be used in conjunction with antialiasing.
Full DirectX 10 compatibility with hardware implementations of geometry shaders,
stream out, improved instancing, and Shader Model 4 assure users they can run their
DirectX 10 titles with high performance and image quality. All DirectX 9, OpenGL,
and prior DirectX titles are fully compatible with the GeForce 8800 GPU unified
design and will attain the best performance possible.
PureVideo functionality built in to all GeForce 8800–class GPUs ensures flawless
SD and HD video playback with minimal CPU utilization. Efficient power
utilization and management delivers outstanding performance per watt and
performance per square millimeter.
Teraflops of floating-point processing power, SLI capability, support for thousands
of threads in flight, Early-Z, decoupled shader and math processing, high-quality
anisotropic filtering, significantly increased texture filtering horsepower and memory
bandwidth, fine levels of branching granularity, plus the 10-bit display pipeline and
PureVideo feature set—all these features contribute to making the GeForce 8800
GPU Series the best GPU architecture for 3D gaming and video playback
developed to date.
Notice
ALL NVIDIA DESIGN SPECIFICATIONS, REFERENCE BOARDS, FILES, DRAWINGS, DIAGNOSTICS, LISTS, AND
OTHER DOCUMENTS (TOGETHER AND SEPARATELY, “MATERIALS”) ARE BEING PROVIDED “AS IS.” NVIDIA
MAKES NO WARRANTIES, EXPRESSED, IMPLIED, STATUTORY, OR OTHERWISE WITH RESPECT TO THE
MATERIALS, AND EXPRESSLY DISCLAIMS ALL IMPLIED WARRANTIES OF NONINFRINGEMENT,
MERCHANTABILITY, AND FITNESS FOR A PARTICULAR PURPOSE.
Information furnished is believed to be accurate and reliable. However, NVIDIA Corporation assumes no
responsibility for the consequences of use of such information or for any infringement of patents or other
rights of third parties that may result from its use. No license is granted by implication or otherwise under any
patent or patent rights of NVIDIA Corporation. Specifications mentioned in this publication are subject to
change without notice. This publication supersedes and replaces all information previously supplied. NVIDIA
Corporation products are not authorized for use as critical components in life support devices or systems
without express written approval of NVIDIA Corporation.
Trademarks
NVIDIA, the NVIDIA logo, CUDA, ForceWare, GeForce, GigaThread, Lumenex, NVIDIA nForce, PureVideo, SLI,
and Quantum Effects are trademarks or registered trademarks of NVIDIA Corporation in the United States
and other countries. Other company and product names may be trademarks of the respective companies
with which they are associated.
Copyright
© 2006 NVIDIA Corporation. All rights reserved.
NVIDIA Corporation
2701 San Tomas Expressway
Santa Clara, CA 95050
www.nvidia.com