06 - Raytracing
Christoph Garth
Scientific Visualization Lab
Motivation
So far, we have worked on establishing the rendering equation as the basis for
physically-based rendering:
L_o(x, ω_r) = L_e(x, ω_r) + ∫_Ω f(x, ω_r, ω_i) L_i(x, ω_i) cos θ_i dω_i
The central quantity radiance L describes the propagation of light in the scene.
We have also already discussed the vacuum assumption: radiance is only modified on
surfaces. Hence, L_i(x, ω_i) = L_o(x′, ω_i) if the two points x and x′ can “see” each other, i.e.
can be connected by a ray.
Ultimately, Li (x, ω) must be determined at sensor points. Our goal today is a simple
algorithm for image synthesis that is based on this principle.
Most image synthesis algorithms based on the rendering equation construct light
paths that connect light sources to sensors and represent transport of light.
(Consider the ~10⁴⁵ photons/s emitted by the sun – only ~10¹⁷ /cm²/s reach the Earth’s surface.)
Due to the aperture (iris), light arrives at each sensor / image location only from a
small set of directions.
In the real world, cameras (and also the eye) involve complex systems of lenses. We will
consider a much simplified setup.
[Figure: pinhole camera model – camera ray, optical axis, view volume, image distance d_image, image plane]
Camera rays (also primary rays) originate at the camera location and go through the
center of pixels in the image. Light is measured only if it arrives along such rays.
• The camera geometry is fixed by either the opening angle α or the image distance d_image.
The view volume is the volume of the scene that can be seen by the camera; in this case
a so-called frustum.
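The construction of camera rays can be sketched as follows. This is a minimal pinhole model, assuming a camera at the origin looking along −z, a horizontal opening angle α, and rays through pixel centers; all names and the mapping conventions are our own choices, not a fixed API:

```python
import math

def camera_ray(px, py, width, height, alpha):
    """Normalized direction of the primary ray through the center of
    pixel (px, py), for a pinhole camera at the origin looking along -z
    with horizontal opening angle alpha (illustrative sketch)."""
    # Image distance chosen so the image plane spans [-1, 1] horizontally:
    # tan(alpha / 2) = 1 / d_image.
    d_image = 1.0 / math.tan(alpha / 2.0)
    aspect = height / width
    # Map pixel centers to [-1, 1] x [-aspect, aspect] on the image plane.
    x = 2.0 * (px + 0.5) / width - 1.0
    y = (1.0 - 2.0 * (py + 0.5) / height) * aspect
    n = math.sqrt(x * x + y * y + d_image * d_image)
    return (x / n, y / n, -d_image / n)
```

Note how the code only needs one of the two parameters: given α, the image distance d_image follows (and vice versa).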
Better idea: reverse the direction of light paths (Helmholtz reciprocity!) and trace
photons backwards from the image plane.
The throughput T of the path denotes the fraction of light intensity emitted by
the light source that arrives at the sensor.
• The scene only contains point lights. This means that local lighting models
that we discussed in the previous chapter (e.g. Phong’s model) can be used
to determine surface reflection.
• Surface reflection can be classified into either purely (perfectly) specular
(reflection and refraction), or diffuse, which describes all other forms of
reflection (incl. local lighting models).
In this manner, Whitted was able to eschew the full complexity of BRDFs, which could
not realistically be simulated at the time.
Computer Graphics – Ray Tracing – Whitted Algorithm 6–9
Whitted’s Algorithm
Step 1: construct camera ray r for current pixel and set T = 1.0
[Figure sequence: steps of Whitted’s algorithm – the camera ray r hits diffuse and mirror surfaces; at mirror surfaces, specular rays r_s are followed recursively]
p_hit = o + t_hit · d
No hit:
If no object is hit, background intensity is returned.
Avoiding self-intersections:
Offset the origin of the next ray (reflected / refracted / shadow ray) by εn, where n is the
surface normal at the hit point, and ε > 0 is very small.
Path Throughput:
Typically not a single number but an RGB triple.
Practical details (cont’d):
Mixed surfaces:
Whitted’s algorithm is easily extended to surfaces that mix diffuse and (perfect) specular
characteristics. At each hit point, shadow rays as well as specular rays are followed.
• direct illumination by point lights → shadow rays
• perfect specular reflection → reflected ray
• perfect specular transmission → refracted ray
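The recursion just described can be sketched as follows. This is an illustrative sketch only: the scene and hit interfaces (intersect, occluded, shade, reflected_ray, and so on) are hypothetical names we introduce here, not a fixed API:

```python
def add(a, b):
    """Componentwise sum of two RGB triples."""
    return tuple(x + y for x, y in zip(a, b))

def scale(s, c):
    """RGB triple c scaled by the scalar s."""
    return tuple(s * x for x in c)

EPS = 1e-4       # origin offset against self-intersections
MAX_DEPTH = 5    # recursion limit

def trace(ray, scene, depth=0):
    """One step of Whitted-style recursion (sketch, hypothetical interfaces)."""
    if depth > MAX_DEPTH:
        return (0.0, 0.0, 0.0)
    hit = scene.intersect(ray)              # closest hit along the ray
    if hit is None:
        return scene.background             # no hit: background intensity
    color = (0.0, 0.0, 0.0)
    # Direct illumination: one shadow ray per point light.
    for light in scene.lights:
        if not scene.occluded(hit.offset_point(EPS), light):
            color = add(color, hit.shade(light))    # e.g. Phong model
    # Perfect specular reflection / refraction: recurse; the path
    # throughput scales the recursively gathered radiance.
    if hit.k_reflect > 0.0:
        color = add(color, scale(hit.k_reflect,
                                 trace(hit.reflected_ray(EPS), scene, depth + 1)))
    if hit.k_refract > 0.0:
        color = add(color, scale(hit.k_refract,
                                 trace(hit.refracted_ray(EPS), scene, depth + 1)))
    return color
```

Note that the throughput is tracked implicitly here: each recursive result is scaled by the specular coefficient before being added.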
There can be no solution (D < 0), one solution (D = 0), or two solutions (D > 0).
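For reference, the ray–sphere test via the discriminant D can be sketched like this, assuming a normalized ray direction (function and variable names are our own):

```python
import math

def intersect_sphere(o, d, c, r):
    """Smallest t >= 0 with |o + t*d - c| = r, or None if the ray misses.
    Assumes d is normalized, so the quadratic coefficient a = 1."""
    oc = tuple(oi - ci for oi, ci in zip(o, c))          # o - c
    b = 2.0 * sum(di * qi for di, qi in zip(d, oc))
    cc = sum(qi * qi for qi in oc) - r * r
    D = b * b - 4.0 * cc                                 # discriminant
    if D < 0.0:
        return None                                      # no intersection
    sqrt_D = math.sqrt(D)
    t0, t1 = (-b - sqrt_D) / 2.0, (-b + sqrt_D) / 2.0    # t0 <= t1
    if t1 < 0.0:
        return None                                      # sphere behind the ray
    return t0 if t0 >= 0.0 else t1                       # nearest hit in front
```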
t = (D − ⟨n, q⟩) / ⟨n, d⟩
There is one solution unless hn, di = 0; in this case the ray is parallel to the plane.
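The plane test from the formula above, as a small helper; the names and the parallel-tolerance are our choices:

```python
def intersect_plane(q, d, n, D):
    """t with <n, q + t*d> = D, for a plane given as <n, x> = D.
    Returns None if the ray is (numerically) parallel to the plane."""
    denom = sum(ni * di for ni, di in zip(n, d))         # <n, d>
    if abs(denom) < 1e-9:
        return None                                      # ray parallel to plane
    return (D - sum(ni * qi for ni, qi in zip(n, q))) / denom
```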
Problem?
Intersection tests are arithmetically inexpensive, but they must be performed for a very large number of primitives.
Slightly more formal:
The complexity of the ray tracing algorithm is O(mp), where m is the number of
objects and p is the number of pixels.
Typical contemporary production scenes contain on the order of 10⁸ primitives, so this
naïve approach is prohibitive for realistic scenes.
• Fewer rays
• adaptive sampling (do not shoot every ray)
• stochastic sampling (→ Chapter 7)
[Figure: bounding volumes trade off tightness of fit against intersection-test complexity]
• Construction:
• determine the maximal distance from the object center to all object points
• heuristic: compute bounding box, set bounding sphere with same center point
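The bounding-box heuristic from the slide can be sketched as (function name is ours):

```python
def bounding_sphere(points):
    """Bounding sphere via the bounding-box heuristic: center of the
    axis-aligned bounding box, radius = max distance to any point."""
    lo = [min(p[i] for p in points) for i in range(3)]
    hi = [max(p[i] for p in points) for i in range(3)]
    center = tuple((l + h) / 2.0 for l, h in zip(lo, hi))
    r2 = max(sum((pi - ci) ** 2 for pi, ci in zip(p, center)) for p in points)
    return center, r2 ** 0.5
```

This is only a heuristic: the resulting sphere is not minimal in general, but it is cheap to compute.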
Two possibilities: uniform and non-uniform subdivision.
Non-uniform subdivision allows for more flexibility and adjusts better to scene
geometry.
However, here, I would like to briefly discuss a technique that works without
explicit geometrical description, but still allows to model reasonably complex
scenes.
This technique will also be used in some of the homeworks. The core idea is to
represent the entire scene through distance functions.
Object points satisfy d(x) = 0, or approximately d(x) < ε for a small ε > 0.
While this works for arbitrary distance functions, signed distance functions (SDFs) also give d(x) < 0 if x is
inside the object.
Ray Marching:
This also works if the distance function underestimates the actual distance.
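A minimal sphere-tracing loop, assuming the whole scene is given as a single distance function; the parameter defaults are illustrative choices:

```python
def ray_march(o, d, sdf, t_max=100.0, eps=1e-4, max_steps=256):
    """Sphere tracing: step along the ray o + t*d by the distance sdf
    reports to the nearest surface, until we are within eps of a surface
    or exceed t_max (illustrative sketch)."""
    t = 0.0
    for _ in range(max_steps):
        x = tuple(oi + t * di for oi, di in zip(o, d))
        dist = sdf(x)
        if dist < eps:
            return t          # hit: close enough to a surface
        t += dist             # safe step: no surface is closer than dist
        if t > t_max:
            break
    return None               # no hit within t_max
```

Because each step never overshoots the nearest surface, the loop also remains correct if sdf merely underestimates the true distance; it just takes more steps.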
Multiple objects can be combined by taking the minimum over the individual distance
functions. Effects such as repetition, deformation, etc. can be achieved with surprising
ease (→ homework).
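A few SDF building blocks in this spirit (a sketch; the repetition operator uses the common modulo trick, and all names are ours):

```python
import math

def sd_sphere(x, c, r):
    """Signed distance from point x to a sphere with center c, radius r."""
    return math.sqrt(sum((xi - ci) ** 2 for xi, ci in zip(x, c))) - r

def sd_union(*dists):
    """Union of objects: minimum over the individual distances."""
    return min(dists)

def sd_repeat(x, period):
    """Infinite repetition: fold x into one cell of size `period`,
    centered at the origin; evaluate a primitive SDF on the result."""
    return tuple(((xi + period / 2.0) % period) - period / 2.0 for xi in x)
```

For example, `sd_sphere(sd_repeat(x, 4.0), (0.0, 0.0, 0.0), 1.0)` describes an infinite grid of unit spheres with spacing 4.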
from iquilezles.org
Paths can be expressed as strings over this alphabet (Heckbert, 1990), and regular
expressions can be used to describe families of paths.
“Full” GI algorithms need to simulate all combinations of interactions, i.e. all paths that
satisfy the regular expression:
L(D|S)*E
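This notation is literally a regular expression, so path strings can be classified with Python’s re module. Summarizing the paths found by Whitted-style tracing as LDS*E is a common characterization, used here as an assumption:

```python
import re

# Heckbert path notation: L = light source, E = eye/sensor,
# D = diffuse bounce, S = (perfect) specular bounce.
FULL_GI = re.compile(r"^L(D|S)*E$")   # all transport paths
WHITTED = re.compile(r"^LDS*E$")      # Whitted-style paths (assumed form)

def classify(path):
    """Which algorithm family covers this light path?"""
    if WHITTED.match(path):
        return "whitted"
    if FULL_GI.match(path):
        return "full GI only"
    return "invalid"
```

For example, the caustic path LSDE and the diffuse-interreflection path LDDE both match L(D|S)*E but not LDS*E, which is exactly the gap discussed below.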
• caustics (C: specular to diffuse): NOT possible in ray tracing
• diffuse interreflection (D: diffuse to diffuse; indirect illumination / color bleeding): NOT possible in ray tracing