Rendering
A Comprehensive User’s Guide
Rendering: A Comprehensive User’s Guide was written by Edna Kruger;
updated by Maggie Kathwaroon; and edited by Edna Kruger and John Woolfrey.
Layout by Luc Langevin.
Special thanks to Craig Hall and Pierre Tousignant for their assistance in
assuring the technical integrity of this guide.
© 1996–2000 Avid Technology, Inc. All rights reserved.
SOFTIMAGE and Avid are registered trademarks of Avid Technology, Inc. or
its subsidiaries or divisions. mental ray and mental images are registered
trademarks of mental images GmbH & Co. KG in the U.S.A. and/or other
countries. All other trademarks contained herein are the property of their
respective owners.
This document is protected under copyright law. The contents of this document
may not be copied or duplicated in any form, in whole or in part, without the
express written permission of Avid Technology, Inc. This document is supplied
as a guide for the SOFTIMAGE|3D product. Reasonable care has been taken in
preparing the information it contains. However, this document may contain
omissions, technical inaccuracies, or typographical errors. Avid Technology,
Inc. does not accept responsibility of any kind for customers’ losses due to the
use of this document. Product specifications are subject to change without
notice.
Printed in Canada.
Table of Contents
CHAPTER ONE
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Introduction to Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
The Camera . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
CHAPTER TWO
Rendering Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Previewing before Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Optimizing Rendering Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Rendering a Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Using the Standalone Renderer . . . . . . . . . . . . . . . . . . . . . . . . . 32
Rendering a Subregion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Showing and Hiding Object Edges . . . . . . . . . . . . . . . . . . . . . . . 39
Viewing Rendered Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
CHAPTER THREE
Advanced Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Raytracing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Antialiasing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Blurring a Moving Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Field Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Rendering Tag and Z Channels . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Rendering Faces of an Object . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
CHAPTER FOUR
SOFTIMAGE|3D Rendering 3
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .125
CHAPTER ONE
Introduction
Introduction to Rendering
An object in SOFTIMAGE|3D is defined by points in space that are
connected together to create a surface. An important part of the
process of creating a 3D object or scene in SOFTIMAGE|3D is the
material definition of the object’s surface, the lighting of the scene,
and rendering. The Matter module contains most of the tools
designed to accomplish all these tasks.
For the purpose of rendering, SOFTIMAGE|3D divides these
surfaces into triangles, a process known as tessellation. Rendering is an
integral part of finalizing an object’s material definition. When you
render an object, SOFTIMAGE|3D processes the object’s surface
triangles and normals relative to the light source and creates a visible
surface that is shaded according to the parameters set for material
and texture attributes.
Each triangle is a planar surface with its front face oriented in one
direction. The orientation of this surface is shown by a vector
(direction line) called a normal, located on the points (vertices) of
each triangle. The normal must point toward the camera for
that surface to be visible (you can, however, make all surfaces
visible to the camera – see Rendering Faces of an Object on page 64).
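The visibility test just described can be sketched as a dot product between the triangle's normal and the viewing direction. This is an illustrative sketch, not SOFTIMAGE|3D's actual implementation:

```python
def is_front_facing(normal, view_dir):
    """A triangle faces the camera when its normal points back against
    the viewing direction, i.e. the dot product is negative."""
    nx, ny, nz = normal
    vx, vy, vz = view_dir
    return nx * vx + ny * vy + nz * vz < 0.0

# With the camera looking down -z, a normal pointing at the camera (+z)
# makes the surface visible; a normal pointing away does not.
```

Renderers use a test of this form to skip triangles whose front face points away from the camera, unless all faces are forced visible as described on page 64.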
Lighting
If you render an object without defining a light source, SOFTIMAGE|3D
automatically creates a light. The default light is white and infinite (it’s
far away like the sun), and does not cast shadows. When you create your
own light source, the default light disappears.
When the surface has been defined, the rendering process calculates
all these attributes to create a final image. SOFTIMAGE|3D calculates
the relation between the orientation of surface normals and the light
source to determine the surface attributes of each triangle. For more
information about light, see the Light chapter in the Defining
Materials and Textures User’s Guide.
The Camera
The camera is the device that lets you view your scene – what the
camera sees is displayed in the Perspective window. Ultimately, the
camera’s view is what is rendered for your final image. By default,
this is whatever is shown in window B.
When rendering with the Depthcue, Hardware, Hidden-Line,
Wireframe, Ghost, Rotoscope (Wire), or Rotoscope (Shade)
rendering types, you can use any of the displayed windows.
Rendering with the other rendering types must be done in window B
with the Perspective view.
You can see the camera itself in the parallel projection windows by
choosing the Camera > Show command. The camera is represented
by a camera icon. It also has an interest to which the camera always
points. The interest’s icon is three intersecting lines, like a null object.
You can use more than one camera while working on your scene by
displaying more than one Perspective window. This lets you view
your scene in different ways and helps you determine exactly how to
render it. These additional cameras are non-rendering.
There are many parameters related to the camera, including its roll,
field of view, and depth of field, as well as the picture format, etc.
Many of these parameters can be animated.
For more information on moving the camera around in a scene, see
Using the Camera on page 46 in the Working with SOFTIMAGE|3D
User’s Guide.
Depth of Field
The Depth of Field Simulation options provide you with additional
control over camera settings, as well as support for lens shaders used
when rendering with mental ray. The options define the focus of
objects according to their distance from the camera, similar to the
way a real camera works. Depth of field refers to the minimum and
maximum distance from the camera that objects are in focus.
Objects closer than the minimum distance and farther than the
maximum distance become increasingly out of focus.
Note: The mental ray renderer is not part of the standard
SOFTIMAGE|3D GT package, but may be purchased
separately.
If you select the Automatic option in the Depth of Field area of the
dialog box, you can set standard camera parameters to modify depth
of field. These three parameters work together to achieve the
desired result: Focal Length, F/stop, and Distance.
• Focal length allows you to define the length of the camera lens. A
larger value increases the lens length and decreases depth of field.
For example, 50mm is the standard lens size of a 35mm camera.
• F/stop allows you to set the size of the aperture opening. A smaller
value results in a larger opening, but decreases the depth of field.
• Distance allows you to define the distance from the camera that
objects are at their sharpest focus. Objects located in front of and
beyond this point become out of focus.
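The interaction of these three parameters can be sketched with the standard thin-lens depth-of-field formulas. This is general photographic math, not SOFTIMAGE|3D's internal computation, and the circle-of-confusion constant is an assumed value:

```python
def focus_limits(focal_mm, f_stop, distance_mm, coc_mm=0.03):
    """Thin-lens depth-of-field limits (standard photographic formulas).
    Returns (near, far) distances in mm at which objects remain acceptably
    sharp; far is None when the focus distance reaches the hyperfocal
    distance (everything beyond `near` stays in focus)."""
    # Hyperfocal distance for this lens/aperture/circle of confusion.
    h = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    near = distance_mm * (h - focal_mm) / (h + distance_mm - 2 * focal_mm)
    if distance_mm >= h:
        return near, None
    far = distance_mm * (h - focal_mm) / (h - distance_mm)
    return near, far
```

Experimenting with this sketch reproduces the behaviour described above: a longer focal length or a smaller F/stop value narrows the range between the near and far focus limits.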
If you select the Custom option, you can define your own settings.
These are the parameters that you can set:
• Near Focus is the nearest distance from the camera where objects
are in focus. This is determined in system units.
• Far Focus is the farthest distance from the camera where objects are
in focus. This is determined in system units.
• Max COC is the Maximum Circle of Confusion (COC). It controls
the out-of-focus effect in terms of pixels. The higher the value, the
more out of focus the object is. The default is 20.
For example, if you preview with a resolution of 100 x 100 and the
Max COC is set to 5, you must compensate during the final render. If
the final render is 1000 x 1000, the Max COC must be increased to
50. The Max COC is proportional to the image resolution.
• Max occurs at is the distance from the camera where Max COC
occurs. This is determined in system units.
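Because Max COC is proportional to the image resolution, the compensation described above is a simple scaling. A minimal sketch:

```python
def scale_max_coc(preview_coc, preview_res, final_res):
    """Max COC is proportional to image resolution, so a value tuned on a
    low-resolution preview must be scaled up for the final render."""
    return preview_coc * final_res / preview_res

# The example above: a Max COC of 5 at 100 x 100 becomes 50 at 1000 x 1000.
```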
Example 1
You are trying to model a real camera where the film (or the CCD, if
it’s a video camera) is not perfectly centred behind the lens. This is
often the case when trying to match computer-generated images
with real-world images.
Taking the example of a 35mm camera with a 50mm lens, the film
(projection plane) size is 24 x 36mm (its aspect ratio is 1.5).
However, suppose the film in your camera was offset by 1mm
horizontally and -0.5mm vertically from the true centre of projection
Example 2
Asymmetrical viewing frustums are also necessary to do correct
stereo perspective projections. The typical “improvised” method of
rotating the camera a bit to the right for the left eye and a bit to the
left for the right eye provides an adequate approximation, but
producing correct stereo projection images requires the use of
asymmetrical frustums.
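The parallel-axis method behind correct stereo projection can be sketched as follows: both cameras stay parallel and only the frustum is sheared sideways. The function and parameter names here are illustrative, not SOFTIMAGE|3D's:

```python
def stereo_frusta(screen_width, convergence, eye_separation, near):
    """For a screen plane of the given width at the convergence distance,
    return the asymmetric (left_edge, right_edge) frustum bounds at the
    near plane for the left and right eyes. Parallel-axis method: the
    cameras are offset sideways but never rotated toed-in."""
    half = screen_width / 2.0
    scale = near / convergence                      # project screen onto near plane
    shift = (eye_separation / 2.0) * scale          # frustum shear per eye
    l, r = -half * scale, half * scale              # symmetric centre-camera bounds
    left_eye = (l + shift, r + shift)               # left eye: frustum shifts right
    right_eye = (l - shift, r - shift)              # right eye: frustum shifts left
    return left_eye, right_eye
```

Both frusta keep the same width; only their horizontal offsets differ, which is exactly what rotating the cameras toward each other fails to achieve.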
CHAPTER TWO
Rendering Basics
Previewing before Rendering
Previewing a Subregion
To preview a subregion, you can define an area in a scene according
to the parameter settings defined using the Preview > Setup
command. Objects in the defined area are rendered with material
attributes, textures, and lights. Hidden objects are not rendered.
The subregion cropping rectangle is displayed in window B, which is
the default Perspective window. If you have another window selected
(such as Schematic, Right, Front, etc.) when you choose this
command, SOFTIMAGE|3D switches it to the Perspective window.
To preview a subregion:
1. Choose Preview > Setup. In the Subregion area, select the Draw
Subregion on Previewing option.
2. Click the Draw Subregion button. A red box appears in window
B that defines the region you want rendered.
3. To size the cropping rectangle, click on the sides or corners of the
rectangle and drag the mouse. When satisfied with the size,
release the mouse button.
4. To move the rectangle, click inside it and drag. When satisfied
with the position, release the mouse button.
5. The Preview Setup dialog box reappears. Click Exit.
6. Choose the Preview > Subregion command so it is active.
7. Choose Preview > All or Selection and then middle-click.
The preview appears on the screen showing only the subregion
you selected.
You can also define the subregion by assigning values to Subregion
Pixel and Percent (left, right, bottom, top).
For more information on subregion rendering, see Rendering a
Subregion on page 36.
To adjust the way hidden lines are processed for polygon mesh objects,
select the Automatic Discontinuity option in the dialog box that
appears when you choose the Info > Selection command. The
Edge_Flag menu commands in the Matter module also let you adjust the
hidden-line process (see Showing and Hiding Object Edges on page 39).
To preview a hidden-line image, follow these steps:
1. Select any window by clicking the letter identifier box (A, B, C, or
D) in the upper-left corner of the desired window. The window is
highlighted in red. If you don’t select a window, window B is
used.
2. Choose the Hidden_Line command in the Tools module
(Faceted or Smoothed relates to the type of hidden-line renderer
you can choose in the Render Setup dialog box).
The image is rendered and displayed in the window you selected.
3. Middle-click to close the window.
Optimizing Rendering Time
Modelling
When modelling, try to minimize the total number of triangles in a
scene. You can do this starting from the earliest stages of the working
process. For example, if you are drawing a curve to be used for extrusion
or revolution, you should use the fewest points possible to create the
desired contour: the resulting object will have fewer triangles.
If you plan to include lots of smooth rounded objects in your scene,
you can model them as polygonal mesh objects, then simulate their
surface smoothness by setting the Automatic-Discontinuity option
in the dialog box opened by the Info > Selection command: if the
objects were extruded from triangles instead of circles, they would
have fewer triangles.
If you are working with patch or NURBS surface objects, you can
minimize the number of triangles for objects that are located farther
away from the camera by lowering their geometric resolution, again
by using the Info > Selection command and its dialog box for patch
models. Also, instead of modelling very complex objects that have
many triangles, you can use texture mapping techniques for
transparency and roughness to simulate the desired effects.
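The distance-based resolution reduction described above can be sketched as a simple level-of-detail rule. The thresholds here are illustrative assumptions, not values from SOFTIMAGE|3D:

```python
def patch_steps(base_steps, distance, full_detail_dist=10.0, floor=2):
    """Halve a patch object's U/V subdivision count for each doubling of
    the camera distance beyond full_detail_dist, never dropping below
    `floor`. Fewer steps means fewer triangles after tessellation."""
    steps = base_steps
    d = distance
    while d > full_detail_dist and steps // 2 >= floor:
        steps //= 2
        d /= 2.0
    return steps
```

A distant object then tessellates into a fraction of the triangles it would need up close, which is the saving the paragraph above recommends.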
Material Attributes
You can further optimize rendering time by keeping material
attributes such as reflectivity, transparency, and refraction, as well as
raytracing depth, and the number of lights and shadows in a scene to
a minimum. When used in combination, these attributes can
increase rendering time significantly. To save rendering time you can,
for example, apply a reflection map without raytracing to simulate
reflectivity. You can also apply reflectivity or transparency to only
one or two objects in a scene rather than to all of them. If you need to
increase raytracing depth, perform tests to determine the actual
required depth. Also, remember that each light you add to a scene
increases the rendering time.
Compositing
Compositing is another very practical way to optimize rendering
time. By rendering constant elements of a scene, such as the
background, separately from other elements then compositing them
together afterwards, you can save a lot of rendering time. The time
saved is even greater if raytracing is required. The number of layers
that can be composited is unlimited. For more information, see
Compositing Images on page 111 of the Using Tools User’s Guide.
Render/Memory Guidelines
These are general rendering guidelines and should not be taken as
system limitations because there are ways of maximizing memory
usage or adding to available memory.
SOFTIMAGE|3D uses a Binary Space Partitioning (BSP) scheme (for
raytracing) which needs to “see” an entire scene to compute the rays’
trajectories. This allows for fast rendering, but very large scenes can
put large demands on memory.
Memory can be increased by adding RAM (Random Access Memory)
or by allotting more disk space to swap space (virtual memory).
Because of the relatively slow access time of reading
information from disk, a scene that requires disk swapping renders
more slowly than a scene that fits into RAM. Swapping to disk varies in
degrees of relative slowness, depending on where and how often the
system needs to retrieve information from the disk.
Rendering a Sequence
To render an object or scene and save the image to file, you can use
the Render menu command in the Matter module or the standalone
renderer that is run from the command line in a shell.
With the Render menu command, its accompanying Render Setup
dialog box allows you to specify the type of renderer to use, the file
output name, the frame numbers to be rendered, image resolution,
antialiasing filter, pixel ratio, motion blur, and raytracing depth, as
well as other options.
Choosing a Renderer
To choose a render type, choose the Render menu command and
select a type from the Rendering Type menu. Here’s a brief
description of the various renderer types available to you:
Softimage Renderer
The SOFTIMAGE renderer is the default renderer. The extension of the
output file is .pic. The SOFTIMAGE renderer uses most of the parameters
in the Render Setup dialog box so that you can perform complete
rendering tasks. You can also run this renderer from the command line
(see Running the Renderer from the Command Line on page 31).
mental ray
The mental ray renderer is a high-quality, photorealistic renderer. It
provides an extensive set of built-in functions and can be
dynamically linked with user-defined shaders during the rendering
process. You don’t have to use shaders with mental ray, but there are
many different types of shaders which you can use to create
procedural textures (including bump and displacement maps),
materials, camera lenses, atmospheres, light sources, etc.
You can choose either mental ray 1.9 or mental ray 2.1. Note that
mental ray 2.1 contains features that mental ray 1.9 does not, such as
caustics and global illumination.
After selecting mental ray from the Rendering Type list, you can set
the parameters specific to it. These are accessed by selecting the
Antialiasing and Motion Blur options, as well as the Options
button.
The extensions of the output files available for mental ray are:
.qntntsc (Abekas NTSC), .qntpal (Abekas PAL), .ct16 (RGBA 16
bits), and .rgb.
For more information on rendering in mental ray, see Rendering with
mental ray Software on page 65.
Wire Frame
The wireframe renderer renders wireframe images, which are made
up of the edges of objects and drawn as lines (this is the default view
shown in the windows when you open SOFTIMAGE|3D). This
renderer displays tracing features, such as edges or contour lines
without attempting to remove invisible or hidden parts, or fill
surfaces. The resulting image is in .lin file format.
Depthcue
The Depthcue renderer provides colour wireframe rendering. The
portion of the model closest to the camera appears “brighter” in
colour, giving the sense of depth in the z-axis. It is usually used to
derive more visual feedback from the models displayed on a screen
because the hardware shading is often too slow for complex models
and scenes.
Since the alpha channel is not calculated in the depthcue renderer,
compositing must be done using an external compositor, such as
SOFTIMAGE®|Eddie.
Hardware Renderer
The hardware renderer renders the same view you see when you
choose the Shade view mode in a window. The resulting image is
saved in .pic file format. The image produced is of lower quality;
objects show colour and lighting effects, but not shadows, reflection,
or transparency.
Ghost
The Ghost renderer renders the same view you see when you choose
the Ghost view mode in a window. It allows you to display a series of
frozen snapshots of your animated objects at the current frame, next
frames, and previous frames. These frames remain in the view
regardless of the position of the time line pointer and are used as
reference point.
Rendering a Scene
When rendering with the Depthcue, Hidden Line, Wireframe,
Hardware, Ghost, or Rotoscope (Wire or Shade) rendering types, you
can use any of the displayed windows. Rendering with the other
rendering types must be done using window B with the Perspective view.
To do a basic rendering of a scene, follow these steps:
1. Select the window you want to render by clicking the letter
identifier (A, B, C, or D) icon in the upper-left corner of the
window’s title bar. The selected window is outlined in red. If no
window is selected, rendering is done in window B.
2. Choose the Render menu command in the Matter module. The
Render Setup dialog box is displayed. The parameters described
in this chapter refer to this dialog box.
3. Select a Rendering type (one of the previously described renderer
types).
4. Specify the sequence by setting the Start and End frame, and the
incrementation step.
5. Leave the default resolution as it is, but you can change it as
described in the next section.
6. Specify the name of the scene that you want to render by clicking
the Select button in the Output Image area and selecting a file
from the browser that appears. You can also type a name by
which you want to save the rendered image.
7. If you have the choice of file formats, select one from the File
Format menu.
For a very basic rendering, this is all you really need to set. If you
want to set raytracing, antialiasing, motion blur, or other more
sophisticated options, see Advanced Rendering on page 43.
8. Click the Render Sequence button to start the rendering process.
By default, the rendered picture is saved to the RENDER_PICTURES
chapter of your working directory.
9. To view the rendered image or sequence, see the Using Tools
User’s Guide for information on different viewing commands and
utilities available.
Using the Standalone Renderer
These two techniques put the process into the background, and allow
the current shell to be used for other things. The process continues to
run until the window it is running in is terminated.
If you want the process to continue after the window is terminated,
or even after logging out, use the nohup command. To do this, create
an alias such as:
alias doit nohup soft -R
For example:
doit final_logo_job -d tutorials -L -s 450 1 >& log &
Usage
soft -R my_scene.1-0.dsc -m <filename>
If you render the same thing from frames 11 to 13, you’ll get:
/usr/people/you/obladi/RENDER_PICTURES/blada 11 0 1
/usr/people/you/obladi/RENDER_PICTURES/blada 12 1 1
/usr/people/you/obladi/RENDER_PICTURES/blada 13 2 1
If you render the same file with field rendering on, with the
dominant field even or odd, you’ll get:
/usr/people/you/obladi/RENDER_PICTURES/blada 11 0 1
/usr/people/you/obladi/RENDER_PICTURES/blada 11 0 2
/usr/people/you/obladi/RENDER_PICTURES/blada 12 1 1
/usr/people/you/obladi/RENDER_PICTURES/blada 12 1 2
/usr/people/you/obladi/RENDER_PICTURES/blada 13 2 1
/usr/people/you/obladi/RENDER_PICTURES/blada 13 2 2
If you specify any parameters after the name of the file in the Pre-
frame or Post-frame script text boxes, the script finds these at the
beginning, not the end, of the variable list provided to the script. For
example, if you specify the following path in the script text box:
/path/example_script hello
the example script gives:
hello/usr/people/you/obladi/RENDER_PICTURES/blada 1 0 1
hello/usr/people/you/obladi/RENDER_PICTURES/blada 2 1 1
hello/usr/people/you/obladi/RENDER_PICTURES/blada 3 2 1
If you specify the following path:
/path/example_script hello goodbye
the result is:
hello goodbye/usr/people/you/obladi/RENDER_PICTURES/blada 1 0
hello goodbye/usr/people/you/obladi/RENDER_PICTURES/blada 2 1
hello goodbye/usr/people/you/obladi/RENDER_PICTURES/blada 3 2
The output doesn’t show the field number at the end of each line
because the example script only prints out the first five arguments.
Rendering a Subregion
If you don’t want to render your whole scene, you can render a portion
of it, called a subregion. There are two ways to do this: by either
choosing certain options in the Render Setup dialog box (Render
menu command) or by using the standalone renderer (soft-R).
Later, you can simply composite the rendered images by defining the
full resulting resolution. The composite standalone uses the
information in the “comments” area of the picture file. A sample
composite script exists for nine quadrants resulting in a 1270 x 714
image. Execute this in the directory where the picture files reside.
composite -S 1270 714 final -d -v -s 1 1 1 a b c d e f g h i
Showing and Hiding Object Edges
Hiding Edges
The Edge_Flag > Hidden/Rect hidden commands in the Matter
module allow you to define one or more edges of a polygon mesh
object as hidden when using the Hidden Line renderer. The hidden
edges are hidden regardless of the position of the object.
1. Select a polygon mesh object.
2. Choose the Edge_Flag > Hidden/Rect hidden command.
3. Pick the edges individually or perform a rectangular selection
using the left mouse button.
The edges are highlighted in cyan. The middle mouse button
deselects the edge, and the right mouse button toggles.
4. Press Esc to end the mode.
Showing Edges
The Edge_Flag > Visible/Rect visible commands in the Matter
module allow you to define one or more edges of a polygon mesh
object as visible when using the Hidden Line renderer. The visible
edges are visible regardless of the position of the object.
1. Select a polygon mesh object.
2. Choose the Edge_Flag > Visible/Rect visible command.
3. Pick the edges individually or perform a rectangular selection
using the left mouse button.
The edges are highlighted in yellow. The middle mouse button
deselects the edge, and the right mouse button toggles.
4. Press Esc to end the mode.
Tip: The Show > Edge Flags command also shows or hides
visible or hidden edges. For curves and surface objects, these
are defined automatically upon creation.
Viewing Rendered Images
CHAPTER THREE
Advanced Rendering
Introduction
In addition to the basic rendering process, there are some rendering
features that are not necessary for a basic render but can produce
useful effects. With rendering effects such as raytracing, antialiasing,
field rendering, and motion blur, you can make your animation look
much more realistic and give it a polished edge.
• Raytracing lets you render a scene with more realistic and precise
details, giving you a high quality render. You need to use raytracing
to calculate reflections, refractions, and shadows.
• Antialiasing is a method of smoothing out and sharpening rough or
fuzzy edges of graphics to produce a more polished look. This is
done by a mathematical process that subsamples pixels.
• Field rendering is used to reduce the strobing effect on fast moving
objects for rendering to video.
• Motion blur defines the relative blur of a moving object, usually for
special effects and for fast moving objects with lateral motion.
You can also render the Tag and Z channels of an image to use tag
channel and depth information, which is useful for compositing and
image processing.
Details for each of these options are described on the following pages.
Each of these can be accessed from the Render Setup dialog box.
Raytracing
Raytracing calculates the light rays that are reflected, refracted, and
obstructed by surfaces. When rendering with raytracing, you will
have more realistic and precise results. However, it takes much longer
to render with a higher raytracing value.
If you have a scene with reflection, refraction, transparency, or
shadows in it, it will take a fairly long time to render. The beauty of
raytracing is that you can render what you want: if you want to see
only the effects of the animation, or just the effects of reflection, you
can deselect all of the other features and show only these options.
This will take less time to render, yet you benefit from the realistic
results that rendering with raytracing offers.
If there is no reflection or refraction in your scene, raytracing occurs
at about the same speed as hardware rendering.
The following illustrates when you would want to use raytracing
using the SOFTIMAGE renderer.
1. Select an object.
2. Choose the Material menu command in the Matter module. In the
Material Editor, select a glass material and apply it to the object.
3. Increase the Reflection and Transparency values.
4. Choose the Render menu command. The Render Setup dialog
box is displayed.
5. Select SOFTIMAGE as the rendering type.
6. Click the Options button. In the Options dialog box, set the
RayTraced Depth to 3 or 4. This means that the light rays will
bounce around the scene this many times.
7. Specify the sequence by setting the Start and End frame, and the
incrementation step.
8. Click Render Sequence and look at the results of rendering
with raytracing.
Note: If you want to put restrictions on the rendering process,
such as rendering with only the reflection turned on, choose
Preview > Setup. Make sure Global On is selected and
choose which other parameters you want to have selected,
such as 2D Textures, Reflectivity, Shadows, etc.
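The effect of the RayTraced Depth setting in step 6 can be illustrated with a toy recursion: each bounce of a reflective surface contributes a diminishing amount of energy, and the depth setting caps how many bounces are traced. This is an illustrative model with an assumed reflectivity, not the renderer's shading code:

```python
def shade(depth, reflectivity=0.5, base=1.0):
    """Toy model of a raytracing depth limit: a mirrored surface keeps
    spawning reflection rays, each contributing base * reflectivity**n,
    until the remaining depth reaches zero."""
    if depth == 0:
        return 0.0
    return base + reflectivity * shade(depth - 1, reflectivity, base)
```

Running this for increasing depths shows why a depth of 3 or 4 is usually enough: each extra bounce adds less than the one before, so testing for the actual required depth saves render time at little visual cost.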
Raytracing in SOFTIMAGE|3D
SOFTIMAGE|3D raytracing uses a modification of the Glassner/
Kaplan spatial subdivision algorithm. In preprocessing the database,
first the bounding box (containing every object in the scene) is found.
This box is then divided in half on the x-axis. Both of the boxes are
then divided in half on the y-axis. The resulting four boxes are each
divided in half on the z-axis. These eight boxes are divided in half on
the x-axis… ad infinitum. This subdivision is halted when either:
• A box contains less than the preset number of triangles (triangles
per leaf)
or
• The number of divisions is greater than the preset maximum
(maximum tree depth).
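The subdivision just described can be sketched recursively. This is a simplified illustration (triangles are represented only by their bounding boxes), not the renderer's actual data structure:

```python
def build_bsp(bounds, triangles, axis=0, depth=0,
              tris_per_leaf=8, max_depth=20):
    """Spatial subdivision as described above: split the box in half on
    x, then y, then z, cycling, until a box holds few enough triangles
    or the maximum tree depth is reached. Each triangle is given as an
    axis-aligned bounding box ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    if len(triangles) <= tris_per_leaf or depth >= max_depth:
        return {"leaf": True, "tris": triangles, "depth": depth}
    lo, hi = bounds
    mid = (lo[axis] + hi[axis]) / 2.0
    left_hi = list(hi); left_hi[axis] = mid
    right_lo = list(lo); right_lo[axis] = mid
    # A triangle goes into every half its bounding box overlaps.
    left_t = [t for t in triangles if t[0][axis] <= mid]
    right_t = [t for t in triangles if t[1][axis] >= mid]
    nxt = (axis + 1) % 3  # cycle x -> y -> z -> x ...
    return {"leaf": False,
            "left": build_bsp((lo, tuple(left_hi)), left_t, nxt,
                              depth + 1, tris_per_leaf, max_depth),
            "right": build_bsp((tuple(right_lo), hi), right_t, nxt,
                               depth + 1, tris_per_leaf, max_depth)}
```

The two stopping criteria map directly to the Triangles per Leaf and Max Tree Depth settings discussed below.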
In the rendering, the ray is passed from box to box until it hits
something. At each box, the ray must be tested against all contained
triangles. This triangle intersection testing is relatively slow
compared to the calculation of the next appropriate box, so it is to
your advantage to have a small number of triangles in each box.
However, as the number of triangles grows smaller, the amount of
memory required to hold all the boxes increases, as does the time
taken at the preprocessing stage.
Finding the proper Max Tree Depth is more of an art than a science.
Empirical studies suggest that the best value is just beyond the point
where the number of leaves begins to decrease. To find this point, set the Max
Tree Depth to some large number (such as 50), select the BSP Tree
statistics in the Render Setup dialog box, and run the program (it is
not necessary to render a complete picture; the program can be
stopped when the first scan line appears). Exit the program and
examine the “stats” file. There should be a relatively obvious point at
which the number of leaves begins to decrease. Set the Max Tree
Depth to this number plus 1.
This is not an exact method, but by experimenting with the number
of Triangles per Leaf and Max Tree Depth, it is quite possible to find
a combination that results in a faster rendering time.
Antialiasing
Aliasing usually occurs when pixel resolution is limited. Antialiasing
is a method of smoothing rough or jagged edges in an image to
produce a more polished look. It uses a mathematical process that
subsamples the pixel area.
The number of pixels in a scene depends on the screen resolution.
The greater the resolution, the greater the number of pixels. The
smaller the resolution, the smaller the number of pixels.
The more pixels there are, the less aliasing occurs and the smoother
the edges appear. When there are not enough pixels, you need to use
antialiasing to make the lines look smoother.
Tip: Avoid adding antialiasing when you render a texture map
that fills the screen, because its edges aren’t visible
anyway. If the texture map itself is already antialiased, you
don’t need to add more antialiasing when you render.
To use antialiasing with the SOFTIMAGE renderer, follow these steps:
1. Choose the Render menu command. The Render Setup dialog
box is displayed.
2. Select the SOFTIMAGE Rendering type. To use antialiasing with
the mental ray renderer, see page 88.
3. Specify the sequence by setting the Start and End frames and the
step increment.
4. Select Antialiasing. The Antialiasing dialog box appears, in which
you can select either Bartlett or Adaptive supersampling as the
antialiasing method.
To help you choose the type of antialiasing to use, here’s an
explanation of the differences between Adaptive supersampling and
the Bartlett filter.
Bartlett
Bartlett is a static oversampling algorithm based on a variable width
box filter. As you increase the filter level, more of the adjacent
subpixel samples are taken into account while filtering a pixel. The
expense of this algorithm comes from all pixels being supersampled
at the same rate (sample level) regardless of the content affecting that
pixel; that is, a pixel with no geometry covering it gets sampled at the
same rate as a pixel with lots of geometry. One side effect of this
algorithm is due to using the box filter. As the sample rate (and thus
the width of the box) increases, more subsamples are taken into
account for a pixel; at high sample rates, this will sometimes lead to
“soft” looking images, meaning there are no well-defined edges in
the resulting image. Based on empirical data and practical limits of
time, a sample level of 4 is about the maximum advisable value.
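The fixed cost described above can be sketched as follows. This Python sketch assumes equal subsample weights (a plain box average; the real Bartlett filter weights samples by distance), and `shade` stands in for the renderer’s colour lookup:

```python
# Sketch of static supersampling: every pixel is sampled at the same
# rate, regardless of what covers it. `shade(x, y)` is a stand-in for
# the renderer's colour evaluation at a subpixel position.

def supersample_pixel(shade, px, py, level):
    """Average a (level+1) x (level+1) grid of subsamples over pixel (px, py)."""
    n = level + 1
    total = 0.0
    for i in range(n):
        for j in range(n):
            # subsample positions spread evenly across the pixel
            sx = px + (i + 0.5) / n
            sy = py + (j + 0.5) / n
            total += shade(sx, sy)
    return total / (n * n)
```

Note that an empty pixel costs exactly as much as a geometry-heavy one, which is the expense the text describes.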
Adaptive
For Adaptive supersampling, the algorithm is intended to decrease
the oversampling rate for pixels that do not require it, based on
the contrast thresholds specified. First, subpixel samples
are shot at the corners of each pixel. The variation in contrast is
evaluated and, if required, the pixel is subdivided into four subpixels
by sampling the full-sized pixel five more times, once in the centre
and once in the middle of each edge of the pixel. From here, each of
the four subpixels has the algorithm applied recursively, treating each
of the four as if they were actually a full pixel. This process continues
until the recursion limit is reached.
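The recursive contrast test can be sketched as follows. This is only an illustration, not the renderer’s code; the shading function and the contrast measure (maximum minus minimum of the corner samples) are assumptions:

```python
# Sketch of adaptive supersampling: sample the four corners of a
# square, and subdivide into quadrants only where the contrast
# between corner samples exceeds the threshold.

def adaptive_sample(shade, x, y, size, threshold, depth, max_depth):
    """Return an averaged value for the square (x, y)-(x+size, y+size)."""
    corners = [shade(x, y), shade(x + size, y),
               shade(x, y + size), shade(x + size, y + size)]
    contrast = max(corners) - min(corners)
    if contrast <= threshold or depth >= max_depth:
        return sum(corners) / 4.0

    # Subdivide: recurse into the four quadrants, treating each as if
    # it were a full pixel (the quadrant corners add the five extra
    # samples: the centre and the edge midpoints).
    half = size / 2.0
    quads = []
    for qx, qy in ((x, y), (x + half, y), (x, y + half), (x + half, y + half)):
        quads.append(adaptive_sample(shade, qx, qy, half,
                                     threshold, depth + 1, max_depth))
    return sum(quads) / 4.0
```

A flat region is settled with the initial four samples, while a high-contrast edge drives recursion down to the limit — which is why adaptive sampling is cheaper on mostly uniform images.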
Visible Differences
You can see the difference between images filtered with the Bartlett or
Adaptive algorithm. These differences are primarily from the number
of subpixel samples shared between adjacent full size (result) pixels.
In the Adaptive algorithm, only the subpixel samples along the edges of
the full pixel are shared between adjacent pixels (that is, the four corner
samples are shared among the eight adjacent pixels). Here’s a picture:
A B C D E
F G H I J
1 2
K L M N O
3 4
P Q R S T
U V W X Y
This matrix represents image pixels: 1, 2, 3, and 4 are the first-level
subsamples made for filtering pixel M. Samples 1 and 3 are also used
in the filtering of pixels G, H, L, Q, and R. Thus only these
subsamples (along with any samples made between 1 and 3) are
shared and used for filtering pixel L, and so on.
In contrast, the box filter used in Bartlett can be viewed as a window
that moves along the scan line being filtered. All subsamples that fall
in the window are used to obtain the resulting pixel value for a given
pixel on the scanline. Assume we are filtering area M again with a box
filter that covers only two adjacent pixels: the letters now represent
subpixel samples, not image pixels. The window will cover all of the
subpixel samples A-Y, with M weighted most since it is central. Now
move the window to the next sample to filter N: now samples A, F, K,
P, and U are unused, but all the rest are used, as well as the samples
that are outside the diagram past E, J, O, T, and Y. So you can see that
subsample M still contributes to filtered pixels at both N and O.
Subpixel samples affect filtered pixels up to half the width of the box
filter away.
Choosing an Algorithm
Results are usually a matter of preference.
The filtering effect of the Adaptive algorithm is slightly more
localized than the Bartlett. The reduction of computing time is often
found favorable and the filtering adequate with Adaptive. It only
recursively subdivides at the sub-pixel level, so the image will remain
crisper even at high filter levels.
If you don’t mind slightly “softer” looking images, and can deal with the
overhead of a static supersample, you will like Bartlett. Bartlett gives you
a progressively “softer” image the higher the filter level because it is
similar to blurring the image (more pixels are averaged into one).
You can use motion blur from the Render Setup dialog box or from
the command line using the mb standalone program.
The following illustrates how to apply motion blur to an object using
the Render Setup options:
1. Select an object or objects.
2. Choose the Render menu command. The Render Setup dialog
box is displayed.
3. Select the SOFTIMAGE rendering type.
4. Select Motion Blur. The Motion Blur dialog box appears.
5. Enter the Shutter Speed which allows you to set the length of
time the shutter is open. Values are measured in frames.
6. Enter the Min. Movement which sets the sampling rate while the
shutter is open and the minimum amount of movement to be
blurred. The value is measured in pixels. A low value increases
the sampling rate and the memory required. The default value is
5.
7. Click Accept to return to the Render Setup dialog box.
You can also use motion blur with the mental ray renderer. This may
be especially useful if you want to apply motion blur to a single
object in your scene, since it is much quicker. For more information on
using mental ray, see Blurring a Moving Object Using mental ray on
page 91.
Blurring a Moving Object
Half Blur
You can choose to activate or deactivate the blur effect in front of or
behind an object. To produce a more traditional animation, you can
control the blur motion by activating its effect only behind the
moving object. Half Blur is available in only the mb standalone.
Alpha Channel
By default, resulting images will also have a blurred alpha channel. If
the sequence of animation is not to be composited, it is possible to
turn off the blurring of the alpha channel to reduce the time required
to finish blurring the sequence.
Depth Computation
To reduce computation time, you can disable use of the depth
information in the blurring process. This is only recommended when
the objects in motion do not cross paths.
• Motion blur is extremely handy because the colour element can be
rendered first. Render colour elements intended for motion
blurring using only small amounts of antialiasing, or no antialiasing
at all, because motion blur masks the aliased edges in the resulting
image anyway.
• Motion blur used on 3D procedurally texture-mapped objects
produces poor results.
• Motion blur has no effect on the apparent motion caused by
camera movements. This is due to the number of moving objects
that would over-extend the limits of the buffer memory.
Tip: If you use field rendering, you can produce a smoother
effect because you are doubling the number of images per frame.
Memory Usage
Calculating motion blur can be very memory intensive. When the
message “malloc failed, reduce memory demands” is displayed, it
indicates that animated objects have exhausted the memory of the
system and that all available RAM and swap space have been exceeded.
This situation can be solved by using the following techniques:
• Limit rendering to one object at a time, and once completed,
composite the rendered objects together. Applying motion blur to a
single object usually achieves the desired effect.
• Reduce the physical size of the objects being blurred, because the
number of replicated objects depends on a combination of object size,
rendered resolution, shutter speed, and the minimum movement values.
Inaccessible Data
Since motion blur uses image-based information, inaccuracies can
occur from loss of data as objects enter and exit the frame. This is
because the object position data that originates outside the frame is
inaccessible to the motion blur’s algorithm.
Transparent Objects
Transparent objects may cause unexpected artifacts to appear in
blurred images. Scenes that contain moving transparent objects
passing in front of static objects create visible artifacts. The pixels
associated with the moving object are blurred even though the
colour contribution of those pixels is primarily from the background
object. Conversely, if a moving object passes behind a static
transparent object, the portion of the object visible through the
transparency will be blurred incorrectly.
Field Rendering
Field Rendering is used to reduce the strobing effect that results from
fast moving objects when rendering to video. If you are rendering to
film, refer to motion blur (see page 52).
SOFTIMAGE|3D uses a wide-pixel rendering technique. Internally,
the camera’s field of view is perceived as being half as high, but each
scan line is amplified to twice its thickness. The sampling of the
pixel, therefore, is estimated over a larger area than normal. The
camera first considers the even and then the odd lines. It is raised or
lowered to ensure that the two different images have the correct
visual orientation. The direction of compensation depends on
whether your dominant field is set to even or odd. The following
figure illustrates the use of the wide-pixel rendering technique.
(figure: wide-pixel rendering of scan lines 1 to 4, alternating even and odd fields)
The following example shows how the scene is viewed by the camera at
render time. The direction of compensation corresponds to your
dominant field setting (even or odd). Each line is scanned at 50% wider
than the normal sample line to ensure correct picture proportions.
(figure: the regular field of view compared with a field-render sample pixel, 1 pixel wide)
There are two versions of the post-frame script file: inter01 and
inter02. File version inter01 corresponds to a dominant field setting
of 2,1 and inter02 is for a 1,2 setting. Select and use the post-frame
file version that corresponds to your dominant field setting. These
files ensure that merging occurs after each full frame (two fields) has
been processed. The post-frame script files invoke the standalone
program called interleave. The script files are located in the
/usr/softimage/3D/tools/scripts directory.
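What the interleave standalone does can be sketched as a simple scanline merge of the two rendered fields. This Python sketch is an illustration only, not the actual program; the dominant-field handling shown is an assumption:

```python
# Sketch of a field-interleave step: merge two half-height field images
# into one full frame, scanline by scanline. Which field supplies the
# even lines depends on the dominant-field setting (2,1 versus 1,2).

def interleave(field_a, field_b, dominant_even=True):
    """field_a and field_b are equal-length lists of scanlines."""
    assert len(field_a) == len(field_b)
    frame = []
    for line_a, line_b in zip(field_a, field_b):
        if dominant_even:
            frame += [line_a, line_b]   # field_a fills even lines 0, 2, 4...
        else:
            frame += [line_b, line_a]   # field_b fills even lines
    return frame
```

Swapping the flag reverses the field order, which corresponds to choosing inter01 versus inter02 above.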
Note: The inter03 and inter04 scripts perform the same function
as inter01 and inter02 respectively, except that they remove
the separate image fields after the composite frames have
been created.
To use field rendering with the SOFTIMAGE or mental ray renderer,
follow these steps:
1. Choose the Render menu command in the Matter module.
2. In the Render Setup dialog box, select either SOFTIMAGE
renderer or mental ray as the rendering type.
3. Select the Field Rendering option. The Field dialog box is
displayed.
4. Select the Active option, then select the Even or Odd option to
correspond to your Dominant field environment.
5. Click the Accept button.
6. Enter a Post-frame script in its text box, or click the Post-frame
button to open a browser in which you can locate the
/usr/softimage/3D/tools/scripts/inter01 file, which is used to
combine the rendered frames. Select the post-frame script file,
inter01 or inter02, that corresponds to your video editing
environment. You may want to copy this file to your working
directory.
Note: If the fields fail to combine after they have been rendered,
verify that the appropriate ASCII file has been entered in
the Post-frame text box and not mistakenly placed in the
Pre-frame text box.
Creating a Setup
These two steps will help you recognize the four possible resulting
field combinations produced by the post-frame script files:
1. Select the appropriate post-frame script file to suit your
dominant field setting.
2. Render a single frame from a merged sequence and record a few
seconds on tape. A cube moving over 10 frames with a stationary
patch sphere was used for this test. The cube is used to verify
correct field order, and the sphere is used to verify environment
type.
Compare your results with the following:
• A normal image where the correct field setting and appropriate
post-frame script were selected.
• If the cube appears to jump back and forth, the incorrect field setting
and inappropriate post-frame script were selected. To view this
clearly, use the VTR in slow frame-by-frame motion. Correct this by
changing your dominant field setting and using the alternate post-
frame script file to reverse the order of the frames when combined.
• If the cube appears broken at the top and bottom pixels, but the
motion is smooth, an incorrect dominant field setting and correct
post-frame script were selected. Correct this by changing your
dominant field setting.
• If the sphere is stable and the motion is not smooth, a correct
dominant field setting and incorrect post-frame script were
selected. Correct this by using the alternate post-frame script file.
Note: The Avanzar video output board can read and combine the
fields as it records to tape. This may eliminate the need to
use post-frame script files, unless you need to evaluate the
quality of the combined frame or have access to the
flipbook display on the workstation.
Tip: It is recommended that NTSC users set the Dominant field
to ODD in the Render Setup dialog box, and save the setup
file with the Preferences > Setup File > Save command. By
including the -s option at the end of the soft alias string, this
feature is automatically set to the correct position for each
Rendering Tag and Z Channels
Tag Channels
The information contained in the Tag Channels consists of 1-bit
stored layers, identified by using the Select ➔ Set Named Selection
command, specifying a name, and selecting the User Tag option.
The rendered file has the form <name>.<frame number>
with a .tag extension instead of the usual .pic extension.
User Tag
Specifies the items to be included in a user tag selection. Tagged
objects are rendered into a separate file containing a mask that shows
all pixels in the image affected by the tagged model.
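The idea of a tag mask can be sketched as follows, assuming a hypothetical per-pixel object-ID buffer. SOFTIMAGE writes the result to a .tag file; this Python sketch only illustrates the mask itself:

```python
# Sketch of a tag channel: a 1-bit mask marking every pixel affected
# by the tagged objects. `id_buffer` is a hypothetical per-pixel
# object-ID buffer, not a SOFTIMAGE data structure.

def tag_mask(id_buffer, tagged_ids):
    """Return a 0/1 mask with the same shape as the ID buffer."""
    return [[1 if pixel_id in tagged_ids else 0 for pixel_id in row]
            for row in id_buffer]
```

Because each layer is a single bit per pixel, the mask carries no depth — which is why, as the Note below explains, a whole tag layer is treated as one section.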
Note: You can define multiple objects within each Tag layer, but
since there is no depth information, Painterly Effects
considers the whole layer as one section. Only one pixel is
designated as the “tagged” pixel. Assign objects that require
a specific effect to a separate layer. If two different scene
images (layers) have the same tagged pixel, the topmost
layer is used.
Z Channel
The Z channel information allows for more advanced compositing
operations. Z channel provides depth information so that you can
position an object in front of and in back of the background image.
One useful application would be to use the Z channel information to
allow an object in a scene to interact with a background image.
Without the Z channel information, the compositor can only decide
which layer should be placed on top, confining selected objects to the
front of the background image.
Z channel information is also useful when compositing a
SOFTIMAGE|3D scene with particles created in the SOFTIMAGE®
Particle program.
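A minimal sketch of depth-based compositing illustrates why the Z channel lets a layer pass both in front of and behind another. This is illustrative only — real compositors also blend and filter — and smaller Z meaning nearer the camera is an assumption here:

```python
# Sketch of depth-based compositing: for each pixel, keep the colour
# of the layer whose Z value is nearer the camera.

def z_composite(color_a, z_a, color_b, z_b):
    """Per-pixel merge of two layers using their depth channels.

    Smaller z is assumed to mean closer to the camera.
    """
    out = []
    for ca, za, cb, zb in zip(color_a, z_a, color_b, z_b):
        out.append(ca if za <= zb else cb)
    return out
```

Note that the choice is made pixel by pixel, so a single object can emerge in front of the background in one region and disappear behind it in another.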
Limitations
The quality of the Z compositing diminishes when objects physically
intersect. This is because the polygonal triangles of one object are
overlapped by the object placed in front of it. The program averages
and filters the colour pixel information, but cannot do the same for
object location in space. Consequently, the resulting image appears
unfiltered at object intersection points.
Presently, it is impossible to antialias depth information because the
program calculates only a single depth sample per pixel.
C H A P T E R F O U R
Rendering with mental ray Software
Introduction
The mental ray renderer is a high-quality, photo-realistic renderer
available in SOFTIMAGE|3D. This renderer allows you to perform
many special effects as part of the rendering process instead of having
to create these effects in the scene itself, which keeps your scenes as
small as possible. This feature of mental ray also lets you save the
settings you used during one rendering process and use them in
another one.
mental ray gives you more versatility, better image quality, and the
ability to run on a network or on a multi-processor machine.
Another great aspect of mental ray is its open architecture, which
gives you the option of writing and applying custom shaders.
Custom Shaders
The mental ray renderer provides an extensive set of built-in
functions and can be dynamically linked with user-defined shaders
during the rendering process. You don’t have to use shaders with
mental ray, but there are many different types of shaders you can use
to create procedural textures (including displacement maps),
materials, camera lenses, atmospheres, light sources, etc.
SOFTIMAGE|3D supplies sample shaders of each type, but you will
probably want to use mental ray to its full advantage and create your
own shaders (see the mental ray Programmer’s Reference Guide, which
is located on CD #3). You can use any combination of these shaders
to achieve specific results, such as using a lens flare lens shader for the
camera with a “star” shader for the light.
The options in this dialog box are “smart” because they edit only the
parameters which relate to a particular type of object. If there is a
patch object and a polygon mesh object in the selected group, the
Propagation parameters edited will affect only the objects to which
they are applicable.
3. Select the Select/Unselect propagated field option for the Edit
mode. Pick the fields you want to edit and they become unghosted.
4. Change the parameters that you want (see the next section), and
click the Apply on Selected button. This applies the parameters
set to the selected objects, but leaves the dialog box open.
5. Click Exit to close the dialog box.
Visibility of Objects
The three Object Visibility options determine if an object is visible
when rendered with mental ray. You can also choose to have a visible
shadow and reflection of that object. These options can be selected
individually or combined to achieve the desired effect. The default is
that all three options are selected.
• Primary rays refers to all rays emitted from the camera. It allows
you to specify the visibility of an object in the scene. When this
option is selected, the object is visible.
• Secondary rays allows you to turn an object’s reflection in a
mirrored surface on or off. When selected, a reflection is visible.
• Shadows allows you to turn an object’s shadow on or off. When
selected, a shadow is cast.
For example, if you create two characters (an angel and a devil) and
make them both walk past a mirror, you can set the Object Visibility
parameters so that only the angel is visible to the camera, but the
shadow and the reflection in the mirror belong to the devil. To do
this, select only the Primary rays option for the angel, and select only
the Secondary rays and Shadows options for the devil.
Motion Blur
The motion blur options allow you to specify varying degrees of
motion blur for a selected object. For information, see Blurring a
Moving Object Using mental ray on page 91.
Shadow Maps
You can enable mental ray shadow maps, which are fast
approximations of raytraced shadows. See Creating mental ray
Shadow Maps on page 99 for more information.
Displacement Maps
You can also use mental ray to create displacement maps. See
Creating mental ray Displacement Maps on page 103 for more
information.
Surface Approximation
For information on Surface Approximation, see Surface
Approximation on page 108.
Switching Between mental ray versions
Previewing with mental ray
• If the Active option is deselected, the default rendering settings are used.
Note: After you have specified the antialiasing settings, you can
toggle the parameters on and off by middle-clicking the
Active option.
Disabling Effects When Previewing and Rendering
Using Shaders
Shaders are the key to adding special rendering effects to
SOFTIMAGE|3D. A shader is a simple program written outside of
SOFTIMAGE|3D and then accessed through the mental ray interface
in SOFTIMAGE|3D. A shader is an opening in the architecture of
mental ray that lets you program rendering variables. Instead of
choosing a pre-programmed shader, you can write your own shader
to achieve quality improvements or performance optimizations, or
simply to create a rendering effect not available in the shaders that
accompany SOFTIMAGE|3D.
Once compiled, the parameters of the programmed shader can be
edited as easily as editing the standard shaders. For more
information, see the mental ray Programmer’s Reference Guide.
SOFTIMAGE|3D ships with a variety of shader libraries. There are
several areas in its interface for applying a shader, depending on the
type of shader it is. There are shaders for materials, volumes,
shadows, 2D and 3D textures, camera lenses, lights, and atmospheres.
The output shaders perform post-processing on the rendered image.
• Choose the Camera > Settings command to access lens shaders.
• Choose the Light > Define command to access light shaders.
• Choose the Material menu command to access material, volume,
and shadow shaders.
• Choose the Texture > 2D Global/2D Local command to access 2D
texture shaders.
• Choose the Texture > 3D Global/3D Local command to access 3D
texture shaders.
• Choose the Atmosphere > Depth-Fading command to access
volume shaders.
• Choose the Render menu command and select mental ray Options
to access output shaders.
This is a general procedure for selecting a shader once you have chosen
the appropriate command. The exception to this is output shaders:
1. To select the shader to apply, select the Shader option in the
mental ray area of the dialog box you have open.
2. The browser opens to the chapter for that type of shader.
3. Select one of the shaders and click the Load button. Its name
appears in the text box below the shader type’s name in the main
dialog box.
4. To edit the shader’s parameters, highlight the shader’s name in
the text box and click on the Edit button. The dialog box showing
all parameters for that shader appears.
5. If you want to save the shader by another name and then edit the
parameters, you can modify its name in the text box. This creates
a new shader with the new name, but with the current
parameters. You can then edit the new shader’s parameters as
described above.
The new shader can then be saved and recalled by name in other scenes.
Tip: Before previewing, go into the Preview Setup dialog box and
select mental ray as the rendering type.
null1 null2 null3 null4 — creates two lightning bolts: one between
null1 and null3, another between null2 and null4
unsetenv SGI_ABI
Raytracing Using mental ray
Acceleration Method
When choosing an acceleration method for using raytracing with
mental ray, it is wise to return to the findings that you made when you
initially evaluated your scene. The three choices are BSP tree, Ray
classification (mental ray 1.9 only), and Grid (mental ray 2.1 only).
A general rule for choosing between these methods is the overall
complexity of the scene. If the scene information shows that there are
fewer than 150,000 triangles, then the acceleration method should be
BSP tree (Binary Space Partitioning).
BSP Tree
With BSP tree, the scene is divided into cubes to reduce the number
of computations. Click the Set button beside this option to open the
BSP tree setup dialog box in which you can set the maximum depth
and size.
In conjunction with the maximum depth of the BSP tree, the
rendering process can be adjusted more finely by setting the
Maximum size, which limits the number of triangles calculated in
each cube. The default size is 4 (triangles).
Ray Classification
Note that Ray classification is available with the mental ray 1.9
rendering software only.
The other method of acceleration, Ray classification, deals with larger
scenes more efficiently than BSP tree. It provides better management
of memory consumption and can help avoid massive memory
swapping. Ray classification exploits the coherence of the rays.
Ray classification checks for intersections between every solid object
to define refraction or reflection.
Click the Setup button beside this option to open the Ray
classification setup dialog box. In it, there are three variables of
optimization to increase the speed of the ray classification.
The Visible option specifies the number of divisions a ray will make
in order to accurately describe the object’s surface.
The Shadow option allows you to specify the shadow ray space division.
The Memory option is a safety device that allows you to specify the
maximum amount of memory to be used for the data structures. The
default value is 10 megabytes.
Grid Acceleration
mental ray 2.1 includes an acceleration method called Grid. The Grid
acceleration method, like other mental ray rendering acceleration
methods, places the scene within a bounding box. The bounding box
size simply represents the volume that objects in the scene occupy.
For example, if you have a few small objects within a large bounding
area (in other words, the objects are far away from each other), you
should specify a smaller grid size. mental ray will have to create more
voxels, but the ray tracer will ultimately evaluate fewer of them.
On the other hand, if you have a large object or a scene full of objects,
you should increase the grid size, as there will likely be geometry in
many of the voxels.
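How a uniform grid buckets space can be sketched as follows. The voxel-indexing arithmetic is illustrative, not mental ray’s implementation; `grid_size` here means voxels per axis:

```python
# Sketch of a uniform grid: the scene bounding box is cut into equal
# voxels, and a point maps to a voxel by simple integer division.
# The grid size is the quality/memory trade-off discussed above.

def voxel_index(point, box_min, box_max, grid_size):
    """Return the (i, j, k) voxel containing `point`."""
    idx = []
    for p, lo, hi in zip(point, box_min, box_max):
        cell = (hi - lo) / grid_size
        i = int((p - lo) / cell)
        idx.append(min(max(i, 0), grid_size - 1))  # clamp points on the far face
    return tuple(idx)
```

A finer grid (smaller voxels) means more voxels to build, but each ray tests fewer triangles per voxel it passes through.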
4. Set the Min Samples and Max Samples values. This describes the
number of samples taken to compare surrounding colour values
which are averaged to define the colour of a pixel. The default
settings for Min Samples and Max Samples are -1 and 1,
respectively.
If the minimum value is zero, the pixel is sampled at least once. The
default minimum of -1 means that the picture will be subsampled
every four pixels (a 2 pixel-by-2 pixel square).
If the default sampling is used, what happens? mental ray takes two
samples, one on pixel 1 and one on pixel 3, compares them and if the
difference is greater than the contrast specified, it tries advancing the
level of samples. Now the new sampling examines pixel 1, pixel 2,
and pixel 3, looking again at the contrast level.
If the contrast is still too high between pixels, mental ray does a third
pass dividing each pixel into four subpixels within itself.
If this still does not meet the supersampling threshold, aliasing may
remain because the maximum of three sampling levels has been
reached. If this is the case and more antialiasing is required, you must
change the minimum or maximum values to raise the limit.
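The arithmetic behind these limits can be sketched as follows, assuming the usual convention that sampling level n means 2ⁿ × 2ⁿ samples per pixel (so the default range of -1 to 1 gives three levels):

```python
# Sketch of what the Min/Max Samples numbers mean. Level n gives
# 2**n x 2**n samples per pixel; level -1 is therefore one sample per
# 2x2 block of pixels, level 0 is one per pixel, and level 1 is four.

def samples_per_pixel(level):
    """Samples taken per pixel at a given level (fractional below 0)."""
    return (2.0 ** level) ** 2

def sampling_levels(min_samples, max_samples):
    """Total number of sampling levels available between the two limits."""
    return max_samples - min_samples + 1
```

With the defaults of -1 and 1, the renderer can refine from a quarter of a sample per pixel up to four samples per pixel before the limit stops it.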
5. Set the Threshold Adaptive Sampling sliders. These describe the
contrast level variable for deciding if another level of sampling
will occur.
The smaller the value for each colour (R, G, B), the more sampling
must be done to close the contrast gap between the rendered image
and the threshold settings, and the smoother the antialiasing.
6. Select the Filter Type. The sampling procedure can be
complemented in post-processing to ensure even more
antialiasing. Each of the three filter types (Gaussian, Box, and
Triangle) processes the subsamples surrounding and including the
pixel being rendered, using the height and width of the filter.
mental ray averages each pixel with its surrounding samples and
removes aliasing artifacts.
The Width and Height options for each of these types define the size
of the filter to be applied.
When you do select a filter, it is applied as an algorithm defining a
curve, peaking at the centre of the pixel sampled.
• The Gaussian filter uses a sloped curve weighting the sampling gently
at the top of the peak and towards the edge of the sampled area.
• The Box filter sums up all the samples in the filter area with an
equal weight.
• The Triangle filter uses a linear curve, applying the least weight to
samples at the edges of the sampled area.
(figure: filter weight curves for the Gaussian, Box, and Triangle filters, each peaking at 1 at the pixel centre)
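The three filter shapes can be sketched as one-dimensional weight functions over the filter radius (t = 0 at the pixel centre, t = 1 at the filter edge). The curves are illustrative; in particular, the Gaussian falloff constant mental ray actually uses is not documented here:

```python
import math

# Sketch of the three filter weight curves described above.

def box_weight(t):
    return 1.0 if abs(t) <= 1.0 else 0.0       # equal weight everywhere

def triangle_weight(t):
    return max(0.0, 1.0 - abs(t))              # linear falloff to the edge

def gaussian_weight(t, sharpness=3.0):
    return math.exp(-sharpness * t * t)        # smooth peak at the centre
```

The Width and Height options set how far t = 1 reaches, which is how the filter size controls how many surrounding subsamples contribute.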
Blurring a Moving Object Using mental ray
3D Motion Blur
To use mental ray motion blur on specific objects in a scene, follow these steps:
1. Select an object or objects in a scene.
2. Choose Info > Selection for a single object or Info > mental ray
for multiple objects (click the Render Setup button in the Info
dialog box).
3. Select the Motion Blur option in the appropriate dialog box:
Linear or Exact.
If you want the selected object to have a motion blur based only on a
translation keyframe, select the Linear option. If the object is
animated only in translation, using Linear can actually be an
optimization since only one motion vertex is used for the whole
object. Linear is mandatory for objects whose topology is not
consistent throughout the animation. This affects objects such as
meta-clay and animated Booleans.
If the desired effect requires motion blur by animation of rotation,
scale, cluster, shape, lattice, or materials, select the Exact option.
mental ray motion blur at lowest quality (left) and at medium quality (right)
4. Set the Motion Blur Quality sliders. The quality sliders range from 0 (full quality) to 1 (worst quality); the default setting is 0.2. Typically, all sliders are set to the same value.
Example
A few modifications have to be made to any object that you want to have motion blur. The following example blurs two primitive objects:
1. Get a default Dodecahedron and an Icosahedron.
2. Select the Dodecahedron and choose Boolean > Static. Click Ok
in the dialog box to confirm the other parameters. Select the
Icosahedron to create a union between these two objects.
3. The Boolean > Static command creates a completely new object, so you must hide the original objects by choosing Display > Hide > Unselected.
4. Choose Polygon > Automatic Colourize in the Matter module.
5. Choose the Material menu command and change the shading
model to Lambert.
6. Select the new “jewel” object and choose Info > Selection to open
the Polygon Info dialog box.
7. Change the Automatic Discontinuity value to 44, then click the
Render Setup button and change the Motion Blur option to
Exact. Click Ok to accept the set parameters in the mental ray
Render Setup dialog box and click Ok again to exit the Polygon
Info dialog box.
8. Choose the Render menu command. In the Render Setup dialog
box, select mental ray as the Rendering type, and then select
Motion Blur. In the mental ray Motion Blur dialog box, select
Active and set the Shutter Speed value to 1.
9. Click Accept to accept these changes and exit the Render Setup
dialog box.
10. In the playback box, change the Start frame to 0 and the End
frame to 30.
11. Spin the object: save a rotation keyframe at frame 0 using
SaveKey > Object > Rotation > All. Then go to frame 30, rotate
the object 180 degrees and save another keyframe.
12. Choose the Render menu command again. Set a resolution of at
least 200 pixels.
13. Set the mental ray Antialiasing Max Samples to at least 3. Lower
values produce a coarse result that probably won’t look very realistic.
14. Render the sequence using mental ray.
Tip: To get rid of the noisy look, increase the antialiasing level
and/or add a Blur function from the Antialiasing menu in
the Render Setup dialog box.
2D Motion Blur
In addition to the raytraced (3D) motion blur available for the
mental ray renderer, you can activate post-processing (2D) mental
ray motion blur from within SOFTIMAGE|3D. This feature is similar to
the mb standalone, which is also a post-processing effect (and which
you can still use with this version).
The 2D mental ray motion blur is much faster than the mb
standalone, although they produce similar results. Depending on the
effect and quality you want to put into your scene, 2D motion blur
may be a solution. At the very least, you can use 2D motion blur to
quickly render your scene, and then use 3D (raytraced) motion blur
for your final render.
6. In the mental ray Motion Blur dialog box, select the Active option.
7. Select the 2D option. When you activate 2D motion blur, you can
control the way the blur will look by selecting one of the
following blur types:
- Uniform: There is no decay in the blur. The blur’s “trail” is constant.
- Linear Decay: The blur decays, and its falloff is linear.
- Gaussian Decay: The blur decays, and its falloff is Gaussian: the blur has a contour that peaks at the centre and ramps off smoothly on both sides. This option produces the most visually accurate blur, although it is more subtle than Uniform or Linear.
8. You can increase the quality of the motion blur by increasing the number of Samples. Use the default setting of 1 at first to see the effect, and increase it moderately if necessary. Do not use a setting of less than 1, because 2D motion blur requires at least 1 sample to produce smooth blur effects.
Tip: If your object is moving slowly toward the camera, we
recommend a Sample setting of 1, which will give you a
smooth blur.
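The three decay types above describe how the weight along the blur's trail falls off. As a rough sketch (our own illustration of the described behaviour, not the renderer's code), with t running from 0 at the centre of the blur to 1 at the end of the trail:

```python
import math

def blur_trail_weight(t, decay="uniform"):
    # t is the normalized position along the blur trail:
    # 0.0 at the centre of the blur, 1.0 at the end of the trail.
    if decay == "uniform":
        return 1.0                     # constant trail, no decay
    if decay == "linear":
        return 1.0 - t                 # weight falls off linearly
    if decay == "gaussian":
        # Smooth falloff; the constant 4.0 is an arbitrary choice
        # for illustration, not a documented value.
        return math.exp(-4.0 * t * t)
    raise ValueError("unknown decay type: " + decay)
```

The Gaussian curve's smooth ramp is why that option reads as the most visually accurate, while Uniform keeps the trail at full strength throughout.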
Creating mental ray Shadow Maps
mental ray shadow map at a map resolution of 125 (left) and at the default map resolution of 500 (right)
Note that mental ray shadow maps are only available with the mental
ray 2.1 rendering software.
From Programming mental ray:
Shadow mapping is a technique that generates fast approximate
shadows. It can be used for fast previewing of models or as an
alternative to the more accurate (but also more costly) ray-tracing–
based approach in scenes where accurate shadows are not required.
Shadow maps are particularly efficient when a scene is rendered several
times without changes in the shadows (for example, an animation
where only the camera is moving).
A shadow map is a fast depth buffer rendering of the model as seen from
a light source. This means that each pixel in a shadow map contains
information about the distance to the nearest object in the model in a
particular direction from the light source. This information can be used
to compute shadows without using shadow rays to test for occluding
objects. The shadow computation is based only on the depth
information available in the shadow maps. For fast previewing of
scenes, shadow maps can be used in combination with scanline rendering.
mental ray soft shadow map at the default map resolution of 500 (left) and at the same resolution with antialiasing (0, 2) and adjusted Filter Size and Filter Step values of 50 and 15 (right)
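The depth comparison described in the quoted passage can be sketched in a few lines. This is a toy illustration of the principle, not mental ray's implementation; the function name and the bias value are ours:

```python
def in_shadow(distance_to_light, shadow_map, px, py, bias=0.05):
    # shadow_map[py][px] holds the depth-buffer value: the distance
    # from the light source to the nearest occluder in that direction.
    nearest_occluder = shadow_map[py][px]
    # The point is shadowed when an occluder lies between it and the
    # light; a small bias avoids false self-shadowing artifacts.
    return distance_to_light > nearest_occluder + bias
```

Because the test needs only a stored depth value, no shadow rays are cast, which is why shadow maps are so much cheaper than raytraced shadows.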
You can create softer mental ray shadow maps the same way you
create Softimage soft shadows, except that the Penumbra parameter
is not used by mental ray. Instead, the Filter Size parameter
determines the softness of the shadow’s edges.
Note: You shouldn’t mix area lights and mental ray shadow maps, since this combination will likely produce artifacts. The mental ray shadow map will be jittered by the area light, which will produce incorrect depth information and put shadows on the objects that in fact cast them.
1. Create a spot light.
2. Select Soft in the Define Light dialog box.
3. You can use the Umbra Intensity parameter to define the
intensity of the shadow (0 = black shadow, 1 = no shadow). Try
starting with an intensity of 0.2.
4. You can control the shadow’s resolution using the Map
Resolution parameter in conjunction with the Filter Size and
Filter Step parameters. For a definition of the Map Resolution
parameter, see step 3 in the previous section.
- Filter Size determines the size of the box filter, which is used to
soften the shadow’s edges. A good rule of thumb is to set the
number to about 10% of the Map Resolution. This parameter
identifies the amount of softness in the shadow.
- Filter Step determines the pixel offset, which is used to apply the
filter at a specific number of pixels from the previous application.
This parameter identifies the quality of the softness. The greater
the step, the better the result will be, and (of course) the longer
the render will be. A good rule of thumb here is to add 3 steps for
every 10% of the Filter size.
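The two rules of thumb above can be combined into a quick calculation. The helper below is our own sketch, not a SOFTIMAGE|3D function; it reproduces the values used in the earlier soft-shadow figure caption, where a Map Resolution of 500 pairs with a Filter Size of 50 and a Filter Step of 15:

```python
def suggested_soft_shadow_settings(map_resolution):
    # Rule of thumb from the text: Filter Size is about 10% of the
    # Map Resolution, and Filter Step adds 3 steps for every 10%
    # of the Filter Size.
    filter_size = map_resolution // 10
    filter_step = (filter_size // 10) * 3
    return filter_size, filter_step
```

These are starting points only; adjust both values by eye for your scene.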
Troubleshooting
Problems can occur with shadow maps when the object casting the
shadow touches the shadow; the rendered image may incorrectly show
the shadow starting on the object or starting a pixel or two away from
the object. To fix this problem, increase the Map Resolution.
If the back faces of the object are not influential in the scene, then
you could select only the Front face option. However, to ensure
the highest quality image, perhaps for the final render, you
should select the Both option. You would also select Both if you
are rendering a grid (such as a flag) where you need to see both
sides of the object.
Rendering Shadows
For render tests, you can turn off all reflectivity, refraction, and textures, and ensure that objects cast no shadows. Being able to turn off these components is advantageous because they are time-consuming to render. When you’re ready to see the reflectivity, refraction, textures, or shadows again, you can select them to make your scene more realistic looking.
1. Choose the Render menu command. The Render Setup dialog
box is displayed.
2. Select mental ray as the Rendering type.
3. Click the Options button. The mental ray Options dialog box appears.
4. Select Trace to activate the raytracer and/or Shadow to create shadows.
The raytracer is automatically selected if a lens shader is active.
A set of global switches for raytracing parameters is also located here.
Surface Approximation
It is possible to reduce the number of triangles in an object’s geometry and still render a very smooth surface. Large models used with parallel processing take longer to distribute and occupy unnecessary swap space.
For information on using Surface Approximation in conjunction
with displacement maps, see Controlling Displacement Map Quality
with Surface Approximation on page 104.
To set the surface definition for an object:
1. Select the object.
2. Choose the Info > Selection command and click the Render
Setup button.
3. In the mental ray Render Setup dialog box that appears, you can decrease the step size; the number of triangles then decreases drastically.
4. Select a type of surface approximation: Static or Adaptive (see
the next section for more information).
Depending on the surface approximation you choose, mental ray
tries to create the perfect curve between two points.
5. Set the Subdivision limits.
To prevent massive amounts of subdivision, a limit can be assigned to each selected object, which helps to optimize the quality of surfaces with areas of different curvatures.
A minimum (Min) amount guarantees a smooth curve, and a maximum (Max) amount sets the upper limit of computation.
6. Click Ok in this dialog box and the next to save your changes.
Changing these values also changes the number of steps in the object’s Info Selection dialog box. These parameters remain the same throughout the animation, so if the camera gets very close to the object, the steps might be visible.
This is the same as the method of surface approximation used by the
SOFTIMAGE renderer.
Adaptive
The Adaptive method of surface approximation takes into account the object’s size and proximity to the camera and adapts the number of steps to suit the object’s size in the render.
This is the best method for surface approximation for both NURBS
and patch surfaces.
The size of the step can be measured either as a number of pixels along the step’s length or in SOFTIMAGE|3D units, as defined by the grid setup.
Adaptive surface approximation can be accomplished by one of two
methods: Spatial or Curvature.
Spatial
The Spatial method, which measures the step in pixels, is more intuitive than defining the step’s length in system units. Along the object’s curve, each step is as long as the number of pixels specified in the text box.
(Diagram: a patch edge divided into steps; the step length is measured in pixels, and the chord length is the distance specified by the Adaptive options.)
If the step is longer than the specified number of units, the steps are
subdivided until they are equal to that number.
Curvature
The Curvature method uses the chord length, the distance between the step and the most distant arc of the curve, as the parameter by which steps are subdivided. This method is more effective when dealing with large flat areas that converge into curved surfaces.
It approximates the flat area with less tessellation than Spatial and
approximates the curved area with more subdivisions of steps where
necessary.
If the combination of the Chord Length and the Angle of the steps does not meet the parameters in the text boxes, the steps are subdivided.
Surface Normals
The Surface Normals option activates surface normal output and
creates an output file with the .n extension. It will have the same
resolution as the actual picture, except that instead of storing the
RGB value for each pixel, it stores the surface normal used for
rendering the triangle nearest the camera. Normal encoding is used
mainly for post-processing applications.
Z pic
The Z pic option activates depth picture output and creates an
output file with the extension .Zpic. This method is similar to
Normal encoding, except that it is the distance between the camera
and the nearest triangle that is stored instead of the RGB value.
• Edit displays the dialog box containing the parameters that define
the shader you have selected. You can then edit the parameters of
that shader. For example, if you are using the same shader two or
more times in the list, you can modify the parameters slightly for
each time it is used.
• Active makes the shader active or inactive, depending on the
current state of the shader. If a shader is active, it is highlighted.
• Move Up lets you rearrange the order of the shaders when you have
more than one active shader in the list. Since you can have more
than one lens shader active at the same time, it makes a difference
how you order them in the list. The shaders are processed starting
from the top of the list.
• Delete deletes the currently selected shader from the shader list.
• Set Name lets you change the name of an output shader. To change
names, select a shader from the list, modify the name in the Name
text box, and click Set Name. This creates a new shader with the
new name, but with the current parameters. You can edit these
parameters in the dialog box that appears when you click Edit (see
the previous description of Edit). The new shader can then be saved
and recalled by name in other scenes.
• The Name text box lets you save a shader by another name (see Set
Name above).
Example
1. Select an object or load a scene.
2. Choose the Render menu command. In the Render Setup dialog box, select mental ray as the Rendering Type and set the Start and End frames to 1.
3. Click the Options button. In the Output Shaders area of the
mental ray Options dialog box, click Select.
4. In the browser that appears, go to the /Shader_Lib/GLOWS
directory. Select DGlow as the Output shader and click Load.
5. With the DGlow shader selected, click Edit.
6. In the OZ-Diffusive Glow Postfilter dialog box, click the Select
button beside the Object list text box. The Object list allows you
to apply the DGlow shader to any or all objects in your scene.
Select the object you want and click Ok to exit the Object list.
Saving to a File
You can write the commands received by the raytracer to a file
instead of actually rendering the scene.
1. Choose the Render menu command. The Render Setup dialog
box is displayed.
2. Select mental ray as the Rendering type.
3. Click the Options button. The mental ray Options dialog box appears.
4. Select the Output to File option.
5. Enter a complete path name, or a relative path name, in which case the file is written relative to the current working directory. This output can then be edited and manually sent to the mental ray renderer from the IRIX command line.
If the Output to File per frame option is not selected, all frames are written to the same .mi file; by default, all information for the sequence of output frames goes into a single file. If the Output to File per frame option is selected, each frame in the sequence is written to a separate file.
File Naming
By convention, all file references that SOFTIMAGE|3D generates
in the scene description sent to mental ray (the MI file) follow the IRIX
format. This ensures that all the rendering slaves interpret the path
names the same way. Another reason is that the MI file is platform
independent so that an MI file created using a Windows NT version
of SOFTIMAGE|3D can be used on the IRIX version. The
SOFTIMAGE|3D to mental ray translator also converts the file
extensions to ensure cross-platform compatibility. The standard is in
IRIX format, so the file extensions related to dynamic libraries and
shader compilation are converted as follows:
.dll → .so
.lib → .a
.obj → .o
File Localization
The correspondence between the shaders’ libraries on Windows NT
and IRIX is made using the linktab.ini file mechanism since both
versions are installed differently on both platforms. The installation
script installs all the related components of SOFTIMAGE|3D in a
single directory tree. This tree is pointed to by the environment
variable SI_LOCATION. The following is the current directory
structure (in part) after installation on a Windows NT machine:
$SI_LOCATION\3D\
\bin
\rsrc
\flexlm
\setup
\mental_ray
\bin
\inc
\lib
\man
\MR_Shaders
The default installation on Windows NT puts this tree in
c:\SI_LOCATION\3D and in /usr/softimage/3D for the IRIX version.
Setup
All the Windows NT machines used as part of the distributed rendering, both the rendering client and the rendering slaves, must have a linktab.ini file located in the directory pointed to by the SI_LOCATION environment variable.
The linktab.ini file is an ASCII file that is used to define one-to-one
relations between IRIX and Windows NT paths. Each line of the
linktab.ini file represents such a relationship and contains two
entries. The first entry on the left is the Windows NT path, and the
entry on the right is the IRIX equivalent.
On the Windows NT machine, make sure that the SI_LOCATION
environment variable points to the location where SOFTIMAGE|3D
is installed. This variable should be present in the
%SI_LOCATION%\3D\bin\SETENV.BAT file. To modify or see the
contents of this file, click the User Tools option in the
SOFTIMAGE|3D program group.
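The paragraph below refers to the first and second lines of the linktab.ini file. A minimal example might look like the following; the exact paths are illustrative (built from the directory tree shown earlier), so substitute the locations of your own installation:

```
c:\SI_LOCATION\3D\mental_ray\MR_Shaders /usr/softimage/3D/mental_ray/MR_Shaders
c:\SI_LOCATION\3D\rsrc /usr/softimage/3D/rsrc
```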
The first line ensures that the shaders’ dynamic libraries are visible on all platforms. The second ensures the correspondence between the rsrc directory on IRIX and Windows NT, in particular that the softimage.mi and noIcon.pic files are looked up correctly. The noIcon.pic file is only used to replace pictures that are not found during rendering.
Repeat these steps for every Windows NT machine that you wish to
include in the distributed mental ray rendering.
Debugging
On a Windows NT machine, generate an MI file that contains a default sphere with a material shader on it (any material shader will do) and examine its contents. All file names should be in IRIX format. If any path is not in IRIX format, there is a typographical error in the linktab.ini file. To generate an MI file, choose the Render menu command in the Matter module. In the Render Setup dialog box, select the mental ray renderer and click Options. The MI file generation controls are located in the lower right corner of the dialog box.
2. Select or deselect the server hosts for the rendering job. Your
local .rayhost file is automatically updated so that the rendering
job will use the hosts you selected.
3. Render using the mental ray renderer.
Setting the MR_VERBOSE_FILE environment variable to a file name creates the text file in the current directory. Different renders are separated by dashed lines (-----) in the file.
You can also specify a full path for the file name:
set MR_VERBOSE_FILE=C:\users\maggie\mrerrorlog.txt
More verbosity!
mental ray can report more complete information if you specify the
following:
set SI_MI_TRACER2=c:\Softimage\etc\etc\ray2.exe -verbose 6
Example
If the .mi file contains frames 300 to 400 and you want to render
frames 330 to 360, you would enter the following:
ray2 Frame300-400.mi -render 30 60 1
mental ray will start rendering when it reaches the thirtieth frame
contained in the .mi file and will stop when it reaches the sixtieth
frame contained in the file.
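Assuming the frame count is zero-based, as the example implies (file frames 300 to 400, with offset 30 corresponding to frame 330), the -render offsets can be derived from the scene frame numbers. The helper below is our own illustration, not part of the product:

```python
def mi_render_offsets(first_frame_in_file, start_frame, end_frame):
    # The -render option counts frames by their position within the
    # .mi file, not by their scene frame numbers, so subtract the
    # first frame stored in the file.
    return (start_frame - first_frame_in_file,
            end_frame - first_frame_in_file)
```

For the example above, mi_render_offsets(300, 330, 360) yields the 30 and 60 passed to ray2.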
M
Materials
  optimizing rendering time 22
  previewing 19
mb standalone 53
Memory
  guidelines 23
  motion blur 54
  requirements 24
mental ray renderer 25, 67, 72
  acceleration method 85
  adaptive supersampling 89
  antialiasing 88
  BSP tree 85
  contour rendering 112
  depth computation 113
  displacement mapping 103
  faces 106
  motion blur 91
  multiple objects 69
  networks 67
  optimizing time 23
  output shaders 112, 113
  previewing 74
  ray classification 86
  rayhosts file 67
  raytracing 84
  saving raytracer commands to file 116
  script files 81
  setting up 70
  shaders 68, 79, 81
  shadows 107
  single objects 69
  surface approximation 108
  surface normals 113
  Z channels 113
.mi files 116
Modelling, optimizing rendering time 22
Motion blur 45, 52, 53
  alpha channel 54
  antialiasing 56
  calculating 53
  colour 56
  depth computation 54
  first/last frames 55
  growth and scale 56
  half blur 53
  high curvature movement 55
  inaccessible data 56
  limitations 55
  mb standalone 53
  memory 54
  mental ray 91
  rotation 55
  transparent objects 56
N
Network rendering 67
Normals 7
  mental ray 113
O
Optimizing rendering time 22
Output shaders 112, 113
  setting up 112
  surface normals 113
P
Pre and post-frame scripts 58
Previewing 19
  .lin files 20
  mental ray 74
  subregions 19
Projection, camera 13
R
Ray classification 86
rayhosts file 67
Raytracing 46
  acceleration method 85
  mental ray 84
  reflection mapping 48
  saving commands to file 116
Reflection mapping 48
  optimizing rendering time 22